
Senior Software Engineer, Platform - Data + AI; Back-End

Job in Redwood City, San Mateo County, California, 94061, USA
Listing for: C3 AI
Full Time position
Listed on 2026-01-01
Job specializations:
  • Software Development
    AI Engineer, Machine Learning / ML Engineer
Job Description & How to Apply Below
Senior Software Engineer, Platform - Data + AI (Back-End)

Apply for the Senior Software Engineer, Platform - Data + AI (Back-End) role at C3 AI.

C3 AI (NYSE: AI) is the Enterprise AI application software company. C3 AI delivers a family of fully integrated products: the C3 Agentic AI Platform, an end‑to‑end platform for developing, deploying, and operating enterprise AI applications; C3 AI applications, a portfolio of industry‑specific SaaS enterprise AI applications that enable the digital transformation of organizations globally; and C3 Generative AI, a suite of domain‑specific generative AI offerings for the enterprise.

C3 AI is looking for Senior Software Engineers to join the rapidly growing Data org within the Platform Engineering department. Successful candidates will get the opportunity to work on high‑value technologies at the intersection of large‑scale distributed systems, data infrastructure, and machine learning. You will design, develop, and maintain various features in a highly scalable and extensible AI/ML platform for large‑scale applications, involving data science, distributed systems, and multi‑cloud strategy.

You will be given opportunities to take ownership of components, collaborate to drive technical direction, and work on interesting, impactful projects. Join us in building the next‑generation AI/ML platform at petabyte scale that powers some of the world’s largest companies in Energy, Financial Services, Utilities, Health Care, Aerospace, Defense, and more. Accelerate your career at the leading enterprise AI company on a hyper‑growth trajectory.

Responsibilities

• Design and develop infrastructure and services to enable data pipelines at petabyte scale and beyond.

• Design and develop abstractions over data stores such as Cassandra, PostgreSQL, Snowflake, etc.

• Design and develop file system abstractions over AWS S3, Azure Blobs, HDFS, etc.

• Design and develop connectors to various external data stores.

• Design and develop distributed system components for stream processing, queueing, batch processing, analytics engines, etc.

• Develop and maintain industry‑leading, high‑performance APIs for AI/ML applications.

• Develop and maintain features for distributed computations over large‑scale data for ML workflows.

• Design and develop ML‑specific data systems such as feature stores, and behavioral frameworks such as recommendation engines.

• Design and develop integrations with distributed computing technologies such as Apache Spark, Ray, etc. for data exploration and ML workload orchestration.

• Design and develop integrations with data analysis libraries such as Pandas, Koalas, etc.

• Develop and productionize AI/ML models for failure prediction, data schema inferencing, etc.

• Work on frameworks for performance, scalability, and reliability tracking over different components of a highly extensible AI/ML platform.

• Work with architects, product managers, and software engineers across teams in a highly collaborative environment.

• Participate in technical discussions and provide insights.

• Write clean code following a test‑driven methodology.

• Deliver on commitments promptly, following an agile software development methodology.
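To give candidates a concrete sense of the storage‑abstraction work described above, here is a minimal, illustrative sketch in Python. The `BlobStore` and `InMemoryStore` names are hypothetical and not C3 AI's actual API; a production backend would wrap a real client for AWS S3, Azure Blobs, or HDFS behind the same interface.

```python
from abc import ABC, abstractmethod


class BlobStore(ABC):
    """Minimal file-system abstraction over object storage.

    Real implementations would wrap S3, Azure Blobs, or HDFS clients;
    callers depend only on this interface, so swapping clouds is a
    configuration change rather than a code change.
    """

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

    @abstractmethod
    def exists(self, key: str) -> bool: ...


class InMemoryStore(BlobStore):
    """Stand-in backend for local testing; an S3-backed store would
    issue put_object/get_object calls here instead."""

    def __init__(self) -> None:
        self._objects: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]

    def exists(self, key: str) -> bool:
        return key in self._objects


# Pipeline code is written against BlobStore, never a concrete backend.
store: BlobStore = InMemoryStore()
store.put("datasets/train.parquet", b"\x00\x01")
```

This is only a sketch of the design pattern, a narrow interface over heterogeneous storage backends, not a description of the platform's internals.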

Qualifications

• Bachelor of Science in Computer Science, Computer Engineering, or a related field.

• Strong understanding of Computer Science fundamentals.

• High proficiency in coding with Java, C++, C#, or another compiled language; Python is also acceptable.

• Strong competency in object‑oriented programming, data structures, algorithms, and software design patterns.

• Experience with version control systems such as Git.

• Experience with large‑scale distributed systems.

• Experience with any public cloud platform (AWS, Azure, GCP).

• Some familiarity with distributed computing technologies (e.g., Hadoop, Spark, Kafka). Familiarity with managed versions of these technologies on public cloud platforms is also acceptable.

• Familiarity with technologies in the modern data science/analysis and engineering ecosystem (e.g., Pandas, Koalas).

• Strong verbal and written technical communication skills to facilitate collaboration.

• Thrive in a fast‑paced,…
Position Requirements
10+ Years work experience