
AI​/ML Supply Chain Engineer

Job in San Diego, San Diego County, California, 92189, USA
Listing for: QuidelOrtho
Full Time position
Listed on 2026-02-16
Job specializations:
  • IT/Tech
    Data Engineer, AI Engineer
Salary/Wage Range or Industry Benchmark: 100,000 - 125,000 USD yearly
Job Description & How to Apply Below

Overview

QuidelOrtho unites the strengths of Quidel Corporation and Ortho Clinical Diagnostics, creating a world-leading in vitro diagnostics company with award-winning expertise in immunoassay and molecular testing, clinical chemistry and transfusion medicine. We are more than 6,000 strong and do business in over 130 countries, providing answers with fast, accurate and consistent testing where and when they are needed most - home to hospital, lab to clinic.

Our culture puts our team members first and prioritizes actions that support happiness, inspiration and engagement. We strive to build meaningful connections with each other as we believe that employee happiness and business success are linked. Join us in our mission to transform the power of diagnostics into a healthier future for all.

The Role

At QuidelOrtho, we're advancing the power of diagnostics for a healthier future for all. Join our mission as our next AI/ML Supply Chain Engineer. You will be responsible for designing, building, and optimizing data pipelines and infrastructure using Databricks to support AI and machine learning (ML) initiatives. This role involves working closely with business stakeholders to identify high-value AI/ML use cases and translating business requirements into technical solutions.

The engineer will work to ensure the successful deployment of AI/ML solutions at scale, leveraging Azure services and Databricks tools.

This position will work hybrid out of our Summers Ridge headquarters office in San Diego, CA.

Responsibilities
  • Work directly with business stakeholders to identify and define AI/ML use cases, translating business needs into technical requirements.
  • Design, develop, and optimize scalable data pipelines in Databricks for AI/ML applications, ensuring efficient data ingestion, transformation, and storage.
  • Build and manage Apache Spark-based data processing jobs in Databricks, ensuring performance optimization and resource efficiency.
  • Implement ETL/ELT processes and orchestrate workflows using Azure Data Factory, integrating various data sources such as Azure Data Lake, Blob Storage, and Microsoft Fabric.
  • Collaborate with Data Engineering teams to meet data infrastructure needs for model training, tuning, and deployment within Databricks and Azure Machine Learning.
  • Monitor, troubleshoot, and resolve issues within Databricks workflows, ensuring smooth operation and minimal downtime.
  • Implement best practices for data security, governance, and compliance within Databricks and Azure environments.
  • Automate data and machine learning workflows using CI/CD pipelines through Azure DevOps.
  • Maintain documentation of workflows, processes, and best practices to ensure knowledge sharing across teams.
  • Perform other work-related duties as assigned.
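Much of the pipeline work described above is record-level ingestion and cleansing. As a rough illustration only (the posting does not specify any schema), here is a minimal sketch of the kind of transformation step such a pipeline might contain; the field names and cleaning rules are hypothetical, and in Databricks this logic would typically run inside a Spark job rather than on plain Python dicts:

```python
# Illustrative transform step for a supply-chain ingestion pipeline.
# Field names (shipment_id, quantity, warehouse) are hypothetical examples,
# not taken from the posting. In Databricks this would usually be expressed
# as a Spark DataFrame transformation; plain Python keeps the sketch
# self-contained.

def clean_shipment_record(record: dict) -> dict:
    """Normalize one raw shipment row before it is written to storage."""
    return {
        "shipment_id": record["shipment_id"].strip().upper(),
        "quantity": max(0, int(record.get("quantity", 0))),  # guard against bad feeds
        "warehouse": record.get("warehouse", "UNKNOWN").strip(),
    }

raw = {"shipment_id": " sd-1042 ", "quantity": "7", "warehouse": "San Diego "}
print(clean_shipment_record(raw))
```

A step like this would sit between ingestion (e.g. from Azure Data Lake or Blob Storage) and the Delta Lake write, with orchestration handled by Azure Data Factory.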
The Individual

Required:

  • This position is not currently eligible for visa sponsorship.
  • Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
  • 3+ years of experience in data engineering, with a strong focus on Databricks and AI/ML applications.
  • Proven experience working directly with business stakeholders to identify and implement AI/ML use cases.
  • Expertise in Apache Spark and hands-on experience with Databricks for building and optimizing data pipelines.
  • Strong programming skills in Python and Scala for data engineering and machine learning workflows in Databricks.
  • Experience with Azure Data Factory, Azure Data Lake, Azure Blob Storage, and Azure Synapse Analytics.
  • Proficiency with Databricks Delta Lake for data reliability and performance optimization.
  • Familiarity with MLflow and Databricks Runtime for Machine Learning for model management and deployment.
  • Knowledge of Azure DevOps for implementing CI/CD pipelines in Databricks-based projects.
  • Strong understanding of data governance, security practices, and compliance requirements in cloud environments.
  • Familiarity with emerging Databricks features such as Delta Live Tables and Unity Catalog.
Preferred:
  • Experience with real-time data processing using Apache Kafka or Azure Event Hubs.
The Key Working Relationships

Internal Partners:

  • Regular collaboration with business stakeholders to identify and…