Senior AI/ML Engineer
Listed on 2026-01-16
Software Development
AI Engineer, Machine Learning / ML Engineer
Position:
Senior AI/ML Engineer
Location:
San Ramon, CA / Redwood City, CA (2-3 days onsite)
We are seeking a Senior AI/ML Engineer to design, build, and deploy large-scale, production-grade machine learning systems powering our Personal AI & Human Connection intelligence platform. You will architect and implement end‑to‑end ML pipelines, from data ingestion and feature engineering to model training, deployment, and monitoring, ensuring reliability, scalability, and compliance with personal data standards.
The ideal candidate combines strong software engineering fundamentals with deep experience in machine learning, MLOps, and cloud‑native systems (AWS/Google Cloud Platform). You will work closely with product, data, and executive teams to translate real‑world challenges into impactful AI solutions for our users.
Responsibilities
- Design and implement end‑to‑end ML pipelines: data ingestion, preprocessing, model training, validation, and deployment.
- Develop and productionize ML models for prediction, classification, and recommendation tasks across complex workflows.
- Build automated retraining and evaluation pipelines using tools like AWS SageMaker, Vertex AI, Kubeflow, or MLflow.
- Develop and maintain feature stores and data transformation pipelines using Spark, PyTorch, or TensorFlow.
- Deploy and serve models via REST/gRPC endpoints or Lambda‑based inference APIs with high availability and low latency.
- Collaborate with backend engineers to integrate models into production microservices and event‑driven workflows.
- Implement model observability and drift detection, logging key metrics (accuracy, latency, bias, cost).
- Ensure all pipelines and models meet security, privacy, and compliance standards (HIPAA, SOC 2).
- Write clean, testable code; create unit, integration, and performance tests for all critical ML components.
- Partner with product and data science teams to refine model goals, define success metrics, and optimize inference performance.
Requirements
- Bachelor's or Master's degree in Computer Science, Machine Learning, or a related field.
- 6 years of hands‑on experience building, training, and deploying ML models in production.
- Expert in Python and ML libraries/frameworks: PyTorch, TensorFlow, scikit‑learn.
- Strong experience with the AWS AI/ML stack (S3, SageMaker, Lambda, Step Functions, ECS, DynamoDB, RDS).
- Proficiency in data pipelines (Airflow, Spark, Glue) and feature engineering for large, structured/unstructured datasets.
- Familiarity with Docker/Kubernetes and deploying ML services via CI/CD.
- Deep understanding of ML lifecycle management and versioning (MLflow, DVC, Weights & Biases).
- Knowledge of prompt engineering, RAG pipelines, and LLM integration is a plus.
- Experience handling sensitive data (PII), with strong focus on security, encryption, and compliance.
- Excellent debugging and optimization skills across model performance, latency, and infrastructure cost.