
AI/ML Architect with Databricks

Job in Los Angeles, Los Angeles County, California, 90079, USA
Listing for: Data Freelance Hub
Full Time position
Listed on 2026-02-16
Job specializations:
  • IT/Tech
    Data Engineer, Machine Learning/ML Engineer, Data Science Manager, AI Engineer
Salary/Wage Range or Industry Benchmark: $125,000 to $150,000 USD per year
Job Description & How to Apply Below
Position: AI/ML Architect with Databricks

Overview

We are seeking a skilled AI/ML Architect with hands-on experience in Databricks to join our team. The ideal candidate has strong analytical capabilities, experience building scalable data pipelines and machine learning models, and the ability to collaborate with cross-functional teams to drive data-driven decision-making. This role involves working with large datasets, advanced analytics, and modern data engineering and ML frameworks—primarily using Databricks on Azure/AWS.

Required Qualifications
  • Bachelor’s degree or higher in Computer Science, Data Science, Mathematics, Statistics, Engineering, or related field.
  • 3+ years of experience in data science or machine learning roles.
  • Advanced knowledge of Databricks, including:
      • PySpark / Spark SQL
      • Databricks notebooks
      • Delta Lake
      • MLflow
      • Databricks Jobs & Workflows
  • Strong programming skills in Python (pandas, numpy, scikit-learn).
  • Experience working with large-scale data processing.
  • Solid understanding of machine learning algorithms and statistical techniques.
Key Responsibilities
  • Develop, train, and optimize machine learning and statistical models using Databricks, Python, PySpark, and MLflow.
  • Perform exploratory data analysis (EDA) to identify trends, patterns, and insights in large datasets.
  • Deploy ML models into production using Databricks MLflow, Delta Live Tables, or other MLOps pipelines.
  • Conduct A/B testing, forecasting, segmentation, anomaly detection, or recommendation systems as required by the business.
  • Build scalable, high-performance ETL/ELT pipelines using PySpark, SQL, and Databricks workflows.
  • Work with Delta Lake to ensure high-quality, reliable, and performant data.
  • Optimize cluster usage and job performance within the Databricks environment.
  • Collaborate with data engineers to ensure high-quality data availability for modeling.
  • Translate business problems into analytical solutions and present findings to non-technical stakeholders.
  • Partner with product, engineering, and business teams to drive data-informed decisions.
  • Communicate complex statistical concepts in a clear and concise manner.
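To illustrate the kind of analysis listed above (anomaly detection during EDA), here is a minimal sketch using plain numpy; the function name, data, and threshold are hypothetical, and a production version would run the same logic at scale in PySpark on Databricks rather than in-memory:

```python
import numpy as np

def zscore_anomalies(values, threshold=3.0):
    """Flag points whose absolute z-score exceeds the threshold.

    A toy stand-in for the anomaly-detection responsibility in this
    posting; on Databricks this would be expressed over a Spark
    DataFrame instead of a numpy array.
    """
    values = np.asarray(values, dtype=float)
    mean, std = values.mean(), values.std()
    if std == 0:
        # All points identical: nothing can be anomalous.
        return np.zeros(values.shape, dtype=bool)
    z = np.abs(values - mean) / std
    return z > threshold

# Example: one obvious outlier among otherwise similar readings.
data = [10, 11, 9, 10, 12, 10, 11, 300]
mask = zscore_anomalies(data, threshold=2.0)
```

Here `mask` marks only the final value (300) as anomalous; the threshold would be tuned to the business tolerance for false positives.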
Preferred
  • Experience deploying models in production using MLOps frameworks.
  • Knowledge of Azure Databricks or AWS Databricks environments.
  • Understanding of CI/CD pipelines and DevOps concepts (Azure DevOps, GitHub Actions, etc.)
  • Familiarity with deep learning frameworks (TensorFlow, PyTorch) is a plus.
Key Competencies
  • Strong analytical and problem-solving skills
  • Ability to work in a fast-paced, collaborative environment
  • Excellent communication and presentation skills
  • Self-driven with high attention to detail
Job Details

Job Type: Full-time, Contract

Pay: $125,000 to $150,000 per year

Work Location:

In person

Freelance data hiring powered by an engaged, trusted community — not a CV database.
