AI Platform Engineer - Databricks Mexico

Job in Roswell, Chaves County, New Mexico, 88202, USA
Listing for: NEORIS
Full Time position
Listed on 2026-02-12
Job specializations:
  • IT/Tech
    Cloud Computing, Data Engineer
Salary/Wage Range or Industry Benchmark: USD 250,000 per year
Job Description & How to Apply Below

NEORIS is a digital accelerator that helps companies enter the future, with 20 years of experience as digital partners for some of the world's largest companies. We are more than 4,000 professionals in 11 countries, with our multicultural startup culture where we cultivate innovation and continuous learning to create high-value solutions for our clients.

We are looking for an AI Platform Engineer - Databricks.

Main responsibilities
  • Design and implement scalable Databricks platform solutions to support analytics, ML, and GenAI workflows across environments (dev/test/prod).
  • Administer and optimize Databricks workspaces: cluster policies, pools, job clusters vs. all-purpose clusters, autoscaling, spot/fleet usage, and GPU/accelerated compute where applicable.
  • Implement Unity Catalog governance: define metastores, catalogs, schemas, data sharing, row/column masking, lineage, and access controls; integrate with enterprise identity and audit.
  • Build IaC for reproducible platform provisioning and configuration using Terraform; manage config-as-code for cluster policies, jobs, repos, service principals, and secret scopes.
  • Implement CI/CD for notebooks, libraries, DLT pipelines, and ML assets; automate testing, quality gates, and promotion across workspaces using GitHub Actions and Databricks APIs.
  • Standardize experiment structure, implement model registry workflows, and deploy/operate model serving endpoints with monitoring and rollback.
  • Develop and optimize Delta Lake pipelines (batch and streaming) using Auto Loader, Structured Streaming, and DLT; enforce data quality and SLAs with expectations and alerts.
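
As a sketch of the config-as-code responsibilities above, the fragment below uses the databricks Terraform provider to define a cluster policy and a secret scope. The policy values and resource names are illustrative assumptions, not taken from the posting:

```hcl
terraform {
  required_providers {
    databricks = {
      source = "databricks/databricks"
    }
  }
}

# Hypothetical policy: cap autoscaling and prefer spot capacity
# with on-demand fallback, per the cost-optimization duties above.
resource "databricks_cluster_policy" "etl" {
  name = "etl-standard"
  definition = jsonencode({
    "autoscale.max_workers"       = { type = "range", maxValue = 10 }
    "aws_attributes.availability" = { type = "fixed", value = "SPOT_WITH_FALLBACK" }
  })
}

# Workspace-managed secret scope for pipeline credentials.
resource "databricks_secret_scope" "pipelines" {
  name = "pipelines"
}
```

Policies like this one are then referenced from job clusters so every team inherits the same cost and compute guardrails.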
Requirements
  • Proficient in cloud operations on AWS, with strong understanding of scaling infrastructure and optimizing cost/performance.
  • Proven hands‑on experience with Databricks on AWS: workspace administration, cluster and pool management, job orchestration (Jobs/Workflows), repos, secrets, and integrations.
  • Strong experience with Databricks Unity Catalog: metastore setup, catalogs/schemas, data lineage, access control (ACLs, grants), attribute‑based access control, and data governance.
  • Expertise in Infrastructure as Code for Databricks and AWS using Terraform (databricks and aws providers) and/or AWS CloudFormation; experience with Databricks Asset Bundles or the CLI is a plus.
  • Experience implementing CI/CD and GitOps for notebooks, jobs, and ML assets using GitHub and GitHub Actions (or GitLab/Jenkins), including automated testing and promotion across workspaces.
  • Ability to structure reusable libraries, package and version code, and enforce quality via unit/integration tests and linting. Proficiency with SQL for Lakehouse development.
  • Experience with experiment tracking, model registry, model versioning, approval gates, and deployment to batch/real‑time endpoints (Model Serving).
  • AWS IAM/STS, PrivateLink/VPC, KMS encryption, Secrets, SSO/SCIM provisioning, and monitoring/observability (CloudWatch/Datadog/Grafana).
  • Experience with DevOps practices to enable automation strategies and reduce manual operations.
  • Experience with, or awareness of, MLOps practices; building pipelines to accelerate and automate machine learning will be viewed favorably.
  • Excellent communication, cross‑functional collaboration, and stakeholder management skills.
  • Detail‑oriented, proactive, able to work independently and within a distributed team.
  • Hands‑on Databricks administration on AWS, including Unity Catalog governance and enterprise integrations.
  • Strong AWS foundation: networking (VPC, subnets, SGs), IAM roles and policies, KMS, S3, CloudWatch; EKS familiarity is a plus but not required for this Databricks‑focused role.
  • Proficiency with Terraform (including the databricks provider), GitHub, and GitHub Actions.
  • Strong Python and SQL; experience packaging libraries and working with notebooks and repos.
  • Experience with MLflow for tracking and model registry; experience with model serving endpoints preferred.
  • Familiarity with Delta Lake, Auto Loader, Structured Streaming, and DLT.
  • Experience implementing DevOps automation and runbooks; comfort with REST APIs and the Databricks CLI.
  • Git and GitHub proficiency; code review and branching strategies.
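
As a sketch of the REST-API automation the requirements above call for, the snippet below builds a Databricks Jobs API 2.1 create-job payload for a notebook task on an ephemeral job cluster and prepares the HTTP request. The workspace host, token, notebook path, and node type are placeholder assumptions:

```python
import json
import urllib.request


def build_job_payload(name: str, notebook_path: str, num_workers: int = 2) -> dict:
    """Build a Jobs API 2.1 create-job payload for a notebook task
    running on a job cluster (cheaper than an all-purpose cluster)."""
    return {
        "name": name,
        "tasks": [
            {
                "task_key": "main",
                "notebook_task": {"notebook_path": notebook_path},
                "new_cluster": {
                    # Illustrative values; pick these per workspace policy.
                    "spark_version": "14.3.x-scala2.12",
                    "node_type_id": "m5.xlarge",
                    "num_workers": num_workers,
                    "aws_attributes": {"availability": "SPOT_WITH_FALLBACK"},
                },
            }
        ],
    }


def create_job_request(host: str, token: str, payload: dict) -> urllib.request.Request:
    """Prepare the POST /api/2.1/jobs/create request (send with urlopen)."""
    return urllib.request.Request(
        f"{host}/api/2.1/jobs/create",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


payload = build_job_payload("nightly-etl", "/Repos/platform/etl/main")
print(json.dumps(payload, indent=2))
```

In practice the same call is usually driven from a GitHub Actions workflow or the Databricks CLI rather than raw urllib, but the payload shape is identical.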
We offer
  • 100% Payroll Scheme (fully registered salary)
  • Statutory Benefits
  • Grocery Vouchers
  • Additional Benefits
  • Wellness Program
  • Professional Development Plan

We invite you to get to know us on Facebook, LinkedIn, Twitter, or Instagram: @NEORIS.
