Senior Data Platform Engineer

Job in Tempe, Maricopa County, Arizona, 85285, USA
Listing for: Interactive Resources - iR
Full Time position
Listed on 2026-02-16
Job specializations:
  • IT/Tech
    Data Engineer, Cloud Computing
Salary/Wage Range or Industry Benchmark: 80,000 – 100,000 USD per year
Job Description & How to Apply Below

We are seeking a seasoned Databricks Data Engineer with deep expertise in Azure cloud services and the Databricks Lakehouse platform. In this role, you will design and optimize large-scale data pipelines, modernize cloud-based data ecosystems, and enable secure, governed data solutions that support analytics, reporting, and advanced data use cases.

This is an opportunity to work on complex, enterprise-scale data platforms while collaborating with cross-functional teams to deliver reliable, scalable, and high-quality data products.

What You Will Do
  • Design, build, and optimize large-scale data pipelines on the Databricks Lakehouse platform with a focus on reliability, scalability, and governance
  • Modernize an Azure-based data ecosystem, contributing to cloud architecture, distributed data engineering, data modeling, security, and CI/CD automation
  • Develop and optimize ETL/ELT pipelines using Python, PySpark, Spark SQL, and Databricks notebooks (see the first sketch after this list)
  • Design and optimize Delta Lake data models for performance, scalability, and data quality
  • Implement and manage Unity Catalog for role-based access control, lineage, governance, and secure data sharing (see the Unity Catalog sketch after this list)
  • Build reusable frameworks using Databricks Workflows, Repos, and Delta Live Tables
  • Create scalable ingestion pipelines for APIs, databases, files, streaming sources, and master data systems
  • Automate API ingestion and workflows using Python and REST APIs (see the API-ingestion sketch after this list)
  • Utilize Apache Airflow or similar tools for orchestration and workflow automation
  • Support data governance, metadata management, cataloging, and lineage initiatives
  • Enable downstream consumption for BI, analytics, data science, and application workloads
  • Write and optimize complex SQL/T‑SQL queries, stored procedures, and curated datasets
  • Automate deployments, DevOps workflows, testing pipelines, and workspace configurations
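
To give a concrete sense of the pipeline work above, here is a minimal sketch in Python/PySpark. It is illustrative only, not this team's actual code: the storage path, column names, and the main.curated.orders table are hypothetical, and it assumes a Databricks runtime where Delta Lake is available.

    from pyspark.sql import SparkSession, functions as F

    # On Databricks a SparkSession named `spark` already exists;
    # getOrCreate() simply returns it there.
    spark = SparkSession.builder.appName("orders-etl").getOrCreate()

    # Hypothetical raw landing zone (path and schema are placeholders).
    raw = (
        spark.read
        .option("header", "true")
        .csv("abfss://raw@examplestorage.dfs.core.windows.net/orders/")
    )

    # Basic cleansing and typing before curation.
    cleaned = (
        raw
        .withColumn("order_ts", F.to_timestamp("order_ts"))
        .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
        .dropDuplicates(["order_id"])
    )

    # Write a curated Delta table (catalog and schema names are made up).
    (
        cleaned.write
        .format("delta")
        .mode("overwrite")
        .saveAsTable("main.curated.orders")
    )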
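
The Unity Catalog work above is largely driven through SQL grants. A hedged sketch, issued through PySpark so it can run from a notebook or job: the catalog, schema, table, and group names are made up, and the statements follow standard Databricks SQL privilege syntax (USE CATALOG, USE SCHEMA, SELECT).

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("uc-grants").getOrCreate()

    # Hypothetical principal and securables; in practice these would come
    # from the platform's governance model.
    statements = [
        "GRANT USE CATALOG ON CATALOG main TO `analytics-readers`",
        "GRANT USE SCHEMA ON SCHEMA main.curated TO `analytics-readers`",
        "GRANT SELECT ON TABLE main.curated.orders TO `analytics-readers`",
    ]

    # Apply each grant against the metastore.
    for stmt in statements:
        spark.sql(stmt)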
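
The API-ingestion bullet is commonly automated with plain Python and a REST client. A small sketch under assumptions: the endpoint, token handling, and the main.raw.customers_api table are placeholders, and the response is assumed to be a flat JSON array.

    import requests
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("api-ingest").getOrCreate()

    # Placeholder endpoint and token; a real pipeline would read the token
    # from a Databricks secret scope rather than hard-coding it.
    resp = requests.get(
        "https://api.example.com/v1/customers",
        headers={"Authorization": "Bearer <token-from-secret-scope>"},
        timeout=30,
    )
    resp.raise_for_status()
    records = resp.json()  # assumed: a list of flat JSON objects

    # Land the payload in a raw Delta table for downstream curation.
    df = spark.createDataFrame(records)
    df.write.format("delta").mode("append").saveAsTable("main.raw.customers_api")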
Required Experience
  • 8+ years of experience designing and developing scalable data pipelines in modern data warehousing or lakehouse environments
  • Strong end-to-end ownership of data engineering solutions, from design through production support
  • Advanced proficiency in SQL, Python, and ETL/ELT frameworks, including performance tuning and optimization
  • Hands‑on experience with Azure, Databricks, and cloud-based data platforms
  • Experience integrating data platforms with enterprise systems and downstream analytics tools
  • Ability to lead and coordinate data initiatives across cross‑functional and matrixed teams
  • Experience working with regulated or highly governed datasets is a plus
What We Offer
  • Competitive compensation and comprehensive benefits package
  • 401(k) and health insurance options
  • Collaborative, supportive team environment focused on technical excellence
  • Opportunities for professional development, training, and long‑term career growth
  • Tuition reimbursement for qualified education and certification expenses
Position Requirements
10+ years of work experience