
AWS Databricks Data Engineer

Job in Los Angeles, Los Angeles County, California, 90079, USA
Listing for: E-Solutions
Full Time position
Listed on 2026-02-18
Job specializations:
  • IT/Tech
    Data Engineer, Cloud Computing
Salary/Wage Range: USD 80,000 - 100,000 per year
Job Description & How to Apply Below

We are seeking a highly skilled AWS Data Engineer with strong expertise in SQL, Python, PySpark, Data Warehousing, and Cloud-based ETL to join our data engineering team. The ideal candidate will design, implement, and optimize large-scale data pipelines, ensuring scalability, reliability, and high performance. This role requires close collaboration with cross-functional teams and business stakeholders to deliver modern, efficient data solutions.

Key Responsibilities
1. Data Pipeline Development
  • Build and maintain scalable ETL/ELT pipelines using Databricks on AWS (a minimal sketch follows this subsection).
  • Leverage PySpark/Spark and SQL to transform and process large, complex datasets.
  • Integrate data from multiple sources, including S3, relational/non-relational databases, and AWS-native services.
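For concreteness, here is a minimal sketch of this kind of pipeline, assuming a Databricks-on-AWS workspace where a SparkSession is already available; the S3 path, column names, and target table are hypothetical placeholders, not details from this posting.

```python
# Minimal ETL sketch: raw JSON on S3 -> cleaned Delta table.
# All paths and names below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks

# Ingest raw JSON events from S3 (hypothetical bucket/prefix).
raw = spark.read.format("json").load("s3://example-bucket/events/2026/02/")

# Light transformation: normalize timestamps and drop obvious duplicates.
cleaned = (
    raw.withColumn("event_ts", F.to_timestamp("event_ts"))
       .dropDuplicates(["event_id"])
)

# Persist as a Delta table for downstream BI/analytics consumers.
cleaned.write.format("delta").mode("append").saveAsTable("analytics.events_cleaned")
```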
2. Collaboration & Delivery
  • Partner with downstream teams to prepare data for dashboards, analytics, and BI tools.
  • Work closely with business stakeholders to understand requirements and deliver tailored, high-quality data solutions.
3. Performance & Optimization
  • Optimize Databricks workloads for cost, performance, and efficient compute utilization.
  • Monitor and troubleshoot pipelines to ensure reliability, accuracy, and SLA adherence.
  • Apply query optimization, Spark tuning, and shuffle-minimization best practices when handling tens of millions of rows (see the sketch below).
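The sketch below illustrates two standard tuning levers implied here: adaptive query execution and broadcast joins. The table names and partition column are hypothetical; the configuration key is a standard Spark setting.

```python
# Shuffle-minimization sketch: broadcast the small side of a join and let
# AQE coalesce shuffle partitions at runtime. Table names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Adaptive Query Execution adjusts shuffle partitioning based on runtime stats.
spark.conf.set("spark.sql.adaptive.enabled", "true")

facts = spark.read.table("analytics.fact_orders")    # tens of millions of rows
dims = spark.read.table("analytics.dim_customers")   # small dimension table

# Broadcasting the small side avoids shuffling the large fact table.
joined = facts.join(F.broadcast(dims), on="customer_id", how="left")

# Partitioning output by a low-cardinality column keeps file sizes manageable.
(
    joined.write.format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .saveAsTable("analytics.orders_enriched")
)
```

In practice, broadcasting only pays off when the dimension table comfortably fits in executor memory; AQE then keeps shuffle partition counts sensible for the large fact side.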
4. Governance & Security
  • Implement and manage data governance, access control, and security policies using Unity Catalog (a short example follows this list).
  • Ensure compliance with organizational and regulatory data-handling standards.
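As a rough illustration of Unity Catalog access control, the statements below could be run from a notebook by a user with sufficient privileges; the catalog, schema, and group names are hypothetical.

```python
# Unity Catalog governance sketch: grant read access to analysts and write
# access to engineers. Catalog/schema/group names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Read-only access for an analyst group.
spark.sql("GRANT USE CATALOG ON CATALOG main TO `data-analysts`")
spark.sql("GRANT USE SCHEMA ON SCHEMA main.analytics TO `data-analysts`")
spark.sql("GRANT SELECT ON SCHEMA main.analytics TO `data-analysts`")

# Write privileges restricted to the engineering group.
spark.sql("GRANT MODIFY ON SCHEMA main.analytics TO `data-engineers`")
```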
5. Deployment & DevOps
  • Use Databricks Asset Bundles to deploy jobs, notebooks, and configuration across environments.
  • Maintain effective version control of Databricks artifacts using GitLab or similar tools.
  • Use CI/CD pipelines to support automated deployments and environment setup.
Technical Skills (Required)
  • Strong expertise in Databricks (Delta Lake, Unity Catalog, Lakehouse Architecture, Table Triggers, Workflows, Delta Live Tables, Databricks Runtime, etc.).
  • Proven ability to implement robust PySpark solutions.
  • Hands-on experience with Databricks Workflows & orchestration.
  • Solid knowledge of Medallion Architecture (Bronze/Silver/Gold); see the end-to-end sketch after this list.
  • Strong background in query optimization, performance tuning, and Spark shuffle optimization.
  • Ability to handle and process tens of millions of records efficiently.
  • Familiarity with Genie enablement concepts (understanding required; deep experience optional).
  • Experience with CI/CD, environment setup, and Git-based development workflows.
  • Solid understanding of AWS cloud, including IAM.
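As an end-to-end illustration of the Medallion pattern named above, here is a compact Bronze/Silver/Gold sketch; the paths and three-level table names are hypothetical placeholders.

```python
# Medallion sketch: raw (Bronze) -> cleaned (Silver) -> aggregated (Gold).
# Paths and table names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: land raw data as-is, stamped with ingestion metadata.
bronze = (
    spark.read.format("json").load("s3://example-bucket/raw/orders/")
    .withColumn("_ingested_at", F.current_timestamp())
)
bronze.write.format("delta").mode("append").saveAsTable("main.bronze.orders")

# Silver: clean and conform (types enforced, duplicates removed).
silver = (
    spark.read.table("main.bronze.orders")
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .dropDuplicates(["order_id"])
)
silver.write.format("delta").mode("overwrite").saveAsTable("main.silver.orders")

# Gold: business-level aggregate ready for BI dashboards.
gold = (
    spark.read.table("main.silver.orders")
    .groupBy("order_date")
    .agg(F.sum("amount").alias("daily_revenue"))
)
gold.write.format("delta").mode("overwrite").saveAsTable("main.gold.daily_revenue")
```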
Preferred Experience
  • Experience with Databricks Runtime configurations and advanced features.
  • Knowledge of streaming frameworks such as Spark Structured Streaming (a brief sketch follows this list).
  • Exposure to GitLab pipelines or similar CI/CD systems.
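A brief Structured Streaming sketch, assuming Databricks Auto Loader (the cloudFiles source) reading from S3; the source path, schema and checkpoint locations, and target table are hypothetical.

```python
# Structured Streaming sketch: incrementally ingest new S3 files with
# Auto Loader and append to a Delta table. Paths/names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Auto Loader (cloudFiles) discovers newly arrived files incrementally.
stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "s3://example-bucket/_schemas/events/")
    .load("s3://example-bucket/streaming/events/")
)

# Append continuously to a Delta table with exactly-once checkpointing.
query = (
    stream.writeStream.format("delta")
    .option("checkpointLocation", "s3://example-bucket/_checkpoints/events/")
    .trigger(availableNow=True)  # process the current backlog, then stop
    .toTable("main.silver.events_stream")
)
query.awaitTermination()
```

The availableNow trigger processes the backlog and then stops, a common pattern for scheduled incremental jobs; removing it yields a continuously running stream.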
Certifications (Optional)
  • AWS Data Engineer or AWS Solutions Architect certification