
Data Engineer – AWS Databricks

Job in Santa Clara, Santa Clara County, California, 95053, USA
Listing for: novasoft
Full Time position
Listed on 2026-02-21
Job specializations:
  • IT/Tech
    Data Engineer, Cloud Computing, Big Data, Data Science Manager
Salary/Wage Range or Industry Benchmark: 80,000 to 100,000 USD per year
Job Description & How to Apply Below

Location: Santa Clara, CA
Experience: 10+ Years
Employment Type: Full-Time / Contract

Position Overview:

We are seeking a highly experienced AWS Databricks Data Engineer to join our data engineering team in Santa Clara. The ideal candidate will have deep expertise in Databricks, AWS, PySpark, SQL, and large-scale data pipeline development. This role focuses on designing and optimizing modern cloud-based data platforms that support analytics, BI, and enterprise reporting use cases.

You will collaborate with cross-functional teams and business stakeholders to deliver scalable, secure, and high-performance data solutions built on a Lakehouse architecture.

Key Responsibilities:
  • Design and maintain scalable ETL/ELT pipelines using Databricks on AWS
  • Develop high-performance data transformations using PySpark and SQL
  • Implement and optimize Lakehouse (Medallion) architecture for batch data processing
  • Integrate data from S3, databases, and AWS-native services
  • Optimize Spark workloads for performance, cost, and scalability
  • Implement data governance and access controls using Unity Catalog
  • Deploy and manage jobs using Databricks Workflows and CI/CD pipelines
  • Collaborate with business and analytics teams to deliver reliable, production-ready datasets
Required Technical Skills:
  • Strong expertise in Databricks:
    • Delta Lake
    • Unity Catalog
    • Lakehouse Architecture
    • Workflows
    • Delta Live Tables (DLT) pipelines
    • Table Triggers
    • Databricks Runtime
  • Advanced proficiency in PySpark and SQL
  • Experience designing and rebuilding batch-heavy data pipelines
  • Strong knowledge of Medallion Architecture
  • Expertise in performance tuning and Spark optimization
  • Experience with Databricks Workflows & orchestration
  • Familiarity with Genie enablement concepts (working understanding required)
  • Experience with CI/CD and Git-based development
  • Strong AWS fundamentals:
    • IAM
    • Networking basics
    • S3
    • Glue Catalog
Preferred Qualifications:
  • Experience with Spark Structured Streaming
  • Knowledge of real-time or near real-time data solutions
  • Advanced Databricks Runtime configurations
  • Experience with GitLab CI/CD pipelines
  • Exposure to scalable enterprise data architectures
Certifications (Optional):
  • Databricks Certified Data Engineer (Associate/Professional)
  • AWS Data Engineer or AWS Solutions Architect Certification