
Senior Data Engineer (AWS Databricks)

Job in 500001, Hyderabad, Telangana, India
Listing for: Blend
Full Time position
Listed on 2026-02-17
Job specializations:
  • IT/Tech
    Data Engineer, Cloud Computing, Data Science Manager, Data Security
Job Description & How to Apply Below
Position: Senior Data Engineer (AWS Databricks)
We are seeking a Data Engineer to join our team, focused on data platform integration and pipeline engineering on Databricks. This role will work closely with AI Engineers to enable model development and deployment by building secure, reliable, and scalable data pipelines and integrations.
The role is not focused on data transformation or analytics modelling. Instead, it concentrates on ingestion, orchestration, connectivity, and operational pipelines that support AI and advanced analytics workloads within Databricks environments deployed into client-managed accounts.

Experience Required: 5+ years

Key Responsibilities
Databricks-Centric Data Engineering
Build and maintain data pipelines that ingest data into Databricks on AWS.
Configure and manage Databricks jobs and workflows to support AI workloads.
Integrate Databricks with upstream source systems and downstream AI services.
Ensure data is accessible and performant for AI training and inference use cases.
Pipeline Engineering (Non-Transformational)
Design and implement pipelines focused on:
Data ingestion
Data movement
Orchestration and scheduling
Develop pipelines using Python and Databricks-native tooling (e.g., Databricks Jobs and Workflows).
Ensure pipelines are production-ready with monitoring, logging, and alerting.
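The "production-ready with monitoring, logging, and alerting" requirement above can be sketched in miniature. This is an illustrative shape only, using in-memory stand-ins: in a real Databricks job the source would be an upstream system and the sink a Delta table, and the function would run as a Jobs task. All names here are hypothetical.

```python
import logging
from typing import Callable, Iterable

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("ingest")

def run_ingestion(read_source: Callable[[], Iterable[dict]],
                  write_sink: Callable[[list[dict]], None],
                  max_retries: int = 3) -> int:
    """Move records from source to sink with basic monitoring hooks.

    Sketch only: read_source/write_sink are hypothetical stand-ins for an
    upstream connector and a Databricks table writer.
    """
    for attempt in range(1, max_retries + 1):
        try:
            batch = list(read_source())     # ingestion step
            write_sink(batch)               # movement step (no transformation)
            log.info("ingested %d records on attempt %d", len(batch), attempt)
            return len(batch)
        except Exception:
            # Each failed attempt is logged with a traceback for operations.
            log.exception("ingestion attempt %d failed", attempt)
    # An alerting integration (e.g., a pager or chat webhook) would hook in here.
    raise RuntimeError(f"ingestion failed after {max_retries} attempts")

# Example wiring with in-memory stand-ins for source and sink.
landing_zone: list[dict] = []
count = run_ingestion(lambda: [{"id": 1}, {"id": 2}], landing_zone.extend)
```

The retry loop and structured log lines are the operational core; swapping the stand-in callables for real connectors leaves that skeleton unchanged.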
AWS Environment Integration
Work within client-owned AWS environments, collaborating with DevOps engineers on infrastructure provisioning.
Integrate Databricks pipelines with supporting AWS cloud services.
Ensure pipelines align with client security and governance requirements.
Security, Governance & Compliance
Build and operate pipelines that meet SOC 1 compliance requirements, including:
Access controls and permissions
Audit logging and traceability
Controlled deployment processes
Support data governance standards within Databricks environments.
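The access-control and audit-logging bullets above can be sketched in a few lines. This is a toy illustration, not the client's actual controls: the role names are hypothetical and the in-memory list stands in for a durable, tamper-evident audit store that a real SOC 1 pipeline would require.

```python
import datetime
import functools

AUDIT_TRAIL: list[dict] = []                          # stand-in for a durable audit log
ALLOWED_ROLES = {"pipeline_operator", "data_admin"}   # hypothetical role names

def audited(action: str):
    """Record who attempted what, when, and whether it was permitted."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(user: str, role: str, *args, **kwargs):
            permitted = role in ALLOWED_ROLES
            AUDIT_TRAIL.append({
                "user": user, "role": role, "action": action,
                "permitted": permitted,
                "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            })
            if not permitted:
                raise PermissionError(f"{user} ({role}) may not {action}")
            return fn(user, role, *args, **kwargs)
        return wrapper
    return decorator

@audited("deploy_pipeline")
def deploy_pipeline(user: str, role: str, pipeline_name: str) -> str:
    # Controlled deployment: only reachable when the role check passes,
    # and every attempt (allowed or denied) lands in the audit trail.
    return f"{pipeline_name} deployed by {user}"

result = deploy_pipeline("asha", "pipeline_operator", "ingest_orders")
```

Note that the audit entry is written before the permission check raises, so denied attempts are traceable too, which is the point of the "audit logging and traceability" requirement.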
Delivery & Operations
Deploy pipeline code via GitHub-based CI/CD pipelines.
Support operational monitoring and incident response for data pipelines.
Document pipeline designs, dependencies, and operational processes.

Required Skills & Experience
Core Technical Skills
Strong experience as a Data Engineer, with a focus on Databricks-based platforms.
Hands-on experience with Databricks on AWS, including:
Jobs and workflows
Cluster configuration (user-level understanding)
Proficiency in Python for pipeline development.
Experience using GitHub and CI/CD workflows.
Engineering & Delivery
Experience building production-grade ingestion and orchestration pipelines.
Ability to work effectively in client-facing delivery environments.
Strong documentation and collaboration skills.

Nice to Have

Experience with Databricks Unity Catalog.
Exposure to event-driven or streaming data architectures.
Familiarity with MLOps concepts and AI lifecycle support.
Position Requirements
10+ years of work experience