
Sr Data Engineer

Job in Bengaluru, 560001, Bangalore, Karnataka, India
Listing for: ORMAE
Full Time position
Listed on 2026-02-23
Job specializations:
  • IT/Tech
    Data Engineer, Big Data

Senior Data Engineer

Location: Bangalore (On-site)
Experience: 4+ Years
Employment Type: Full-Time

Role Overview

We are seeking a Senior Data Engineer with strong hands-on expertise in building scalable data platforms, modern ingestion pipelines, and high-performance data transformation workflows on Azure and Databricks.

The ideal candidate should have deep experience in distributed data processing, orchestration, CI/CD-driven data engineering, and delivering production-grade data solutions that support analytics, AI/ML, and business decision-making.

Key Responsibilities

- Design, build, and maintain scalable data ingestion pipelines for structured and unstructured data sources.
- Develop and optimize ETL/ELT workflows using PySpark, Python, and Databricks.
- Implement complex data transformations, data cleansing, and enrichment processes.
- Manage and optimize Databricks clusters, jobs, and performance tuning.
- Work with Azure Storage Accounts, data lakes, and cloud-native data architectures.
- Build robust data solutions using SQL and advanced query optimization techniques.
- Develop and integrate data services using FastAPI and REST-based interfaces when required.
- Design high-performance data models and optimize database queries for large-scale datasets.
- Implement CI/CD pipelines for data engineering workflows using modern DevOps practices.
- Collaborate with Data Scientists, Architects, and Product teams to deliver reliable data products.
- Ensure data quality, governance, monitoring, and operational excellence across pipelines.
- Troubleshoot production issues and improve pipeline reliability and scalability.

Required Skills & Experience

Technical Skills

- Strong experience in Azure Cloud services (Storage Accounts, Data Lake concepts).
- Hands-on expertise with Databricks and cluster management.
- Advanced proficiency in Python and PySpark.
- Experience building large-scale data ingestion pipelines.
- Strong understanding of ETL/ELT architectures.
- Advanced SQL and database query optimization skills.
- Experience implementing CI/CD pipelines for data workflows.
- API development/integration experience using FastAPI.
- Strong understanding of distributed data processing and performance tuning.

Engineering Practices

- Data modeling and schema design.
- Scalable pipeline architecture.
- Logging, monitoring, and observability.
- Version control and automated deployments.
- Performance and cost optimization on cloud platforms.
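The version-control and automated-deployment practice above might look like the following pipeline fragment. This is a sketch assuming Azure DevOps (a natural fit for an Azure/Databricks stack, though the posting does not name a CI tool); the repository layout (`requirements.txt`, `tests/`) is hypothetical.

```yaml
# Illustrative Azure Pipelines fragment for a data-engineering repo.
# The repo layout (requirements.txt, tests/) is a placeholder example.
trigger:
  branches:
    include: [main]

pool:
  vmImage: ubuntu-latest

steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '3.11'
  - script: pip install -r requirements.txt
    displayName: Install dependencies
  - script: pytest tests/
    displayName: Run pipeline unit tests
```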