
Data Engineer (PySpark)

Job in Bengaluru 560001, Karnataka, India
Listing for: Virtusa
Full Time position
Listed on 2026-03-07
Job specializations:
  • IT/Tech
    Data Engineer, Big Data
Salary/Wage Range or Industry Benchmark: INR 500,000 - 800,000 yearly
Job Description & How to Apply Below
Position: Data Engineer (PySpark)
Location: Bengaluru

Role:
Data Engineer
Key Skills: PySpark, Cloudera Data Platform, Big Data (Hadoop, Hive, Kafka)
Responsibilities
Data Pipeline Development:
Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy.
Data Ingestion:
Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) into the data lake or data warehouse on CDP.
Data Transformation and Processing:
Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements.
Performance Optimization:
Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing runtime of ETL processes.
Data Quality and Validation:
Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline.
Automation and Orchestration:
Automate data workflows using orchestration tools such as Apache Oozie or Apache Airflow within the Cloudera ecosystem.
Technical Skills
3+ years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform.
PySpark:
Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques.
Cloudera Data Platform:
Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase.
Data Warehousing:
Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala).
Big Data Technologies:
Familiarity with Hadoop, Kafka, and other distributed computing tools.
Orchestration and Scheduling:
Experience with Apache Oozie, Airflow, or similar orchestration frameworks.
Scripting and Automation:
Strong shell scripting skills on Linux.