About KPMG in India
KPMG entities in India are professional services firms. These Indian member firms are affiliated with KPMG International Limited. KPMG was established in India in August 1993. Our professionals leverage the global network of firms and are conversant with local laws, regulations, markets and competition. KPMG has offices across India in Ahmedabad, Bengaluru, Chandigarh, Chennai, Gurugram, Hyderabad, Jaipur, Kochi, Kolkata, Mumbai, Noida, Pune, Vadodara and Vijayawada.
KPMG entities in India offer services to national and international clients in India across sectors. We strive to provide rapid, performance-based, industry-focused and technology-enabled services, which reflect a shared knowledge of global and local industries and our experience of the Indian business environment.
Responsibilities
We're hiring a Spark Scala Developer with real-world experience in Big Data environments, on-premises and in the cloud. You should know how to write production-grade Spark applications, fine-tune their performance, and work fluently in Scala's functional style.
Experience with cloud platforms and modern data tools like Snowflake or Databricks is a strong plus.
Your Responsibilities
Design and develop scalable data pipelines using Apache Spark and Scala
Optimize and troubleshoot Spark jobs for performance (e.g. memory management, shuffles, skew)
Work with massive datasets in on-prem Hadoop clusters or cloud platforms like AWS/GCP/Azure
Write clean, modular Scala code using functional programming principles
Collaborate with data teams to integrate with platforms like Snowflake, Databricks, or data lakes
Ensure code quality, documentation, and CI/CD practices are followed
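To make the performance-tuning responsibility concrete, one common fix for skewed Spark jobs is key salting: a hot key is split into several shuffle keys so its rows spread across partitions, then results are re-aggregated. The sketch below is illustrative only and uses plain Scala collections (so it runs without a Spark cluster); all names, the salt-bucket count, and the two-stage sum are assumptions, not part of this posting. In an actual Spark job the same idea would be two `reduceByKey` passes over an RDD or Dataset.

```scala
import scala.util.Random

// Sketch: key salting to mitigate data skew, shown on plain Scala
// collections rather than a live Spark cluster. All identifiers here
// are hypothetical.
object SkewSalting {
  val SaltBuckets = 8

  // Turn one hot key into up to SaltBuckets distinct shuffle keys.
  def saltKey(key: String, rng: Random): String =
    s"$key#${rng.nextInt(SaltBuckets)}"

  // Recover the original key after the salted aggregation.
  def unsaltKey(salted: String): String =
    salted.takeWhile(_ != '#')

  // Two-stage aggregation: sum per salted key first, then re-aggregate
  // per original key. In Spark this becomes two shuffles, each lighter
  // than one shuffle dominated by a single hot key.
  def skewAwareSum(records: Seq[(String, Long)], rng: Random): Map[String, Long] = {
    val salted  = records.map { case (k, v) => (saltKey(k, rng), v) }
    val partial = salted.groupMapReduce(_._1)(_._2)(_ + _)
    partial.toSeq
      .map { case (sk, v) => (unsaltKey(sk), v) }
      .groupMapReduce(_._1)(_._2)(_ + _)
  }
}
```

The salt count trades shuffle fan-out against per-partition load; in practice it is tuned per job, often only for keys known to be hot.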
Must-Have Skills
3+ years of experience with Apache Spark in Scala
Deep understanding of Spark internals: DAG, stages, tasks, caching, joins, partitioning
Hands-on experience with performance tuning in production Spark jobs
Proficiency in Scala functional programming (e.g. immutability, higher-order functions, Option/Either)
Proficiency in SQL
Experience with any major cloud platform: AWS, Azure, or GCP
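The functional-programming skills listed above can be sketched in a few lines: immutable data, higher-order functions, and `Option`/`Either` in place of nulls and exceptions. The record layout and all names below are made up for illustration; only the stdlib is used.

```scala
// Hypothetical record type; immutable by construction.
final case class Trade(symbol: String, qty: Long)

object TradeParser {
  // Either keeps the failure reason; Option alone would drop it.
  def parse(line: String): Either[String, Trade] =
    line.split(",", -1) match {
      case Array(sym, qtyStr) =>
        qtyStr.trim.toLongOption          // stdlib Option, no exception
          .toRight(s"bad qty: '$qtyStr'")
          .map(q => Trade(sym.trim, q))
      case _ => Left(s"bad record: '$line'")
    }

  // Higher-order pipeline: collect errors and valid rows separately.
  def parseAll(lines: Seq[String]): (Seq[String], Seq[Trade]) = {
    val parsed = lines.map(parse)
    (parsed.collect { case Left(err)   => err },
     parsed.collect { case Right(rec)  => rec })
  }
}
```

The same shape carries over to Spark: `parse` becomes the function mapped over a `Dataset[String]`, with the `Left` side routed to a bad-records sink.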
Good-to-Have
Worked with Databricks, Snowflake, or Delta Lake
Exposure to data pipeline tools like Airflow, Kafka, Glue, or BigQuery
Familiarity with CI/CD pipelines and Git-based workflows
Comfortable with SQL optimization and schema design in distributed environments
Qualifications
B.Tech / B.E. / postgraduate degree
Position Requirements
10+ years of work experience