This role is of key importance, as it lays the foundation for the entire project.
Role - Big Data Engineer
Location - Pune
Experience - 4+ years
Please find the job description below:
About the Role:
We are looking for an experienced Big Data Engineer with proven hands-on expertise in Apache Spark. The role involves designing, developing, and optimizing large-scale data processing pipelines. Proficiency in Scala is preferred; however, experience in any JVM-based language for Spark development is acceptable.
Key Responsibilities:
Design, develop, and maintain data processing pipelines using Apache Spark.
Optimize Spark jobs for performance and scalability in distributed environments.
Work with large datasets to perform ETL (Extract, Transform, Load) operations.
Collaborate with data scientists, analysts, and engineers to deliver robust data solutions.
Ensure adherence to best practices for data quality, security, and governance.
Troubleshoot and resolve issues related to data processing and performance tuning.
Required Skills & Qualifications:
Proficiency in Spark Core, Spark SQL, and Spark Streaming.
Strong programming skills in Scala (preferred), Python, or Java.
Experience with distributed computing and parallel processing concepts.
Familiarity with the Hadoop ecosystem, Hive, or related technologies.
Solid understanding of data structures, algorithms, and performance optimization.