Senior Spark Scala Developer
Listed on 2026-02-24
Software Development
Software Engineer, Cloud Engineer - Software, Senior Developer
Introduction
At IBM Software, we transform client challenges into solutions, building the world’s leading AI-powered, cloud-native products that shape the future of business and society. Our legacy of innovation creates endless opportunities for IBMers to learn, grow, and make an impact on a global scale. Working in Software means joining a team fueled by curiosity and collaboration. You’ll work with diverse technologies, partners, and industries to design, develop, and deliver solutions that power digital transformation.
With a culture that values innovation, growth, and continuous learning, IBM Software places you at the heart of IBM’s product and technology landscape. Here, you’ll have the tools and opportunities to advance your career while creating software that changes the world.
We are looking for a seasoned Spark Scala Developer with 12+ years of software engineering experience, including 5+ years in building and optimizing large-scale data processing solutions using Apache Spark and Scala. The ideal candidate will have strong expertise in distributed computing, data pipelines, and both real-time and batch processing architectures.
Your role and responsibilities
Key Responsibilities:
- Design and optimize big data applications using Apache Spark and Scala.
- Tune Spark jobs for performance and cost efficiency on distributed clusters.
- Maintain reusable libraries and ensure best coding practices.
- Work with storage systems such as HDFS, Hive, HBase, Cassandra, Kafka, and Parquet.
- Mentor junior developers and lead code reviews.
- Ensure compliance with security and governance standards.
- Troubleshoot and resolve performance issues in big data solutions.
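As a flavor of the tuning work described above, here is a minimal, illustrative sketch of a Spark 3.x session configured with a few common performance knobs (the application name and all values are hypothetical placeholders, not recommendations; requires the Apache Spark dependency):

```scala
import org.apache.spark.sql.SparkSession

// Illustrative only: a SparkSession with a handful of widely used tuning settings.
val spark = SparkSession.builder()
  .appName("batch-pipeline") // hypothetical application name
  // Size shuffle parallelism to the cluster instead of the default of 200.
  .config("spark.sql.shuffle.partitions", "400")
  // Kryo serialization is typically faster and more compact than Java serialization.
  .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  // Adaptive Query Execution can coalesce small shuffle partitions and handle skewed joins.
  .config("spark.sql.adaptive.enabled", "true")
  .getOrCreate()
```

In practice these values are workload-dependent: shuffle partition counts are usually derived from input size and executor cores, and verified against the Spark UI rather than set once.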
Required Qualifications:
- Bachelor’s or Master’s in Computer Science or related field.
- 12+ years of software development experience.
- 5+ years of hands‑on experience with Apache Spark and Scala.
- Strong knowledge of distributed computing and cluster frameworks.
- Proficiency in Scala and functional programming principles.
- Expertise in Spark tuning, partitions, joins, and optimization techniques.
- Experience with cloud platforms (AWS, Azure, GCP) and tools like EMR, Databricks, HDInsight.
- Familiarity with Kafka, Hive, HBase, NoSQL databases, and data lake architectures.
- Knowledge of CI/CD, Git, Jenkins, and automated testing.
- Strong problem‑solving and collaboration skills.
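To make "functional programming principles" concrete, a small self-contained sketch in plain Scala collections (no Spark; the `Event` type and threshold are invented for illustration) showing the same immutable map/filter/aggregate style that Spark's Dataset API builds on:

```scala
// A pure, immutable transformation pipeline in plain Scala.
case class Event(user: String, bytes: Long)

// Sum bytes per user, keeping only events at or above a size threshold.
// No mutation: each step returns a new collection.
def totalBytesByUser(events: Seq[Event], minBytes: Long): Map[String, Long] =
  events
    .filter(_.bytes >= minBytes)                        // drop small events
    .groupBy(_.user)                                    // shuffle-like grouping by key
    .map { case (u, es) => u -> es.map(_.bytes).sum }   // aggregate per key

val sample = Seq(Event("a", 10), Event("b", 5), Event("a", 20))
val result = totalBytesByUser(sample, 6)
// result == Map("a" -> 30L): "b" is filtered out, "a" aggregates 10 + 20
```

The same shape translates almost verbatim to a Spark `Dataset[Event]` with `filter`, `groupByKey`, and an aggregation.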
Preferred:
- Experience with Databricks, Delta Lake, or Apache Iceberg.
- Exposure to machine learning pipelines using Spark MLlib or integration with ML frameworks.
- Open‑source contributions in big data projects.
- Excellent communication and leadership abilities.
IBM is committed to creating a diverse environment and is proud to be an equal‑opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, gender, gender identity or expression, sexual orientation, national origin, caste, genetics, pregnancy, disability, neurodivergence, age, veteran status, or other characteristics. IBM is also committed to compliance with all fair employment practices regarding citizenship and immigration status.