Senior Software Engineer - Open Source Analytics

Job in Bellevue, King County, Washington, 98009, USA
Listing for: Snowflake
Full Time position
Listed on 2025-12-05
Job specializations:
  • Software Development
  • Data Engineer
Job Description & How to Apply Below
Position: Senior Software Engineer - Open Source Analytics

Snowflake is about empowering enterprises to achieve their full potential — and people too. With a culture that’s all in on impact, innovation, and collaboration, Snowflake is the sweet spot for building big, moving fast, and taking technology — and careers — to the next level.

Snowflake’s vision is to enable every organization to be data-driven. We’re at the forefront of innovation, helping customers realize the full potential of their data with our AI data cloud. We are now going far beyond the traditional data warehouse and helping customers unlock the power of the open data lakehouse architecture with significant investment in Open Source Analytics! Snowflake engineers are leading the way with innovations directly in OSS projects like Apache Iceberg, Apache Polaris (incubating), Apache Parquet and more!

As a Senior Software Engineer on the Open Source Analytics team, you’ll play a key role in building and evolving our open and interoperable data lake ecosystem. You’ll work on some of the most complex and exciting challenges in enterprise data lake analytics, all while collaborating closely with some of the best minds in the open source community! You will have a direct impact on Snowflake’s mission of providing a truly open data lake architecture, free from vendor lock-in.

AS A SENIOR SOFTWARE ENGINEER ON THE OPEN SOURCE ANALYTICS TEAM, YOU WILL:

  • Pioneer new and innovative technical capabilities in the Open Source Analytics community. You will define and build next-generation capabilities on top of critical lakehouse building blocks like interoperable table formats, data catalogs, file formats, and query engines.
  • Design and implement features and enhancements for Apache Iceberg and Apache Polaris, such as Iceberg DML/DDL transactions, schema evolution, partitioning, and time travel, with a focus on scalability, performance, and usability.
  • Collaborate with the open-source community by contributing code, participating in discussions, and reviewing pull requests to ensure high-quality contributions.
  • Architect and build systems that integrate open-source technologies seamlessly with Snowflake, enabling our customers to build and deploy massive data lake architectures across platforms and across cloud providers.
  • Collaborate with Snowflake’s open-source team and the Apache Iceberg community to contribute new features and enhance the Iceberg table format and REST specification.
  • Work on core data access control and governance features for Apache Polaris.
  • Contribute to our managed Polaris service, Snowflake Open Catalog, enabling customers to seamlessly manage and expand their data lake through Snowflake as well as other external query engines like Spark and Trino.
  • Build tooling and services that automate data lake table maintenance, including compaction, clustering, and data retention for enhanced query performance and efficiency.

OUR IDEAL SENIOR SOFTWARE ENGINEER WILL HAVE:

  • 5+ years of experience designing and building scalable, distributed systems.
  • Strong programming skills in Java, Scala, or C++ with an emphasis on performance and reliability.
  • Deep understanding of distributed transaction processing, concurrency control, and high-performance query engines.
  • Experience with open-source data lake formats (e.g., Apache Iceberg, Parquet, Delta) and the challenges associated with multi-engine interoperability.
  • Experience building cloud-native services and working with public cloud providers like AWS, Azure, or GCP.
  • A passion for open-source software and community engagement, particularly in the data ecosystem.
  • Familiarity with data governance, security, and access control models in distributed data systems.

BONUS POINTS FOR EXPERIENCE WITH:

  • Contributing to open-source projects, especially in the data infrastructure space.
  • Designing or implementing REST APIs, particularly in the context of distributed systems.
  • Managing large-scale data lakes or data catalogs in production environments.
  • Working on highly performant and scalable query engines such as Spark, Flink, or Trino.

WHY…

Position Requirements
10+ years of work experience