
Senior Backend Software Engineer

Job in Snowflake, Navajo County, Arizona, 85937, USA
Listing for: Outsiders Fund
Full Time position
Listed on 2025-12-03
Job specializations:
  • Software Development
    Data Engineer, Machine Learning / ML Engineer
Job Description & How to Apply Below
Location: Snowflake

About Shelf

There is no AI Strategy without a Data Strategy. Getting GenAI to work is mission-critical for most companies, yet 90% of AI projects never reach deployment. Why? Poor data quality: it is the #1 obstacle companies face in getting GenAI projects into production.

We've helped some of the best brands like Amazon, Mayo Clinic, AmFam, and Nespresso solve their data issues and deploy their AI strategy with Day 1 ROI.

Simply put, Shelf unlocks AI readiness. We provide the core infrastructure that enables GenAI to be deployed, and we help companies deliver more accurate GenAI answers by eliminating bad data in documents and files before it goes into an LLM and creates bad answers.

Shelf is partnered with Microsoft, Salesforce, Snowflake, Databricks, OpenAI and other big tech players who are bringing GenAI to the enterprise.

Our mission is to empower humanity with better answers everywhere.

Job Description:

As a Backend Engineer at Shelf, you will focus on building robust backend services for large-scale data processing. We use Python (and Node.js) to create data pipelines and handle data from diverse storage solutions. Your work will center on ensuring data flows efficiently, remains well orchestrated, and operates seamlessly. You'll be tackling complex data ingestion, transformation, and orchestration challenges, building the core infrastructure that powers our platform.

But we're not just moving data; we're focused on solving the crucial data quality problems that underpin successful AI initiatives. Shelf is uniquely positioned to address these challenges head-on, as we provide data quality solutions and data enrichment capabilities that are key to building accurate and trustworthy AI systems. We're not simply building a platform; we're building the very foundation for the next generation of AI.

This means your work will directly impact the accuracy, reliability, and ultimately the usefulness of AI across the enterprise landscape.

  • Do you enjoy crafting efficient, testable code and want to be part of the engine behind advanced data processing?
  • Do you have a passion for building truly robust and accurate systems?
  • Are you looking for fast professional growth in a very demanding and challenging environment?

If you can answer these three questions confidently with “Yes!”, then this might just be the role for you: a unique opportunity to build products that have a huge impact on real-world AI applications.

Responsibilities
  • Design, implement, and optimize our distributed ETL pipeline, focusing on background processing logic, data transformation, and scalability.
  • Develop modular and composable components capable of efficiently processing large-scale data across a diverse range of storage solutions, including S3, RDS/PostgreSQL, Elasticsearch, DynamoDB, data warehouses, and data lakes.
  • Implement ML model integrations within the data pipeline, working closely with Data Scientists on model deployment and data flow.
  • Develop clean, maintainable code in Python (and occasionally Node.js), adhering to best practices in observability, cost-efficiency, and robust error handling.
  • Proactively identify and address performance bottlenecks and inefficiencies in current systems, proposing solutions to improve scalability and reliability, while ensuring continuous production stability through thorough testing and monitoring practices.
  • Share your knowledge, participate in code reviews, and advocate for best practices to advance our backend development standards.
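
To give a flavor of the "modular and composable components" the responsibilities describe, here is a minimal, illustrative Python sketch of chainable ETL stages. All names (`Stage`, `pipeline`, the sample transforms) are hypothetical and assume nothing about Shelf's actual codebase; this is one common way to structure composable pipeline steps, not the company's implementation.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Iterator, Optional

# A record is just a dict here; real pipelines would use typed schemas.
Record = dict


@dataclass
class Stage:
    """One transform step; stages can be chained into a pipeline."""
    fn: Callable[[Record], Optional[Record]]  # return None to drop a record

    def run(self, records: Iterable[Record]) -> Iterator[Record]:
        for rec in records:
            out = self.fn(rec)
            if out is not None:
                yield out


def pipeline(*stages: Stage) -> Callable[[Iterable[Record]], Iterator[Record]]:
    """Compose stages lazily: each stage consumes the previous one's output."""
    def run(records: Iterable[Record]) -> Iterator[Record]:
        for stage in stages:
            records = stage.run(records)
        yield from records
    return run


# Example data-quality steps: drop empty documents, normalize whitespace.
drop_empty = Stage(lambda r: r if r.get("text", "").strip() else None)
normalize = Stage(lambda r: {**r, "text": " ".join(r["text"].split())})

etl = pipeline(drop_empty, normalize)
docs = [{"text": "  hello   world "}, {"text": "   "}]
print(list(etl(docs)))  # → [{'text': 'hello world'}]
```

Because each stage is a generator, records stream through without being materialized, which is the usual design choice when the same components must scale from small files to large data lakes.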
Requirements
  • 4+ years of experience in software development, with hands-on experience in Node.js and Python.
  • Deep understanding of distributed systems, concurrency patterns, and ETL-oriented workflows.
  • Comfortable working with diverse data stores (SQL and NoSQL), including schema design and performance tuning. Experience with cloud-based data lakes and data warehouses is a plus.
  • Experience with event-driven architectures, distributed processing techniques, and CQRS.
  • Proven experience building scalable backend applications on either AWS or Azure, including a strong understanding of their respective services for compute, storage, and data processing.
  • Ability to write well-structured, testable code with…
Position Requirements
10+ Years work experience