
Data Engineer (Core Platform & Optimization)

Job in San Francisco, San Francisco County, California, 94199, USA
Listing for: Hilbert's AI
Full Time position
Listed on 2026-03-02
Job specializations:
  • IT/Tech
    Data Engineer
Salary/Wage Range: USD 60,000 - 80,000 per year
Job Description & How to Apply Below
Position: Data Engineer (Core Platform & Optimization)

Hilbert is a scalable, data science-first growth engine that gives B2C teams predictive clarity into user behavior, revenue drivers, and the actions that drive sustainable growth. Fully agentic by design, Hilbert shrinks months-long decision cycles to minutes.

From Fortune 10 enterprises to beloved brands like Fresh Direct, Blank Street, and Levain Bakery, operators run their growth on Hilbert. We're also co-building alongside leading AI companies.

We're looking for a Core Data Engineer to build the foundation that lets our integration team move 10x faster. While others focus on connecting the pipes, you focus on the integrity and flow of the entire refinery. You will own our internal "shared brain": the common code, the optimization of our ClickHouse clusters, and the monitoring systems that tell us a pipeline has failed before the customer even notices.

Who Thrives in This Role
  • The "Optimizer": You enjoy shaving seconds off a query and megabytes off a memory footprint. You think in terms of scale and reliability.

  • Dagster Ninja: You have experience with (or a deep desire to master) Dagster for complex orchestration.

  • OLAP Expert: You understand the internals of ClickHouse (or similar systems like Druid or Pinot) and how to structure data for maximum analytical performance.

  • Software Rigor: You treat data code like production software. CI/CD, unit testing for data, and modular code aren't optional for you.

  • Overlap: You can provide at least 5 hours of overlap with the global team.

The Role
  • Optimize our ClickHouse performance.

  • Create configuration schemas so data ingestion pipelines can be driven by configuration rather than custom-built.

  • Build and maintain the shared Python libraries used by the entire engineering team for data syncs.

  • Enhance our canonical data models to ensure they stay "generic" enough to scale but powerful enough to perform.

  • Architect our monitoring, alerting, and observability stack so we have "five-nines" confidence in our data.

  • Design new integration pipelines by investigating new data sources and different data sync mechanisms.

Bonus Points
  • Experience in E-commerce or Retail sectors (understanding what a "SKU" or "Attribution Window" is without being told).

  • Experience with product event usage data.

  • Contribution to open-source data tools.

  • Working with Data Scientists or ML Engineers.

  • Experience building a data orchestration and integration engine/tool from scratch.

Location

San Francisco or Istanbul. At least 5 hours of overlap with the PST workday (7am-5pm).

Compensation

Competitive salary + equity package, commensurate with experience.

Performance-based bonuses tied to project milestones and customer impact.

The Hiring Journey

Short form → Intro call → Technical working session → Team conversations → Offer

Fast, human, no bureaucracy.
