
Data Engineer III

Job in Bozeman, Gallatin County, Montana, 59772, USA
Listing for: onX
Full Time position
Listed on 2026-01-01
Job specializations:
  • IT/Tech
    Data Engineer
Job Description & How to Apply Below

As a pioneer in digital outdoor navigation with a suite of apps, onX was founded in Montana, which in turn has inspired our mission to awaken the adventurer inside everyone. With more than 400 employees located around the country working in largely remote / hybrid roles, we have created regional “Basecamps” to help remote employees find connection and inspiration with other onXers.

We bring our outdoor passion to work every day, coupling it with industry‑leading technology to craft dynamic outdoor experiences.

Through multiple years of growth, we haven’t lost our entrepreneurial ethos. We offer a fast‑paced, growing, tech‑forward environment where ownership, accountability, and passion for winning as a team are essential. We value diversity and believe it leads to different perspectives and inspires both new adventures and new growth. As a team, we’re hungry to improve, value innovation, and believe great ideas come from any direction.

Important Alert:
Please note, onX Maps will never ask for credit card or SSN details during the initial application process. For your digital safety, apply only through our legitimate website at  or directly via our LinkedIn page.

ABOUT THIS ROLE

onX is building the next‑generation data foundation that fuels our growth. As a Data Engineer, you’ll design, build, and scale the lakehouse architecture that underpins analytics, machine learning, and AI. You’ll work across teams to modernize our data ecosystem, making it discoverable, reliable, governed, and ready for self‑service and intelligent automation. This role is intentionally broad in scope. We’re seeking engineers who can operate anywhere along the data lifecycle, from ingestion and transformation to metadata, orchestration, and MLOps.

Depending on experience, you may focus on foundational architecture, scaling reusable services, or embedding governance, semantic alignment, and observability patterns into the platform.

As an onX Data Engineer, your day‑to‑day responsibilities would look like:
Architecture and Design
  • Design, implement, and evolve onX’s Iceberg‑based lakehouse architecture to balance scalability, cost, and performance.
  • Establish data layer standards (Raw, Curated, Certified) that drive consistency, traceability, and reusability across domains.
  • Define and implement metadata‑first and semantic layer architectures that make data understandable, trusted, and ready for self‑service analytics.
  • Partner with BI and business stakeholders to ensure domain models and certified metrics are clearly defined and aligned to business language.
  • Build and maintain scalable, reliable ingestion and transformation pipelines using GCP tools (Spark, Dataflow, Pub/Sub, BigQuery, Dataplex, Cloud Composer).
  • Develop batch and streaming frameworks with schema enforcement, partitioning, and lineage capture.
  • Use configuration‑driven, reusable frameworks to scale ingestion, curation, and publishing across domains.
  • Apply data quality checks and contracts at every layer to ensure consistency, auditability, and trust.
MLOps and Advanced Workflows
  • Collaborate with Data Science to integrate feature stores, model registries, and model monitoring into the platform.
  • Build and maintain standardized orchestration and observability patterns for both data and ML pipelines, ensuring SLA, latency, and cost visibility.
  • Develop reusable microservices that support model training, deployment, and scoring within a governed, observable MLOps framework.
  • Implement self‑healing patterns to minimize MTTR and ensure production reliability.
Governance, Metadata, and Self‑Service Enablement
  • Automate governance via metadata‑driven access controls (row/column permissions, sensitivity tagging, lineage tracking).
  • Define and maintain the semantic layer that bridges the technical data platform and business self‑service, enabling analysts and AI systems to explore data confidently.
  • Use GCP Dataplex as the unifying layer for data discovery, lineage, and access management, serving as the first step in evolving our metadata fabric toward a fully connected semantic graph.
  • Extend metadata models so datasets, pipelines, and models become interconnected, explainable, and…