
Data Engineer

Job in Snowflake, Navajo County, Arizona, 85937, USA
Listing for: Carrot Fertility, Inc.
Full Time position
Listed on 2026-01-01
Job specializations:
  • IT/Tech
    Data Engineer, Data Security
Job Description

Carrot is a global, comprehensive fertility and family care platform, supporting members and their families through many of life's most memorable moments. Trusted by many of the world’s leading multinational employers, health plans, and health systems, Carrot’s proven clinical program delivers exceptional outcomes and experiences for members and industry‑leading cost‑savings for employers. Its award‑winning products serve all populations, from preconception care through pregnancy, IVF, male factor infertility, adoption, gestational carrier care, and menopause.

Carrot offers localized support in over 170 countries and 25 languages, with a comprehensive program that prioritizes clinical excellence and human‑centered care.

Role overview

The Data Engineer strengthens the reliability and scalability of Carrot’s modern data platform across analytics, reporting, and business intelligence. They build and maintain robust, automated data pipelines and partner with cross‑functional stakeholders to deliver secure, compliant, high‑quality data solutions.

This role is hands‑on across the data lifecycle—contributing to resilient ETL/ELT pipelines, real‑time and batch integrations, and workflow orchestration—while steadily improving automation, cost efficiency, and documentation quality.

You will reduce manual operational risk through automation and enable stakeholders with secure, high‑quality data, while growing your scope toward deeper ownership and technical leadership over time.

Key responsibilities
  • Data pipeline development and operations:
    Build, test, deploy, monitor, and iterate on scalable ETL/ELT pipelines using platforms such as dbt and Fivetran; contribute to standard workflows (e.g., member engagement, finance/billing reports) and support ad hoc reporting needs.
  • Warehouse and lake stewardship:
    Contribute to the design and continuous improvement of Snowflake and related cloud data platforms (AWS/GCP), including source integrations, performance tuning, and cost optimizations.
  • Workflow orchestration and integrations:
    Develop auditable orchestration flows with tools like Prefect or Airflow; integrate with services such as S3, SFTP, APIs, and managed file transfer (MFT) platforms; implement robust alerting, retries, and monitoring (a sketch of this pattern follows the list).
  • Infrastructure optimization:
    Identify opportunities to improve pipeline speed, reliability, and cost; help refactor legacy processes, reduce manual steps, and document changes for scale and reuse.
  • Partner‑ and stakeholder‑facing deliverables:
    Support automated data integrations and exports (e.g., eligibility feeds, payroll/tax/billing files) with finance, product, legal, and commercial teams; help enforce data specs and secure handling of sensitive information.
  • Change management and auditability:
    Follow best practices for Git/GitHub, code reviews, testing, Jira workflows, audit‑trail documentation, and exception logging; uphold InfoSec and compliance standards across the data lifecycle.
  • Analytics enablement:
    Help design modular, well‑documented data models and marts (dbt/Snowflake) that enable BI and data science use cases (e.g., segmentation, provider matching, engagement analytics).
  • Incident participation and support:
    Participate in on‑call or escalation routines for high‑impact data incidents; assist with root‑cause analysis, remediation, and clear stakeholder communications.
  • Process automation:
    Codify business rules and automate recurring cycles (e.g., monthly finance handoffs, audit logs) to reduce manual intervention and operational risk.
  • Data security and governance:
    Implement privacy‑conscious practices such as masking and de‑identification; contribute to safe, compliant access across the stack and external data exchanges (a pseudonymization sketch follows the list).
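
As a hedged illustration of the pipeline and orchestration items above, here is a minimal Prefect sketch with task‑level retries and a dbt transformation step; the bucket, file, and model selector names are hypothetical placeholders, not Carrot’s actual pipeline.

import subprocess

from prefect import flow, task, get_run_logger


@task(retries=3, retry_delay_seconds=60)
def extract_partner_file(bucket: str, key: str) -> list[dict]:
    """Pull a partner eligibility file; retried up to three times on failure."""
    get_run_logger().info("Extracting s3://%s/%s", bucket, key)
    # Placeholder: real code would read from S3 (e.g., boto3) and parse rows.
    return [{"member_id": "123", "status": "active"}]


@task(retries=2, retry_delay_seconds=300)
def run_dbt_models(selector: str) -> None:
    """Rebuild downstream dbt models; a non-zero exit raises and triggers a retry."""
    subprocess.run(["dbt", "run", "--select", selector], check=True)


@flow(name="eligibility-feed")
def eligibility_feed(bucket: str = "example-partner-drop", key: str = "elig.csv"):
    rows = extract_partner_file(bucket, key)
    # Placeholder: a load step (e.g., a Snowflake connector write) would land
    # the rows in the warehouse before dbt rebuilds the downstream models.
    get_run_logger().info("Parsed %d rows; handing off to dbt", len(rows))
    run_dbt_models("marts.member_engagement")


if __name__ == "__main__":
    eligibility_feed()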
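
And as a small sketch of the masking and de‑identification bullet, assuming keyed (HMAC) pseudonymization is the chosen approach; the key handling and field names here are illustrative only.

import hashlib
import hmac
import os


def pseudonymize(member_id: str, key: bytes) -> str:
    """Stable, non-reversible token for a member ID, so joins still work."""
    return hmac.new(key, member_id.encode("utf-8"), hashlib.sha256).hexdigest()


def mask_email(email: str) -> str:
    """Keep the domain for aggregate analytics; hide the local part."""
    local, _, domain = email.partition("@")
    return f"{local[:1]}***@{domain}" if domain else "***"


if __name__ == "__main__":
    # In practice the key would come from a secrets manager; an environment
    # variable is a stand-in for this sketch.
    key = os.environ.get("PSEUDONYM_KEY", "dev-only-key").encode("utf-8")
    print(pseudonymize("member-123", key))
    print(mask_email("jane.doe@example.com"))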

Qualifications
  • Proficiency with Snowflake, dbt, and Python for data modeling, transformations, and pipeline development in production environments.
  • Strong SQL skills for complex querying and performance optimization across large datasets.
  • Experience building and maintaining automated…