Senior Data Engineer, Operations
Listed on 2026-02-16
IT/Tech
Data Engineer, Data Analyst
Virtasant is a global technology services company with a network of over 4,000 technology professionals across 130+ countries. We specialize in cloud architecture, infrastructure, migration, and optimization, helping enterprises scale efficiently while maintaining cost control.
Our clients range from Fortune 500 companies to fast-growing startups, relying on us to build high-performance infrastructure, optimize cloud environments, and enable continuous delivery at scale.
About the Role
We're seeking a Senior Data Engineer - Operations to support our data platform, with a strong focus on triaging, debugging, and operating production data pipelines. This role sits within the Data Platform Operations pillar and is responsible for the day-to-day health, reliability, and correctness of ingestion pipelines, transformations, and analytics workflows.
You’ll work hands‑on across ingestion, orchestration, dbt transformations, and medallion‑layer data models, partnering closely with other data and analytics engineers and DevOps to ensure timely resolution of data issues and smooth platform operations.
What You’ll Do
- Build and maintain automation, scripts, and lightweight tooling to support operational workflows, including pipeline triage, data validation, backfills, reprocessing, and quality checks. Improve self‑service and reduce manual operational toil.
- Own operational support for ingestion and transformation pipelines built on Airflow, Spark, dbt, Kafka, and Snowflake (or similar). Triage failed jobs, diagnose data issues, perform backfills, and coordinate fixes across ingestion, transformation, and analytics layers.
- Monitor pipeline health, data freshness, and quality metrics across medallion layers. Investigate data anomalies, schema drift, and transformation failures, and drive incidents to resolution through root‑cause analysis and corrective actions.
- Act as the primary interface between Data Platform, Analytics Engineering, and downstream consumers during operational issues. Communicate impact, coordinate fixes, and ensure timely resolution of data incidents.
What We’re Looking For
- Must live in the contiguous United States and have all necessary documentation to work under an independent contractor agreement. We cannot offer sponsorship or sponsorship transfers; H-1B, OPT, EAD, and CPT visas are not considered.
- 7+ years of experience in data engineering, analytics engineering, or software development, with significant experience operating and supporting production data pipelines.
- Strong programming skills in Python and SQL, with experience on at least one major data platform (Snowflake, BigQuery, Redshift, or similar).
- Experience supporting schema evolution, data contracts, and downstream consumers in production environments.
- Strong experience triaging, debugging, and maintaining dbt models, including understanding dependencies across medallion layers (bronze/silver/gold).
- Experience with streaming, distributed compute, or S3‑based table formats (Spark, Kafka, Iceberg/Delta/Hudi).
- Experience with schema governance, metadata systems, and data quality frameworks.
- Hands‑on experience operating and debugging orchestration workflows (Airflow, Dagster, Prefect), including retries, backfills, and dependency management.
- Solid grasp of CI/CD and Docker, plus at least 2 years of experience with AWS.
- Experience participating in on‑call rotations, incident response, or data operations teams.
- Experience with data observability, data catalog, or metadata management tools.
- Experience working with healthcare data (X12, FHIR).
- Understanding of authentication/authorization (OAuth2, JWT, SSO).
This is a fast‑paced, high‑pressure role where you can learn a lot. If that excites you, please apply! If you build tenure in this role, the potential is endless: you'll become an SME in many key areas for our client, working alongside Analytics Engineers and with downstream reporting tools.
Our Recruitment Process
- Technical interview (45 min)
- Screening interview with the client's hiring manager (30 min)
- Client technical interview (45 min)
- We strive to move efficiently between steps to keep the recruitment process as fast as possible.
Contract Details
- Fully remote within the contiguous United States, full‑time (40h/week)
- Stable, long‑term independent contract agreement
- Work hours: US Eastern Time office hours