
Senior Data Engineer, Operations

Job in Austin, Travis County, Texas, 78716, USA
Listing for: Gigster
Full Time position
Listed on 2026-02-16
Job specializations:
  • IT/Tech
    Data Engineer, Data Analyst
Salary/Wage Range or Industry Benchmark: 80,000 - 100,000 USD yearly
Job Description & How to Apply Below
Position: Senior Data Engineer, Operations (USA)

Virtasant is a global technology services company with a network of over 4,000 technology professionals across 130+ countries. We specialize in cloud architecture, infrastructure, migration, and optimization, helping enterprises scale efficiently while maintaining cost control.

Our clients range from Fortune 500 companies to fast-growing startups, relying on us to build high-performance infrastructure, optimize cloud environments, and enable continuous delivery at scale.

About the Role

We're seeking a Senior Data Engineer - Operations to support our data platform, with a strong focus on triaging, debugging, and operating production data pipelines. This role sits within the Data Platform Operations pillar and is responsible for the day-to-day health, reliability, and correctness of ingestion pipelines, transformations, and analytics workflows.

You’ll work hands‑on across ingestion, orchestration, dbt transformations, and medallion‑layer data models, partnering closely with other data and analytics engineers and DevOps to ensure timely resolution of data issues and smooth platform operations.

What You’ll Do
  • Build and maintain automation, scripts, and lightweight tooling to support operational workflows, including pipeline triage, data validation, backfills, reprocessing, and quality checks (a small example of such a check follows this list). Improve self‑service and reduce manual operational toil.
  • Own operational support for ingestion and transformation pipelines built on Airflow, Spark, dbt, Kafka, and Snowflake (or similar). Triage failed jobs, diagnose data issues, perform backfills, and coordinate fixes across ingestion, transformation, and analytics layers.
  • Monitor pipeline health, data freshness, and quality metrics across medallion layers. Investigate data anomalies, schema drift, and transformation failures, and drive incidents to resolution through root‑cause analysis and corrective actions.
  • Act as the primary interface between Data Platform, Analytics Engineering, and downstream consumers during operational issues. Communicate impact, coordinate fixes, and ensure timely resolution of data incidents.
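
As a concrete illustration of the validation and freshness checks described above, here is a minimal Python sketch. It assumes a DB-API-style connection (shown with snowflake-connector-python, one of the platforms the stack mentions), a hypothetical orders table with a loaded_at timestamp column, and an illustrative six-hour SLA; it is a sketch of the pattern, not the platform's actual tooling.

```python
# Minimal sketch of an operational data-freshness check. Table, column,
# SLA, and connection details below are assumptions for illustration only.
import datetime as dt

FRESHNESS_SLA = dt.timedelta(hours=6)  # assumed SLA, not from the posting


def check_freshness(conn, table: str, ts_column: str) -> bool:
    """Return True if the newest row in `table` is within the freshness SLA."""
    cur = conn.cursor()
    cur.execute(f"SELECT MAX({ts_column}) FROM {table}")
    latest = cur.fetchone()[0]
    if latest is None:
        print(f"[ALERT] {table}: table is empty")
        return False
    if latest.tzinfo is None:  # normalize naive timestamps to UTC
        latest = latest.replace(tzinfo=dt.timezone.utc)
    lag = dt.datetime.now(dt.timezone.utc) - latest
    if lag > FRESHNESS_SLA:
        print(f"[ALERT] {table}: latest row is {lag} old (SLA: {FRESHNESS_SLA})")
        return False
    print(f"[OK] {table}: latest row is {lag} old")
    return True


if __name__ == "__main__":
    # Assumed driver and placeholder credentials; swap in your platform's connector.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="YOUR_ACCOUNT", user="YOUR_USER", password="YOUR_PASSWORD",
        warehouse="OPS_WH", database="ANALYTICS", schema="SILVER",
    )
    check_freshness(conn, "orders", "loaded_at")
```

In practice, a check like this would be wired into the orchestration layer or a small CLI so on-call engineers can run it quickly during triage.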
What We’re Looking For
  • Must live in the contiguous United States and have all necessary documentation to work under an independent contractor agreement. We cannot offer sponsorships or sponsorship transfers; H1B, OPT, EAD, or CPT visas are not considered.
  • 7+ years of experience in data engineering, analytics engineering, or software development, with significant experience operating and supporting production data pipelines.
  • Strong programming skills in Python and SQL on at least one major data platform (Snowflake, BigQuery, Redshift, or similar).
  • Experience supporting schema evolution, data contracts, and downstream consumers in production environments.
  • Strong experience triaging, debugging, and maintaining dbt models, including understanding dependencies across medallion layers (bronze/silver/gold).
  • Experience with streaming, distributed compute, or S3‑based table formats (Spark, Kafka, Iceberg/Delta/Hudi).
  • Experience with schema governance, metadata systems, and data quality frameworks.
  • Hands‑on experience operating and debugging orchestration workflows (Airflow, Dagster, Prefect), including retries, backfills, and dependency management (see the DAG sketch after this list).
  • Solid grasp of CI/CD, Docker, and at least 2 years of experience with AWS.
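
For a rough picture of the orchestration patterns this list refers to (retries, backfills, and dependency management around dbt), here is a hedged Airflow DAG sketch in Python. The DAG id, task names, ingestion script, and dbt selector are hypothetical, and it assumes Airflow 2.4+ with a dbt project available on the worker; it is not the client's actual pipeline.

```python
# Hedged sketch of a backfill-friendly Airflow DAG with retries and a
# dbt transformation step. All names and commands here are hypothetical.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

default_args = {
    "retries": 3,                          # auto-retry transient failures
    "retry_delay": timedelta(minutes=10),
}

with DAG(
    dag_id="silver_orders_daily",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=True,          # lets `airflow dags backfill` replay missed runs
    max_active_runs=1,     # keep backfills from overlapping
    default_args=default_args,
) as dag:
    ingest = BashOperator(
        task_id="ingest_raw_orders",
        bash_command="python ingest_orders.py --ds {{ ds }}",  # hypothetical script
    )
    transform = BashOperator(
        task_id="dbt_run_silver",
        bash_command="dbt run --select tag:silver --vars '{run_date: {{ ds }}}'",
    )
    ingest >> transform  # downstream dbt step waits on ingestion
```

With catchup enabled, a targeted reprocess of a date range can then be kicked off with the standard CLI, for example: airflow dags backfill -s 2025-06-01 -e 2025-06-03 silver_orders_daily.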
Preferred / Nice‑to‑haves
  • Experience participating in on‑call rotations, incident response, or data operations teams.
  • Experience with data observability, data catalog, or metadata management tools.
  • Experience working with healthcare data (X12, FHIR).
  • Understanding of authentication/authorization (OAuth2, JWT, SSO).
Why This Role is Exciting

This is a fast‑paced, high‑pressure role where you can learn a lot. If that makes your eyes light up, please apply! If you build tenure in this role, the growth potential is substantial: you'll become an SME in many key areas for our client and work closely with Analytics Engineers and downstream reporting tools.

Our Recruitment Process
  • Technical Interview (45 min)
  • Screening interview with the client's hiring manager (30 min)
  • Client technical interview (45 min)
  • We strive to move efficiently between steps so the recruitment process is as fast as possible.
What We Offer
  • Fully remote within the contiguous United States, full‑time (40h/week)
  • Stable, long‑term independent contractor agreement
  • Work hours: US Eastern Time office hours
