Innovation and Automation Specialist
Listed on 2025-12-23
IT/Tech
Data Engineer, Cloud Computing
BT-162 - Innovation and Automation Specialist
Skill Level: Mid
Location: Chantilly/Herndon
Role Description:
As a Data Engineering Specialist on the Innovation and Automation team, you will serve as a subject matter expert, blending deep data engineering expertise with a passion for automation. You will not build individual data pipelines for business users; instead, you will build the factory that produces them. Your mission is to design, develop, and implement the reusable frameworks, automated patterns, and core tooling that our data engineering teams will use to build their own pipelines faster, more reliably, and more consistently.
This is a highly technical, hands-on role for a problem-solver who wants to act as a force multiplier for the entire data organization.
Responsibilities:
- Act as a technical expert on the design and implementation of automated data engineering solutions.
- Develop and maintain a library of standardized, reusable ETL/ELT pipeline templates using Python, SQL, and platforms like Databricks or Snowflake.
- Engineer and implement robust, automated data quality and testing frameworks (e.g., Great Expectations) that are embedded within the core pipeline templates (a brief illustrative sketch of this pattern follows this list).
- Contribute to the development of Infrastructure-as-Code (IaC) modules (Terraform) for the automated provisioning of data infrastructure.
- Enhance and optimize CI/CD for data (DataOps) pipelines, ensuring seamless and reliable deployment of data workflows.
- Serve as an escalation point for the most complex data engineering and automation challenges, providing expert-level troubleshooting and guidance to other engineers.
- Mentor other data engineers on automation best practices, code standards, and the use of the frameworks you build.
- Research and prototype cutting-edge data engineering and automation technologies to drive continuous improvement.
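To make the "template with an embedded quality gate" pattern above concrete, here is a minimal illustrative sketch. The function names, parameters, and the simple pandas-based check are hypothetical stand-ins, not the team's actual framework; in practice the validation step would typically be supplied by a dedicated tool such as Great Expectations.

```python
# Hypothetical sketch of a reusable pipeline template with an embedded
# data quality gate. Names and structure are illustrative assumptions,
# not the team's actual framework.
from dataclasses import dataclass
from typing import Callable

import pandas as pd


@dataclass
class QualityCheck:
    """A single data quality rule applied to the extracted frame."""
    name: str
    rule: Callable[[pd.DataFrame], bool]


def run_pipeline(
    extract: Callable[[], pd.DataFrame],
    transform: Callable[[pd.DataFrame], pd.DataFrame],
    load: Callable[[pd.DataFrame], None],
    checks: list[QualityCheck],
) -> None:
    """Standardized ELT flow: every pipeline built from this template
    passes through the same embedded quality gate before loading."""
    df = extract()

    # Embedded quality gate -- in practice this is where a framework
    # such as Great Expectations would be invoked.
    failures = [c.name for c in checks if not c.rule(df)]
    if failures:
        raise ValueError(f"Data quality checks failed: {failures}")

    load(transform(df))


# Example usage with toy callables (purely illustrative).
if __name__ == "__main__":
    run_pipeline(
        extract=lambda: pd.DataFrame({"id": [1, 2, 3], "amount": [10.0, 5.5, 7.25]}),
        transform=lambda df: df.assign(amount_cents=(df["amount"] * 100).astype(int)),
        load=lambda df: print(df.to_string(index=False)),
        checks=[QualityCheck("id_not_null", lambda df: df["id"].notna().all())],
    )
```

The point of the sketch is that individual pipelines supply only their extract, transform, and load callables, while the quality gate and overall flow live once in the shared template.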
Required Qualifications:
- 5+ years of hands-on experience in data engineering.
- Expert-level programming skills in Python and advanced SQL.
- Proven, in-depth experience building and optimizing data pipelines in a cloud environment (AWS, Azure) on platforms like Databricks or Snowflake.
- Strong, hands-on experience with Infrastructure-as-Code (IaC) using Terraform.
- Demonstrable experience with CI/CD principles and tools (e.g., GitLab CI, Jenkins, GitHub Actions) applied to data workflows.
- Deep understanding of modern data architecture, data modeling, and software engineering best practices.
Preferred Qualifications:
- Experience in a DevOps or Site Reliability Engineering (SRE) role.
- Direct experience developing and operationalizing a "pipeline factory" or similar framework.
- Familiarity with data orchestration tools (e.g., Airflow) and containerization (Docker, Kubernetes).
- Proven ability to diagnose and resolve complex performance, data quality, and system-level issues.