
Sr. Data Engineer

Job in Vienna, Fairfax County, Virginia, 22184, USA
Listing for: SteerBridge
Full Time position
Listed on 2026-01-02
Job specializations:
  • IT/Tech
    Data Engineer, Cloud Computing
Salary/Wage Range: USD 116,000 - 126,000 per year
Job Description & How to Apply Below

Steer Bridge Strategies is a CVE-Verified Service-Disabled, Veteran-Owned Small Business (SDVOSB) delivering a broad spectrum of professional services to the U.S. Government and private sector. Backed by decades of hands-on experience in federal acquisition and procurement, we provide agile, best-in-class commercial solutions that drive mission success.

Our strength lies in our people—especially the veterans whose leadership, discipline, and dedication shape everything we do. At Steer Bridge, we don’t just hire talent—we empower it, creating meaningful career paths for those who have served and those who share our commitment to excellence.

Role

Steer Bridge seeks a highly skilled and motivated Senior Data Engineer to align data solutions with business requirements by planning and managing data infrastructure and strategy for our F-35 AI/ML Maintenance, Sustainment, and Deployment Planning Project. As the most advanced fighter jet in the world, the F-35 strengthens national security, enhances global partnerships, and powers economic growth.

Our F-35 Project is at the forefront of applying advanced computational analytics to revolutionize supply chain management in the aerospace industry. Our team is dedicated to harnessing the power of AI/ML to increase parts availability and reduce maintenance wait times, ultimately maximizing aircraft availability. In collaboration with the National Center for Manufacturing Sciences (NCMS), we are on a mission to deliver exceptional solutions that will redefine operational readiness for the F-35 program and beyond.

Responsibilities
  • In this role, you will be responsible for performing data engineering tasks within the existing systems of record, which span multiple databases. Your mission will be to enhance and optimize data entry, management, and extraction within these databases to ensure usability within our proprietary system. Data management activities include performing data quality checks, analyzing and presenting data, and documenting the process. The ideal candidate is a quick learner who is curious, innovative, and results-oriented, with strong interpersonal skills.
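The data quality checks described above might look something like this minimal sketch in pandas (one of the tools listed below); the column names and checks are hypothetical, chosen only to illustrate the kind of validation involved:

```python
import pandas as pd

# Hypothetical parts-inventory records; real column names will differ.
df = pd.DataFrame({
    "part_number": ["A100", "A100", "B200", None],
    "on_hand_qty": [5, 5, -3, 10],
})

# Basic quality checks: missing keys, duplicate rows, out-of-range values.
issues = {
    "missing_part_number": int(df["part_number"].isna().sum()),
    "duplicate_rows": int(df.duplicated().sum()),
    "negative_qty": int((df["on_hand_qty"] < 0).sum()),
}
print(issues)  # each nonzero count flags records needing review
```

Checks like these would typically run at pipeline boundaries, with the counts logged and surfaced before bad records reach downstream consumers.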
Benefits
  • Health insurance
  • Dental insurance
  • Vision insurance
  • Life insurance
  • 401(k) Retirement Plan with matching
  • Paid Time Off
  • Paid Federal Holidays
Required
  • Must be a U.S. Citizen.
  • Bachelor’s degree or above in Systems Engineering, Computer Science, or a related field.
  • An active security clearance or the ability to obtain one is required.
  • Minimum of 7 years of experience, to include:
  • Experience building data pipelines using advanced analytics tools, platforms, and Python.
  • Experience in scripting, tooling, and automating large-scale computing environments.
  • Extensive experience with major tools such as Python, Pandas, PySpark, NumPy, SciPy, SQL, and Git; minor experience with TensorFlow, PyTorch, and scikit-learn.
Data Modeling and Design
  • Advanced data modeling (conceptual, logical, and physical) with emphasis on scalability and maintainability.
  • Strong understanding of database paradigms (relational, NoSQL, graph, time-series, and document-based).
  • Expertise with modern data warehousing platforms (Redshift, Snowflake, BigQuery).
  • Deep understanding of dimensional modeling (star/snowflake schemas) and data vault techniques.
  • Experience designing for both OLTP and OLAP workloads.
  • Proficiency with schema evolution, metadata-driven pipelines, and data versioning strategies.
  • Implementing data retention, archival, and lifecycle policies.
  • Project experience: delivered optimized, production-grade data models supporting analytics, reporting, and ML workflows, aligning with established architecture and performance standards.
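The star-schema modeling mentioned above can be sketched in miniature with SQLite from the Python standard library; the table and column names here are hypothetical stand-ins for a real sustainment-data model:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Dimension table: descriptive attributes keyed by a surrogate key.
cur.execute("""
    CREATE TABLE dim_part (
        part_key INTEGER PRIMARY KEY,
        part_number TEXT,
        description TEXT
    )
""")

# Fact table: numeric measures plus foreign keys into the dimensions.
cur.execute("""
    CREATE TABLE fact_maintenance (
        event_id INTEGER PRIMARY KEY,
        part_key INTEGER REFERENCES dim_part(part_key),
        wait_time_hours REAL
    )
""")

cur.execute("INSERT INTO dim_part VALUES (1, 'A100', 'Hydraulic pump')")
cur.executemany(
    "INSERT INTO fact_maintenance VALUES (?, ?, ?)",
    [(1, 1, 12.5), (2, 1, 7.5)],
)

# Typical OLAP query: aggregate the facts, slicing by dimension attributes.
avg_wait = cur.execute("""
    SELECT d.part_number, AVG(f.wait_time_hours)
    FROM fact_maintenance f JOIN dim_part d USING (part_key)
    GROUP BY d.part_number
""").fetchone()
print(avg_wait)  # ('A100', 10.0)
```

The same fact/dimension split scales up directly to warehouse platforms such as Redshift, Snowflake, or BigQuery.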
Data Pipeline Development
  • Hands-on experience with distributed processing tools (Apache Kafka, Airflow, Spark, Flink, NiFi).
  • Skilled in building and orchestrating batch and real-time pipelines on cloud platforms (AWS Glue, GCP Dataflow, Azure Data Factory).
  • Deep understanding of incremental processing, idempotency, schema evolution, and backfill logic.
  • Proficient in pipeline automation, observability, and monitoring (metrics, logging, alerting).
  • Strong Python development for ETL — modular, testable, reusable,…
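The idempotency and backfill logic called for above can be sketched as an upsert keyed on a business identifier, so replaying a batch (e.g. during a backfill) never duplicates rows; the table and key names here are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE sorties (
        sortie_id TEXT PRIMARY KEY,
        flight_hours REAL
    )
""")

def load_batch(rows):
    # Upsert: re-running the same batch leaves the table unchanged,
    # which makes the pipeline safe to retry or backfill.
    cur.executemany(
        """INSERT INTO sorties (sortie_id, flight_hours) VALUES (?, ?)
           ON CONFLICT(sortie_id) DO UPDATE
           SET flight_hours = excluded.flight_hours""",
        rows,
    )

batch = [("S-001", 2.5), ("S-002", 1.0)]
load_batch(batch)
load_batch(batch)  # idempotent replay: no duplicate rows

count = cur.execute("SELECT COUNT(*) FROM sorties").fetchone()[0]
print(count)  # 2
```

In production the same pattern appears as merge/upsert steps in orchestrated pipelines (Airflow, Glue, Dataflow), where any task may be retried.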