Sr. Data Engineer
Listed on 2026-01-02
IT/Tech
Data Engineer, Cloud Computing
Steer Bridge Strategies is a CVE-Verified Service-Disabled, Veteran-Owned Small Business (SDVOSB) delivering a broad spectrum of professional services to the U.S. Government and private sector. Backed by decades of hands-on experience in federal acquisition and procurement, we provide agile, best-in-class commercial solutions that drive mission success.
Our strength lies in our people—especially the veterans whose leadership, discipline, and dedication shape everything we do. At Steer Bridge, we don’t just hire talent—we empower it, creating meaningful career paths for those who have served and those who share our commitment to excellence.
Role
Steer Bridge seeks a highly skilled and motivated individual to join our team as a Senior Data Engineer, aligning data solutions to business requirements by planning and managing data infrastructure and strategy for our F-35 AI/ML Maintenance, Sustainment, and Deployment Planning Project. As the most advanced fighter jet in the world, the F-35 strengthens national security, enhances global partnerships, and powers economic growth.
Our F-35 Project is at the forefront of applying advanced computational analytics to revolutionize supply chain management in the aerospace industry. Our team is dedicated to harnessing the power of AI/ML to increase parts availability and reduce maintenance wait times, ultimately maximizing aircraft availability. In collaboration with the National Center for Manufacturing Sciences (NCMS), we are on a mission to deliver exceptional solutions that will redefine operational readiness for the F-35 program and beyond.
- In this role, you will be responsible for performing data engineering tasks within the existing systems of record, which span multiple databases. Your mission will be to enhance and optimize data entry, management, and extraction within these databases to ensure their usability within our proprietary system. Data management activities include performing data quality checks, analyzing and presenting data, and documenting the process. The ideal candidate is a quick learner who is curious, innovative, and results-oriented, with strong interpersonal skills.
- Health insurance
- Dental insurance
- Vision insurance
- Life Insurance
- 401(k) Retirement Plan with matching
- Paid Time Off
- Paid Federal Holidays
- Must be a U.S. Citizen.
- Bachelor’s Degree or Above in Systems Engineering, Computer Science or related field.
- An active security clearance or the ability to obtain one is required.
- Minimum of 7 years of experience, to include:
- Experience building data pipelines using advanced analytics tools and platforms, as well as Python.
- Experience in scripting, tooling, and automating large-scale computing environments.
- Extensive experience with major tools such as Python, Pandas, PySpark, NumPy, SciPy, SQL, and Git; minor experience with TensorFlow, PyTorch, and Scikit-learn.
- Advanced data modeling (conceptual, logical, and physical) with emphasis on scalability and maintainability.
- Strong understanding of database paradigms (relational, NoSQL, graph, time-series, and document-based).
- Expertise with modern data warehousing platforms (Redshift, Snowflake, BigQuery).
- Deep understanding of dimensional modeling (star/snowflake schemas) and data vault techniques.
- Experience designing for both OLTP and OLAP workloads.
- Proficiency with schema evolution, metadata-driven pipelines, and data versioning strategies.
- Implementing data retention, archival, and lifecycle policies.
- Project Experience: Delivered optimized, production-grade data models supporting analytics, reporting, and ML workflows, aligning with established architecture and performance standards.
- Hands-on experience with distributed processing tools (Apache Kafka, Airflow, Spark, Flink, NiFi).
- Skilled in building and orchestrating batch and real-time pipelines on cloud platforms (AWS Glue, GCP Dataflow, Azure Data Factory).
- Deep understanding of incremental processing, idempotency, schema evolution, and backfill logic.
- Proficient in pipeline automation, observability, and monitoring (metrics, logging, alerting).
- Strong Python development for ETL: modular, testable, reusable, …