Sr. Data Architect
Listed on 2026-02-19
IT/Tech
Data Engineer, Cloud Computing
Steer Bridge Strategies is a modern technology company delivering innovative, mission‑focused solutions to the U.S. Government and private sector. Leveraging deep expertise in federal acquisition, digital transformation, and emerging technologies, we deliver agile, commercial‑grade capabilities that accelerate operational effectiveness and drive measurable mission success. At the core of Steer Bridge is our people—especially the veterans whose leadership, problem‑solving mindset, and commitment to excellence elevate every project we support.
We don’t simply hire exceptional talent; we cultivate it, creating meaningful career pathways for veterans, military spouses, and professionals who share our passion for advancing technology and strengthening the missions we serve.
We are seeking a highly skilled Sr. Data Architect to support operations and sustainment of the F-35 and C-130 aircraft. This role involves designing, implementing, and managing data systems that support aircraft maintenance, logistics, performance analysis, and mission readiness. The ideal candidate will have experience in aerospace data systems, strong analytical skills, and a deep understanding of data governance in a defense environment.
Benefits:
- Health insurance
- Dental insurance
- Vision insurance
- Life insurance
- 401(k) Retirement Plan with matching
- Paid Time Off
- Paid Federal Holidays
Requirements:
- Must be a U.S. citizen.
- Master’s degree or above in Systems Engineering, Computer Science, or a related field.
- An active security clearance or the ability to obtain one is required.
- Minimum of 10 years of experience, to include:
- Experience in data management, utilizing advanced analytics tools and platforms, and Python.
- Experience with data warehousing consulting/engineering or related technologies (Redshift, Databricks, BigQuery, OADW, Apache Hive, Apache Lucene).
- Experience in scripting, tooling, and automating large-scale computing environments.
- Extensive experience with major tools such as Python, Pandas, PySpark, NumPy, SciPy, SQL, and Git; minor experience with TensorFlow, PyTorch, and scikit-learn.
- Data modeling (conceptual, logical, and physical)
- Database schema design
- Understanding of different database paradigms (relational, NoSQL, graph databases, etc.)
- ETL (Extract, Transform, Load) processes and tools
- Experience with modern data warehousing solutions (e.g., Redshift, Snowflake, BigQuery)
- Understanding of dimensional modeling (star/snowflake schemas) and data vault techniques.
- Experience designing for both OLTP and OLAP workloads.
- Familiarity with metadata-driven design and schema evolution in data systems.
- Experience defining data SLAs and lifecycle management policies.
- Project experience: Designing and implementing scalable data architectures that support business intelligence, analytics, and machine learning workflows.
- Project experience: Migrating legacy data infrastructure to the cloud or developing new data platforms using cloud services, with a focus on cost efficiency and scalability.
Data Pipeline Development
- Proficiency in tools like Apache Kafka, Airflow, Spark, Flink, or NiFi
- Experience with cloud-based data services (AWS Glue, Google Cloud Dataflow, Azure Data Factory)
- Real-time and batch data processing
- Automation and monitoring of data pipelines
- Strong understanding of incremental processing, idempotency, and backfill strategies.
- Knowledge of workflow dependency management, retries, and alerting.
- Experience writing modular, testable, and reusable Python-based ETL code.
- Project experience: Leading the development of highly available, fault-tolerant, and scalable data pipelines, integrating multiple data sources, and ensuring data quality.
- Expertise in cloud environments (AWS, GCP, Azure)
- Understanding of cloud-based storage (S3, Blob Storage), databases (RDS, DynamoDB), and compute resources
- Implementing cloud-native data solutions (Data Lake, Data Warehouse, Data Mesh)
- Experience with cost monitoring and optimization for data workloads.
- Familiarity with hybrid and multi-cloud architectures.
- Understanding of serverless data patterns (e.g., Lambda + S3 + Athena, Cloud Functions + BigQuery).
Ex…