Sr. Data Architect
Listed on 2026-01-02
IT/Tech
Data Engineer, Cloud Computing
Steer Bridge Strategies is a CVE-Verified Service-Disabled, Veteran-Owned Small Business (SDVOSB) delivering a broad spectrum of professional services to the U.S. Government and private sector. Backed by decades of hands-on experience in federal acquisition and procurement, we provide agile, best-in-class commercial solutions that drive mission success.
Our strength lies in our people—especially the veterans whose leadership, discipline, and dedication shape everything we do. At Steer Bridge, we don’t just hire talent—we empower it, creating meaningful career paths for those who have served and those who share our commitment to excellence.
Overview
We are seeking a highly skilled Sr. Data Architect to support operations and sustainment of the F-35 and C-130 aircraft. This role involves designing, implementing, and managing data systems that support aircraft maintenance, logistics, performance analysis, and mission readiness. The ideal candidate will have experience in aerospace data systems, strong analytical skills, and a deep understanding of data governance in a defense environment.
Benefits
- Health insurance
- Dental insurance
- Vision insurance
- Life insurance
- 401(k) Retirement Plan with matching
- Paid Time Off
- Paid Federal Holidays
Requirements
- Must be a U.S. citizen.
- Master's degree or above in Systems Engineering, Computer Science, or a related field.
- An active security clearance or the ability to obtain one is required.
- A minimum of 10 years of experience, to include:
- Experience in data management using advanced analytics tools, platforms, and Python.
- Experience with data warehousing consulting/engineering or related technologies (Redshift, Databricks, BigQuery, OADW, Apache Hive, Apache Lucene).
- Experience in scripting, tooling, and automating large-scale computing environments.
- Extensive experience with major tools such as Python, Pandas, PySpark, NumPy, SciPy, SQL, and Git; minor experience with TensorFlow, PyTorch, and Scikit-learn.
- Data modeling (conceptual, logical, and physical)
- Database schema design
- Understanding of different database paradigms (relational, NoSQL, graph databases, etc.)
- ETL (Extract, Transform, Load) processes and tools
- Experience with modern data warehousing solutions (e.g., Redshift, Snowflake, BigQuery)
- Understanding of dimensional modeling (star/snowflake schemas) and data vault techniques.
- Experience designing for both OLTP and OLAP workloads.
- Familiarity with metadata-driven design and schema evolution in data systems.
- Experience defining data SLAs and lifecycle management policies.
- Project Experience: Designing and implementing scalable data architectures that support business intelligence, analytics, and machine learning workflows (a minimal star-schema sketch follows this section).
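To illustrate the dimensional-modeling skills listed above, here is a minimal sketch of a star schema for maintenance events, using Python's built-in sqlite3. The table and column names (dim_aircraft, fact_maintenance_event, etc.) are invented for the example and are not drawn from any actual program system.

```python
# Minimal star-schema sketch: one fact table keyed to two dimensions.
# All table and column names here are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Dimension: descriptive attributes of each aircraft.
cur.execute("""
CREATE TABLE dim_aircraft (
    aircraft_key INTEGER PRIMARY KEY,
    tail_number  TEXT NOT NULL,
    platform     TEXT NOT NULL    -- e.g. 'F-35', 'C-130'
)""")

# Dimension: calendar attributes, one row per day.
cur.execute("""
CREATE TABLE dim_date (
    date_key       INTEGER PRIMARY KEY,  -- surrogate key, e.g. 20260102
    iso_date       TEXT NOT NULL,
    fiscal_quarter TEXT NOT NULL
)""")

# Fact: one row per maintenance event; measures plus foreign keys only.
cur.execute("""
CREATE TABLE fact_maintenance_event (
    aircraft_key INTEGER REFERENCES dim_aircraft(aircraft_key),
    date_key     INTEGER REFERENCES dim_date(date_key),
    labor_hours  REAL NOT NULL,
    parts_cost   REAL NOT NULL
)""")

cur.execute("INSERT INTO dim_aircraft VALUES (1, 'AF-001', 'F-35')")
cur.execute("INSERT INTO dim_date VALUES (20260102, '2026-01-02', 'FY26Q2')")
cur.execute("INSERT INTO fact_maintenance_event VALUES (1, 20260102, 6.5, 1200.0)")

# Typical OLAP-style rollup: total labor hours by platform and quarter.
for row in cur.execute("""
    SELECT a.platform, d.fiscal_quarter, SUM(f.labor_hours)
    FROM fact_maintenance_event f
    JOIN dim_aircraft a USING (aircraft_key)
    JOIN dim_date d USING (date_key)
    GROUP BY a.platform, d.fiscal_quarter"""):
    print(row)
```

Keeping measures in the fact table and descriptive attributes in the dimensions is what makes rollup queries like the one above simple, and it is the basic trade-off behind the star/snowflake schemas and OLAP workloads named in the list.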
- Proficiency in tools like Apache Kafka, Airflow, Spark, Flink, or NiFi
- Experience with cloud-based data services (AWS Glue, Google Cloud Dataflow, Azure Data Factory)
- Real-time and batch data processing
- Automation and monitoring of data pipelines
- Strong understanding of incremental processing, idempotency, and backfill strategies.
- Knowledge of workflow dependency management, retries, and alerting.
- Experience writing modular, testable, and reusable Python-based ETL code.
- Project Experience: Leading the development of highly available, fault-tolerant, and scalable data pipelines, integrating multiple data sources, and ensuring data quality (a minimal idempotent ETL sketch follows this section).
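To make the incremental-processing, idempotency, and modular-ETL expectations above concrete, here is a minimal, standard-library-only sketch of one idempotent ETL step. The file layout, paths, and function names are hypothetical, not taken from any real pipeline.

```python
# Idempotent, incremental ETL sketch. Each batch is written to a path
# derived solely from its logical date, so re-running a day (a retry or
# a backfill) overwrites the same output instead of duplicating rows.
# All paths and names here are hypothetical.
import json
import tempfile
from datetime import date
from pathlib import Path

OUTPUT_ROOT = Path("warehouse/maintenance_events")  # hypothetical location

def extract(batch_date: date) -> list[dict]:
    # Stand-in for a real source query bounded by the batch date,
    # e.g. WHERE event_date = :batch_date (incremental, not full-table).
    return [{"tail_number": "AF-001", "labor_hours": 6.5,
             "event_date": batch_date.isoformat()}]

def transform(rows: list[dict]) -> list[dict]:
    # Small, pure function: easy to unit-test in isolation.
    return [r for r in rows if r["labor_hours"] > 0]

def load(rows: list[dict], batch_date: date) -> Path:
    # Deterministic partition path keyed by logical date = idempotent.
    out_dir = OUTPUT_ROOT / f"event_date={batch_date.isoformat()}"
    out_dir.mkdir(parents=True, exist_ok=True)
    out_path = out_dir / "part-000.json"
    # Write to a temp file, then rename atomically, so readers never
    # observe a half-written file.
    with tempfile.NamedTemporaryFile("w", dir=out_dir, delete=False) as tmp:
        for r in rows:
            tmp.write(json.dumps(r) + "\n")
    Path(tmp.name).replace(out_path)
    return out_path

def run(batch_date: date) -> Path:
    return load(transform(extract(batch_date)), batch_date)

if __name__ == "__main__":
    # Running the same logical day twice yields identical output rather
    # than duplicates, which is what makes retries and backfills safe.
    print(run(date(2026, 1, 2)))
    print(run(date(2026, 1, 2)))
```

The same date-partitioned, overwrite-on-rerun pattern is what orchestrators such as Airflow rely on when they retry or backfill a task.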
- Expertise in cloud environments (AWS, GCP, Azure)
- Understanding of cloud-based storage (S3, Blob Storage), databases (RDS, DynamoDB), and compute resources
- Implementing cloud-native data solutions (Data Lake, Data Warehouse, Data Mesh)
- Experience with cost monitoring and optimization for data workloads.
- Familiarity with hybrid and multi-cloud architectures.
- Understanding of serverless data patterns (e.g., Lambda + S3 + Athena, Cloud Functions + BigQuery).
- Project Experience: Migrating legacy data infrastructure to the cloud or developing new data platforms using cloud services, with a focus on cost efficiency and scalability (a serverless query sketch follows this section).
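As one possible instance of the Lambda + S3 + Athena pattern named above, here is a hedged sketch of a Lambda handler that runs an Athena query over data already landed in S3. The bucket, database, and table names are placeholders, and error handling is pared down to the minimum.

```python
# Sketch of a serverless query step: Lambda runs an Athena SQL query over
# files in S3 and returns the row count. The database, table, and results
# bucket below are hypothetical placeholders, not a real environment.
import time
import boto3

athena = boto3.client("athena")

DATABASE = "fleet_analytics"                     # hypothetical Glue database
RESULTS = "s3://example-athena-results/prefix/"  # hypothetical results bucket

def handler(event, context):
    # Athena queries are asynchronous: start one, then poll until it ends.
    qid = athena.start_query_execution(
        QueryString="SELECT platform, COUNT(*) FROM maintenance_events "
                    "GROUP BY platform",
        QueryExecutionContext={"Database": DATABASE},
        ResultConfiguration={"OutputLocation": RESULTS},
    )["QueryExecutionId"]

    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)[
            "QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)  # in production, prefer Step Functions over polling

    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query {qid} ended in state {state}")

    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    return {"query_id": qid, "row_count": len(rows) - 1}  # first row = header
```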
- Experience with big data ecosystems (Hadoop, HDFS, Hive, Spark)
- Distributed computing, parallel processing, and…