
Sr. Data Architect

Job in Vienna, Fairfax County, Virginia, 22184, USA
Listing for: SteerBridge
Full Time position
Listed on 2026-01-02
Job specializations:
  • IT/Tech
    Data Engineer, Cloud Computing
Salary Range: 125,000 to 150,000 USD per year
Job Description & How to Apply Below

Steer Bridge Strategies is a CVE-Verified Service-Disabled, Veteran-Owned Small Business (SDVOSB) delivering a broad spectrum of professional services to the U.S. Government and private sector. Backed by decades of hands-on experience in federal acquisition and procurement, we provide agile, best-in-class commercial solutions that drive mission success.

Our strength lies in our people—especially the veterans whose leadership, discipline, and dedication shape everything we do. At Steer Bridge, we don’t just hire talent—we empower it, creating meaningful career paths for those who have served and those who share our commitment to excellence.

Overview

We are seeking a highly skilled Sr. Data Architect to support operations and sustainment of the F-35 and C-130 aircraft. This role involves designing, implementing, and managing data systems that support aircraft maintenance, logistics, performance analysis, and mission readiness. The ideal candidate will have experience in aerospace data systems, strong analytical skills, and a deep understanding of data governance in a defense environment.

Benefits
  • Health insurance
  • Dental insurance
  • Vision insurance
  • Life Insurance
  • 401(k) Retirement Plan with matching
  • Paid Time Off
  • Paid Federal Holidays
Required
  • Must be a U.S. Citizen.
  • Master's Degree or above in Systems Engineering, Computer Science, or a related field.
  • An active security clearance or the ability to obtain one is required.
  • Minimum of 10 years of experience, to include:
  • Experience in data management using advanced analytics tools, platforms, and Python.
  • Experience with data warehousing consulting/engineering or related technologies (Redshift, Databricks, BigQuery, OADW, Apache Hive, Apache Lucene).
  • Experience in scripting, tooling, and automating large-scale computing environments.
  • Extensive experience with major tools such as Python, Pandas, PySpark, NumPy, SciPy, SQL, and Git;
    minor experience with TensorFlow, PyTorch, and Scikit-learn.
Data Architecture and Design
  • Data modeling (conceptual, logical, and physical)
  • Database schema design
  • Understanding of different database paradigms (relational, NoSQL, graph databases, etc.)
  • ETL (Extract, Transform, Load) processes and tools
  • Experience with modern data warehousing solutions (e.g., Redshift, Snowflake, BigQuery)
  • Understanding of dimensional modeling (star/snowflake schemas) and data vault techniques.
  • Experience designing for both OLTP and OLAP workloads.
  • Familiarity with metadata-driven design and schema evolution in data systems.
  • Experience defining data SLAs and lifecycle management policies.
  • Project Experience: Designing and implementing scalable data architectures that support business intelligence, analytics, and machine learning workflows.
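To illustrate the dimensional-modeling skills listed above, here is a minimal star-schema sketch for aircraft maintenance analytics. This is an illustrative example only, not the employer's actual schema; all table and column names (dim_aircraft, fact_maintenance, etc.) are hypothetical.

```python
import sqlite3

# Hypothetical star schema: one fact table joined to two dimension tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_aircraft (
        aircraft_key INTEGER PRIMARY KEY,
        tail_number  TEXT,
        platform     TEXT          -- e.g. 'F-35' or 'C-130'
    );
    CREATE TABLE dim_date (
        date_key INTEGER PRIMARY KEY,  -- YYYYMMDD surrogate key
        iso_date TEXT
    );
    CREATE TABLE fact_maintenance (
        aircraft_key   INTEGER REFERENCES dim_aircraft(aircraft_key),
        date_key       INTEGER REFERENCES dim_date(date_key),
        labor_hours    REAL,
        parts_cost_usd REAL
    );
""")
conn.execute("INSERT INTO dim_aircraft VALUES (1, 'AF-001', 'F-35')")
conn.execute("INSERT INTO dim_date VALUES (20260102, '2026-01-02')")
conn.execute("INSERT INTO fact_maintenance VALUES (1, 20260102, 6.5, 1200.0)")

# A typical analytic rollup against the star schema: labor hours by platform.
row = conn.execute("""
    SELECT a.platform, SUM(f.labor_hours)
    FROM fact_maintenance f
    JOIN dim_aircraft a ON a.aircraft_key = f.aircraft_key
    GROUP BY a.platform
""").fetchone()
print(row)  # ('F-35', 6.5)
```

The fact table holds measurable events (labor hours, parts cost) keyed to dimensions that describe them; analytic queries group facts by dimension attributes.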
Data Pipeline Development
  • Proficiency in tools like Apache Kafka, Airflow, Spark, Flink, or NiFi
  • Experience with cloud-based data services (AWS Glue, Google Cloud Dataflow, Azure Data Factory)
  • Real-time and batch data processing
  • Automation and monitoring of data pipelines
  • Strong understanding of incremental processing, idempotency, and backfill strategies.
  • Knowledge of workflow dependency management, retries, and alerting.
  • Experience writing modular, testable, and reusable Python-based ETL code.
  • Project Experience: Leading the development of highly available, fault-tolerant, and scalable data pipelines, integrating multiple data sources, and ensuring data quality.
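The idempotency and backfill skills called out above can be sketched with a toy batch-load step: re-running the same batch (for example, during a backfill) must leave the target in the same state, not duplicate rows. This is a generic illustration, not the employer's pipeline; the record_id field and in-memory "warehouse" are hypothetical stand-ins.

```python
# Idempotent batch load: upserts keyed by a natural record id, so
# replaying a batch (retry or backfill) is a safe no-op.

def load_batch(target: dict, batch: list[dict]) -> dict:
    """Upsert records into `target` keyed by 'record_id' (hypothetical field)."""
    for rec in batch:
        target[rec["record_id"]] = rec  # last write wins; re-runs change nothing
    return target

warehouse: dict = {}
batch = [
    {"record_id": "F35-001", "status": "mission_capable"},
    {"record_id": "C130-007", "status": "in_maintenance"},
]

load_batch(warehouse, batch)
load_batch(warehouse, batch)  # replayed batch: identical end state
print(len(warehouse))  # 2
```

In a real warehouse the same property is usually achieved with a merge/upsert on a business key or by overwriting a deterministic partition, rather than an append-only insert.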
Cloud Platforms and Services
  • Expertise in cloud environments (AWS, GCP, Azure)
  • Understanding of cloud-based storage (S3, Blob Storage), databases (RDS, DynamoDB), and compute resources
  • Implementing cloud-native data solutions (Data Lake, Data Warehouse, Data Mesh)
  • Experience with cost monitoring and optimization for data workloads.
  • Familiarity with hybrid and multi-cloud architectures.
  • Understanding of serverless data patterns (e.g., Lambda + S3 + Athena, Cloud Functions + BigQuery).

Project Experience: Migrating legacy data infrastructure to the cloud or developing new data platforms using cloud services, with a focus on cost efficiency and scalability.

Big Data Technologies
  • Experience with big data ecosystems (Hadoop, HDFS, Hive, Spark)
  • Distributed computing, parallel processing, and…