Sensitive-Data Engineer
Chantilly, Fairfax County, Virginia, 22021, USA
Listing for: Equilibriumtech
Full Time position, listed on 2026-02-19
Job specializations:
- IT/Tech: Data Engineer, Cloud Computing
Job Description
Active Top Secret/SCI Clearance with Polygraph (REQUIRED)
Are you passionate about harnessing data to solve some of the nation’s most critical challenges? Do you thrive on innovation, collaboration, and building resilient solutions in complex environments? Join a high-impact team at the forefront of national security, where your work directly supports mission success.
We're seeking a Data Engineer with a rare mix of curiosity, craftsmanship, and commitment to excellence. In this role, you’ll design and optimize secure, scalable data pipelines while working alongside elite engineers, mission partners, and data experts to unlock actionable insights from diverse datasets.
Requirements:
- Engineer robust, secure, and scalable data pipelines using Apache Spark, Apache Hudi, AWS EMR, and Kubernetes (a Spark-to-Hudi sketch follows this list)
- Maintain data provenance and access controls to ensure full lineage and auditability of mission-critical datasets
- Clean, transform, and condition data using tools such as dbt, Apache NiFi, or Pandas
- Build and orchestrate repeatable ETL workflows using Apache Airflow, Dagster, or Prefect (an Airflow sketch follows this list)
- Develop API connectors for ingesting structured and unstructured data sources
- Collaborate with data stewards, architects, and mission teams to align on data standards, quality, and integrity
- Provide advanced database administration for Oracle, PostgreSQL, MongoDB, Elasticsearch, and others
- Ingest and analyze streaming data using tools like Apache Kafka, AWS Kinesis, or Apache Flink (a Kafka consumer sketch follows this list)
- Perform real-time and batch processing on large datasets in secure cloud environments (e.g., AWS GovCloud, C2S)
- Implement and monitor data quality and validation checks using tools such as Great Expectations or Deequ (a validation sketch follows this list)
- Work across agile teams using DevSecOps practices to build resilient full-stack solutions with Python, Java, or Scala
- Experience building and maintaining data pipelines using Apache Spark, Airflow, NiFi, or dbt
- Proficiency in Python, SQL, and one or more of: Java, Scala
- Strong understanding of cloud services (especially AWS and AWS GovCloud), including S3, EC2, Lambda, EMR, Glue, Redshift, or Snowflake
- Hands-on experience with streaming frameworks such as Apache Kafka, Kafka Connect, or Flink
- Familiarity with data lakehouse formats (e.g., Apache Hudi, Delta Lake, or Iceberg)
- Experience with NoSQL and RDBMS technologies such as MongoDB, DynamoDB, PostgreSQL, or MySQL
- Ability to implement and maintain data validation frameworks (e.g., Great Expectations, Deequ)
- Comfortable working in Linux/Unix environments, using bash scripting, Git, and CI/CD tools
- Knowledge of containerization and orchestration tools like Docker and Kubernetes
- Collaborative mindset with experience working in Agile/Scrum environments using Jira, Confluence, and Git-based workflows
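As a concrete illustration of the pipeline responsibilities above, here is a minimal PySpark sketch that upserts records into an Apache Hudi table, the kind of job that might run on AWS EMR. The table name, columns, and S3 path are hypothetical placeholders, and the Spark session is assumed to have the Hudi Spark bundle on its classpath (e.g., supplied via --packages).

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hudi-ingest").getOrCreate()

# Hypothetical records: a stable key, an ordering timestamp, a partition
# column, and a payload.
df = spark.createDataFrame(
    [(1, "2026-02-19T00:00:00Z", "us-east", "alpha")],
    ["record_id", "event_ts", "region", "payload"],
)

hudi_options = {
    "hoodie.table.name": "mission_events",
    "hoodie.datasource.write.recordkey.field": "record_id",
    "hoodie.datasource.write.partitionpath.field": "region",
    "hoodie.datasource.write.precombine.field": "event_ts",
    "hoodie.datasource.write.operation": "upsert",
}

# Upsert keeps the table idempotent under replays: when two writes carry
# the same record key, the larger precombine field (event_ts) wins.
df.write.format("hudi").options(**hudi_options).mode("append").save(
    "s3://example-bucket/hudi/mission_events"  # hypothetical path
)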
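For the orchestration bullet, a minimal sketch of a repeatable ETL workflow in Apache Airflow's TaskFlow style (Airflow 2.x; the schedule argument assumes 2.4 or later). The DAG name and the extract/transform/load bodies are hypothetical placeholders.

from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2026, 1, 1), catchup=False)
def mission_etl():
    @task
    def extract() -> list[dict]:
        # Placeholder: pull raw records from a hypothetical source API.
        return [{"id": 1, "value": "raw"}]

    @task
    def transform(records: list[dict]) -> list[dict]:
        # Placeholder: clean and condition each record.
        return [{**r, "value": r["value"].upper()} for r in records]

    @task
    def load(records: list[dict]) -> None:
        # Placeholder: write conditioned records to the target store.
        print(f"loaded {len(records)} records")

    load(transform(extract()))


mission_etl()

TaskFlow infers the dependency graph from the function calls, so the same three stages rerun identically on every schedule tick.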
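For the streaming-ingestion bullet, a minimal consumer sketch using the kafka-python client; Kinesis or Flink would fill the same role. The topic name and broker address are hypothetical.

import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "mission-events",                    # hypothetical topic
    bootstrap_servers="localhost:9092",  # hypothetical broker
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

# Each message arrives deserialized; a real pipeline would validate and
# land these records (e.g., into the Hudi table above) instead of printing.
for message in consumer:
    print(message.topic, message.offset, message.value)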
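For the data-quality bullet, a minimal sketch using Great Expectations' classic pandas interface (pre-1.0 releases; the 1.x line restructured this API, so treat the exact calls as version-dependent). Column names are hypothetical.

import great_expectations as ge
import pandas as pd

df = pd.DataFrame({"record_id": [1, 2, 3], "source": ["a", "b", None]})
dataset = ge.from_pandas(df)

# Each expectation returns a result whose success flag can gate the
# pipeline, e.g., failing the orchestrating Airflow task on a miss.
checks = [
    dataset.expect_column_values_to_not_be_null("record_id"),
    dataset.expect_column_values_to_be_unique("record_id"),
]
assert all(c.success for c in checks), "data quality check failed"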