IT Data Engineer
Listed on 2026-02-16
IT/Tech - Engineering
Location: Amarillo, TX - Pantex Plant
Job Title: IT Applications Data Engineer
Career Level From: Senior Associate
Career Level To: Senior Specialist
Organization: NSE Integration Services
Job Specialty: Service Transition
What You'll Do
A career at Pantex can offer you the opportunity to make a personal impact on our nation. We recognize that excellent employees are critical to mission success. We are seeking a Data Engineer to design, build, and optimize the high-performance data pipelines and data infrastructure needed to power enterprise-level Artificial Intelligence (AI), machine learning, and advanced analytics capabilities.
Primary duties include building and maintaining data pipelines; hands-on development of robust, scalable Extract, Load, and Transform (ELT)/Extract, Transform, and Load (ETL) processes; integrating diverse operational and analytical data sources; and ensuring the reliability and quality of all data assets used for modeling and reporting.
Additional duties may include performance tuning of centralized data platforms, implementing DataOps methodologies, and collaborating closely with Data Architects and Data Scientists to translate data requirements into fully operationalized solutions.
Core Responsibilities and Duties
- Build, test, and maintain highly scalable data pipelines for both batch and real-time data consumption using cloud-native services and distributed processing frameworks.
- Construct and manage data systems within the modern data stack (e.g., Databricks, Snowflake, or equivalent cloud data warehouse/lakehouse solutions) to ensure optimal data accessibility and performance.
- Write production‑grade code (primarily Python or Scala) to cleanse, transform, and load complex, high‑volume data into structured data models defined by the Data Architect.
- Implement and manage data quality checks, data lineage tools, and pipeline monitoring to ensure data integrity and immediate detection of failures across the data platform.
- Optimize and fine‑tune database performance, query efficiency, and cost management across data storage and processing services in a secure cloud environment.
- Support Data Science teams by providing efficient access to data, developing feature engineering workflows, and integrating model‑ready data into the Machine Learning Operations (MLOps) platform.
- Participate in defining and enforcing DataOps and DevSecOps practices for data pipeline automation, testing, and secure deployment.
- Maintain comprehensive documentation of data flows, data dictionaries, and operational runbooks for all production data infrastructure.
- Collaborate with application development teams and Application Programming Interface (API) developers to establish secure and efficient data ingestion points from source systems.
- Work with Data Architects on modeling new data sources to ensure new data assets adhere to established enterprise data standards and architectures.
What We Offer
- Meaningful work and unique opportunities to support missions vital to national and global security.
- Top‑notch, dedicated colleagues.
- Generous pay and benefits with a stable organization.
- Work‑life balance fostered through flexible work options and wellness initiatives.
Job Requirements
- Bachelor's degree in an engineering/science discipline with a minimum of 2 years of relevant experience (typical engineering/science experience ranges from 3 to 7 years); OR
- Applicants without a bachelor's degree may be considered based on a combination of at least 10 years of completed education and/or relevant experience.
- Master’s degree (MS) in a relevant technical field with minimum 3 years of relevant experience.
- Minimum of 4 years of relevant experience, with at least 3 years specifically focused on building and managing enterprise‑grade data pipelines and ELT/ETL processes.
- Expert proficiency in Structured Query Language (SQL) and Python or Scala for data manipulation, transformation, and automation.
- Demonstrated experience with distributed processing frameworks (e.g., Apache Spark, Databricks, Snowflake) and cloud‑based data services (e.g., Azure Data…