
IT Data Engineer

Job in Amarillo, Potter County, Texas, 79161, USA
Listing for: Pantex Plant
Full Time position
Listed on 2026-02-16
Job specializations:
  • IT/Tech
    Data Engineer
  • Engineering
    Data Engineer
Salary/Wage Range or Industry Benchmark: USD 80,000 - 100,000 yearly
Job Description
Position: IT Applications Data Engineer

Location: Amarillo, TX - Pantex Plant

Job Title: IT Applications Data Engineer

Career Level From: Senior Associate

Career Level To: Senior Specialist

Organization: NSE Integration Services

Job Specialty: Service Transition

What You'll Do

A career at Pantex can offer you the opportunity to make a personal impact on our nation. We recognize that excellent employees are absolutely critical for mission success. We are seeking a Data Engineer to design, build, and optimize the high-performance data pipelines and data infrastructure necessary to power enterprise-level Artificial Intelligence (AI), machine learning, and advanced analytics capabilities.

Primary duties include building and maintaining data pipelines; hands-on development of robust, scalable Extract, Load, and Transform (ELT) / Extract, Transform, and Load (ETL) processes; integrating diverse operational and analytical data sources; and ensuring the reliability and quality of all data assets used for modeling and reporting.
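To give a concrete sense of this kind of ELT/ETL work, here is a minimal sketch in Python, assuming a CSV operational export loaded into a SQLite staging table; all file, table, and column names are hypothetical:

```python
import sqlite3

import pandas as pd

def run_etl(source_csv: str, db_path: str) -> None:
    """Minimal extract-transform-load pass: CSV export -> cleansed frame -> staging table."""
    # Extract: read the raw operational export (hypothetical file).
    df = pd.read_csv(source_csv)

    # Transform: normalize column names, then drop rows missing the key and deduplicate.
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    df = df.dropna(subset=["record_id"]).drop_duplicates(subset=["record_id"])

    # Load: append into a staging table for downstream modeling.
    with sqlite3.connect(db_path) as conn:
        df.to_sql("stg_records", conn, if_exists="append", index=False)

if __name__ == "__main__":
    run_etl("operational_export.csv", "warehouse.db")
```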

Additional duties may include performance tuning of centralized data platforms, implementing DataOps methodologies, and collaborating closely with Data Architects and Data Scientists to translate data requirements into fully operationalized solutions.

Core Responsibilities And Duties
  • Build, test, and maintain highly scalable data pipelines for both batch and real‑time data consumption using cloud‑native services and distributed processing frameworks.
  • Construct and manage data systems within the modern data stack (e.g., Databricks, Snowflake, or equivalent cloud data warehouse/lakehouse solutions) to ensure optimal data accessibility and performance.
  • Write production‑grade code (primarily Python or Scala) to cleanse, transform, and load complex, high‑volume data into structured data models defined by the Data Architect.
  • Implement and manage data quality checks, data lineage tools, and pipeline monitoring to ensure data integrity and immediate detection of failures across the data platform (a minimal sketch follows this list).
  • Optimize and fine‑tune database performance, query efficiency, and cost management across data storage and processing services in a secure cloud environment.
  • Support Data Science teams by providing efficient access to data, developing feature engineering workflows, and integrating model‑ready data into the Machine Learning Operations (MLOps) platform.
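As a rough illustration of the batch-pipeline and data-quality patterns in the list above, the following is a minimal PySpark sketch; the paths and column names are hypothetical, and a production pipeline would lean on the platform's own quality and lineage tooling:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("batch_pipeline_sketch").getOrCreate()

# Batch read from a hypothetical raw zone.
raw = spark.read.parquet("/data/raw/events/")

# Quality gate: fail fast if required fields are null, so failures surface immediately.
missing = raw.filter(F.col("event_id").isNull()).count()
if missing > 0:
    raise ValueError(f"Quality check failed: {missing} rows missing event_id")

# Transform: deduplicate on the key and stamp load time before writing to the curated zone.
curated = raw.dropDuplicates(["event_id"]).withColumn("loaded_at", F.current_timestamp())
curated.write.mode("overwrite").parquet("/data/curated/events/")
```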
Other Responsibilities And Duties
  • Participate in defining and enforcing DataOps and DevSecOps practices for data pipeline automation, testing, and secure deployment.
  • Maintain comprehensive documentation of data flows, data dictionaries, and operational runbooks for all production data infrastructure.
  • Collaborate with application development teams and Application Programming Interface (API) developers to establish secure and efficient data ingestion points from source systems (see the sketch after this list).
  • Work with Data Architects on modeling new data sources to ensure new data assets adhere to established enterprise data standards and architectures.
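For the API ingestion point mentioned above, a minimal sketch might look like the following, assuming a token-secured REST endpoint; the URL, payload shape, and landing path are hypothetical:

```python
import os

import pandas as pd
import requests

# Hypothetical token-secured ingestion endpoint; the credential comes from the
# environment rather than being hard-coded.
API_URL = "https://api.example.gov/v1/records"
headers = {"Authorization": f"Bearer {os.environ['API_TOKEN']}"}

resp = requests.get(API_URL, headers=headers, timeout=30)
resp.raise_for_status()  # surface HTTP failures instead of ingesting an error page

# Land the payload unchanged for downstream transformation (assumed JSON shape).
df = pd.DataFrame(resp.json()["records"])
df.to_parquet("landing/records.parquet", index=False)
```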
What You Can Expect
  • Meaningful work and unique opportunities to support missions vital to national and global security.
  • Top‑notch, dedicated colleagues.
  • Generous pay and benefits with a stable organization.
  • Work‑life balance fostered through flexible work options and wellness initiatives.
Minimum Job Requirements
  • Bachelor’s degree in engineering/science discipline:
    Minimum 2 years of relevant experience. Typical engineering/science experience ranges from 3 to 7 years.
  • OR Applicants without a bachelor's degree may be considered based on a combination of at least 10 years of completed education and/or relevant experience.
Preferred Job Requirements
  • Master’s degree (MS) in a relevant technical field with minimum 3 years of relevant experience.
  • Minimum of 4 years of relevant experience, with at least 3 years specifically focused on building and managing enterprise‑grade data pipelines and ELT/ETL processes.
  • Expert proficiency in Structured Query Language (SQL) and Python or Scala for data manipulation, transformation, and automation.
  • Demonstrated experience with distributed processing frameworks (e.g., Apache Spark, Databricks, Snowflake) and cloud‑based data services (e.g., Azure Data…