Data Engineer
Listed on 2026-02-08
-
IT/Tech
Data Engineer, Cloud Computing, Big Data
Creative Information Technology Inc. (CITI) is an esteemed IT enterprise renowned for its exceptional customer service and innovation. We serve both government and commercial sectors, offering a range of solutions such as Healthcare IT, Human Services, Identity Credentialing, Cloud Computing, and Big Data Analytics. With clients in the US and abroad, we hold key contract vehicles, including GSA IT Schedule 70, NIH CIO-SP3, GSA Alliant, and DHS Eagle II.
Join us in driving growth and seizing new business opportunities!
Overview
Position Description:
Background:
The Maryland Department of Health (MDH) is seeking a hands-on Data Engineer to design, develop, and optimize large-scale data pipelines in support of our Enterprise Data Warehouse (EDW) and Data Lake solutions. This role requires deep technical expertise in coding, pipeline orchestration, and cloud-native data engineering on AWS. The Data Engineer will be directly responsible for implementing ingestion, transformation, and integration workflows, ensuring data is high-quality, compliant, and analytics-ready.
This role may support other projects or teams within MDH as needed.
The Data Engineer is responsible for designing, building, and maintaining data pipelines and infrastructure to support data-driven decisions and analytics, including the following tasks:
A. Design, develop, and maintain data pipelines and extract, transform, load (ETL) processes to collect, process, and store structured and unstructured data (see the sketch after this list)
B. Build data architecture and storage solutions, including data lakehouses, data lakes, data warehouses, and data marts, to support analytics and reporting
C. Develop data reliability, efficiency, and quality checks and processes
E. Monitor and optimize data architecture and data processing systems
F. Collaborate with multiple teams to understand requirements and objectives
G. Administer testing and troubleshooting related to performance, reliability, and scalability
H. Create and update documentation
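To make task A concrete, here is a minimal PySpark sketch of an ETL job that reads a structured extract and a semi-structured reference feed, applies light cleanup, and writes analytics-ready Parquet to a data lake path. The bucket paths and column names are hypothetical placeholders, not actual MDH resources.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("edw-ingest-sketch").getOrCreate()

# Extract: a structured claims extract (CSV) plus a semi-structured provider feed (JSON).
claims = spark.read.option("header", True).csv("s3://example-raw/claims/")
providers = spark.read.json("s3://example-raw/providers/")

# Transform: normalize types, drop rows missing the key, join reference data.
cleaned = (
    claims
    .withColumn("claim_amount", F.col("claim_amount").cast("double"))
    .filter(F.col("claim_id").isNotNull())
    .join(providers, on="provider_id", how="left")
)

# Load: write partitioned, analytics-ready Parquet to the curated zone.
(
    cleaned.write
    .mode("overwrite")
    .partitionBy("service_year")
    .parquet("s3://example-curated/claims/")
)
```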
Duties / Responsibilities
- Design, code, and deploy ETL/ELT pipelines across the bronze, silver, and gold layers of the Data Lakehouse.
- Build ingestion pipelines for structured (SQL), semi-structured (JSON, XML), and unstructured data in PySpark/Python on AWS Glue or EMR.
- Implement incremental loads, deduplication, error handling, and data validation (see the first sketch after this list).
- Actively troubleshoot, debug, and optimize pipelines for scalability and cost efficiency.
- Develop dimensional data models (Star Schema, Snowflake Schema) for analytics and reporting.
- Build and maintain tables in Iceberg, Delta Lake, or equivalent open table formats (see the second sketch after this list).
- Optimize partitioning, indexing, and metadata for fast query performance.
- Build ingestion and transformation pipelines for EDI X12 transactions (837, 835, 278, etc.).
- Implement mapping and transformation of EDI data with FHIR and HL7 frameworks.
- Work hands-on with AWS HealthLake (or equivalent) to store and query healthcare data.
- Develop automated validation scripts to enforce data quality and integrity.
- Implement IAM roles, encryption, and auditing to meet HIPAA and CMS compliance standards.
- Maintain lineage and governance documentation for all pipelines.
- Work closely with the Lead Data Engineer, analysts, and data scientists to deliver pipelines that support enterprise-wide analytics.
- Actively contribute to CI/CD pipelines, Infrastructure-as-Code (IaC), and automation.
- Continuously improve pipelines and adopt new technologies where appropriate.
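The incremental-load, deduplication, and validation responsibilities above can be illustrated with a short PySpark sketch. This is a minimal example assuming a bronze-to-silver Parquet layout on S3 with an `updated_at` watermark column and an `encounter_id` business key; all paths, table names, and columns are hypothetical placeholders rather than MDH specifics.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("incremental-load-sketch").getOrCreate()

bronze_path = "s3://example-bronze/encounters/"
silver_path = "s3://example-silver/encounters/"

# High-water mark from the silver layer; None on the very first run.
try:
    last_loaded = spark.read.parquet(silver_path).agg(F.max("updated_at")).first()[0]
except Exception:
    last_loaded = None

source = spark.read.parquet(bronze_path)
incremental = source if last_loaded is None else source.filter(F.col("updated_at") > F.lit(last_loaded))

# Deduplicate on the business key, keeping only the latest version of each record.
w = Window.partitionBy("encounter_id").orderBy(F.col("updated_at").desc())
deduped = incremental.withColumn("rn", F.row_number().over(w)).filter("rn = 1").drop("rn")

# Simple validation gate: fail fast if required identifiers are missing.
bad_rows = deduped.filter(F.col("encounter_id").isNull() | F.col("member_id").isNull()).count()
if bad_rows > 0:
    raise ValueError(f"Validation failed: {bad_rows} rows are missing required keys")

# Append only the new, clean records to the silver layer.
deduped.write.mode("append").partitionBy("service_year").parquet(silver_path)
```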
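The open table format and partitioning responsibilities can be sketched in a similar way. The example below assumes a Spark session already configured with an Apache Iceberg catalog and the Iceberg SQL extensions (here the catalog is named `glue_catalog`, e.g., backed by the AWS Glue Data Catalog); the database, table, columns, and staging data are hypothetical placeholders.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("iceberg-table-sketch").getOrCreate()

# Create a gold-layer fact table partitioned by month of service date for pruning.
spark.sql("""
    CREATE TABLE IF NOT EXISTS glue_catalog.analytics.fact_claims (
        claim_id      STRING,
        member_id     STRING,
        claim_amount  DOUBLE,
        service_date  DATE
    )
    USING iceberg
    PARTITIONED BY (months(service_date))
""")

# Register incoming curated data as a staging view for the upsert.
spark.read.parquet("s3://example-silver/claims/").createOrReplaceTempView("staged_claims")

# Idempotent upsert from the staging view using MERGE INTO.
spark.sql("""
    MERGE INTO glue_catalog.analytics.fact_claims t
    USING staged_claims s
    ON t.claim_id = s.claim_id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")

# Periodic maintenance: compact small files to keep query performance stable.
spark.sql("CALL glue_catalog.system.rewrite_data_files(table => 'analytics.fact_claims')")
```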
Specialized experience: The candidate should have experience as a data engineer or in a similar role, with a strong understanding of data architecture and ETL processes. The candidate should be proficient in programming languages for data processing and knowledgeable about distributed computing and parallel processing.
- 3+ years of hands-on experience building, deploying, and maintaining data pipelines on AWS or equivalent cloud platforms.
- Strong coding skills in Python and SQL (Scala or Java a plus).
- Proven experience with Apache Spark (PySpark) for large-scale processing.
- Hands-on experience with AWS Glue, S3,…