Data Pipeline Engineer / Irving / Charlotte / Hybrid
Job in Irving, Dallas County, Texas, 75084, USA
Listing for: Motion Recruitment
Full Time position, listed on 2026-02-16
Job specializations:
- IT/Tech: Data Engineer, Cloud Computing
Job Description
Overview
Outstanding long-term contract opportunity! A well-known Financial Services Company is looking for a Data Pipeline Engineer in Irving, TX or Charlotte, NC (Hybrid).
We are seeking an experienced Data Pipeline Engineer to design, architect, and maintain scalable data pipelines that support reporting and downstream applications. The ideal candidate will bring strong expertise in cloud-based and open-source data technologies, modern data lake architectures, and data engineering best practices.
Contract Duration: 18 Months
Required Skills & Experience
- Expertise in SQL for data analysis, transformation, and performance tuning.
- Strong hands-on experience with Python and Spark for large-scale data processing (see the sketch after this list).
- Experience with data pipelining and orchestration tools (e.g., Airflow, Cloud Composer, or equivalent).
- Solid understanding of databases, data warehousing, and data lake architectures.
- Proven experience designing and architecting data pipelines for analytics and reporting.
- Experience working with legacy ETL tools such as SSIS, Ab Initio, SAS, or equivalent.
- Strong analytical, critical thinking, and problem-solving skills.
- Ability to adapt quickly to evolving technologies and business requirements.
- Expertise in cloud platforms, with Google Cloud Platform (GCP) strongly preferred.
- Experience building cloud-native data lake solutions using open-source technologies.
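For candidates gauging fit, the kind of Python/Spark transformation work the listing describes might look like the minimal sketch below. It is illustrative only; the bucket paths, table, and column names are hypothetical and are not taken from the listing.

```python
# Minimal PySpark sketch of a batch transformation step for reporting.
# All paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_txn_rollup").getOrCreate()

# Read raw records from a (hypothetical) data lake location.
txns = spark.read.parquet("gs://example-bucket/raw/transactions/")

# Transformation logic: drop bad rows, derive a date column,
# and aggregate per account per day for downstream reporting.
daily = (
    txns
    .where(F.col("amount").isNotNull())
    .withColumn("txn_date", F.to_date("txn_ts"))
    .groupBy("account_id", "txn_date")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.count("*").alias("txn_count"),
    )
)

# Write results partitioned by date so reporting queries can prune partitions.
daily.write.mode("overwrite").partitionBy("txn_date").parquet(
    "gs://example-bucket/curated/daily_txn_rollup/"
)
```

Partitioning the output by date is one common design choice for the reporting workloads mentioned above, since it lets downstream SQL queries scan only the days they need.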
Responsibilities
- Design, architect, and implement scalable data pipelines for reporting and downstream applications using open-source tools and cloud platforms.
- Build and support cloud-based data lake architectures for both operational and analytical data stores.
- Apply strong database, SQL, and reporting concepts to design efficient, high-performance data solutions.
- Develop data processing and transformation logic using Python and Spark.
- Work with and interpret legacy ETL code and workflows from tools such as SSIS, Ab Initio, SAS, and similar technologies to support modernization and migration initiatives.
- Utilize modern data pipelining and orchestration tools to automate, monitor, and optimize data workflows (see the orchestration sketch after this list).
- Troubleshoot data pipeline issues, ensure data quality, and optimize performance and reliability.
- Demonstrate critical thinking, adaptability to change, and strong problem-solving skills.
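As a rough illustration of the orchestration work described above, here is a minimal Airflow 2.x DAG sketch. The DAG id, schedule, and task callables are hypothetical stubs, not details from the listing; a real pipeline would invoke Spark jobs rather than print statements.

```python
# Minimal Airflow 2.x DAG sketch; dag_id, schedule, and tasks are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    """Pull the day's raw data into the landing zone (stub)."""
    print("extracting...")


def transform():
    """Run the Spark transformation job (stub)."""
    print("transforming...")


def validate():
    """Check row counts and nulls before publishing (stub)."""
    print("validating...")


with DAG(
    dag_id="daily_txn_rollup",
    start_date=datetime(2026, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_validate = PythonOperator(task_id="validate", python_callable=validate)

    # Linear dependency chain: extract -> transform -> validate.
    t_extract >> t_transform >> t_validate
```

The explicit dependency chain is what the listing's "automate, monitor, and optimize" duties refer to in practice: the scheduler retries failed tasks, surfaces run history, and prevents downstream steps from running on incomplete data.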
Rachel Le Clair