Data Pipeline Engineer
Listed on 2026-02-16
IT/Tech
Data Engineer, Cloud Computing
Outstanding long-term contract opportunity! A well-known Financial Services Company is looking for a Data Pipeline Engineer in Irving, TX, or Charlotte, NC.
Work with the brightest minds at one of the largest financial institutions in the world. This is a long-term contract opportunity that includes a competitive benefits package! Our client has been around for over 150 years and is continuously innovating in today's digital age. If you want to work for a company that is not only a household name but one that truly cares about satisfying customers' financial needs and helping people succeed financially, apply today.
Contract Duration: starting at 6 months, with possible extensions or full-time conversion
We are seeking an experienced Data Pipeline Engineer to design, architect, and maintain scalable data pipelines that support reporting and downstream applications. The ideal candidate will bring strong expertise in cloud-based and open-source data technologies, modern data lake architectures, and data engineering best practices.

Required Qualifications:
- Expertise in SQL for data analysis, transformation, and performance tuning.
- Strong hands-on experience with Python and Spark for large-scale data processing (see the brief sketch after this list).
- Experience with data pipelining and orchestration tools (e.g., Airflow, Cloud Composer, or equivalent).
- Solid understanding of databases, data warehousing, and data lake architectures.
- Proven experience designing and architecting data pipelines for analytics and reporting.
- Experience working with legacy ETL tools such as SSIS, Ab Initio, SAS, or equivalent.
- Strong analytical, critical thinking, and problem-solving skills.
- Ability to adapt quickly to evolving technologies and business requirements.
- Expertise in cloud platforms, with Google Cloud Platform (GCP) strongly preferred.
- Experience building cloud-native data lake solutions using open-source technologies.
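For orientation only, here is a minimal sketch of the kind of Python/Spark transformation work named above. It is not from the posting: the bucket paths, table layout, and column names are hypothetical assumptions chosen to illustrate a typical read-transform-write reporting job.

```python
# Hypothetical PySpark sketch: read raw records from a data lake,
# apply a SQL-style aggregation, and write a partitioned output
# for downstream reporting. All paths and columns are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-txn-rollup").getOrCreate()

# Read raw transactions from an assumed data lake location.
txns = spark.read.parquet("gs://example-bucket/raw/transactions/")

# Roll up to a daily, per-account summary for reporting.
daily = (
    txns
    .withColumn("txn_date", F.to_date("txn_ts"))
    .groupBy("account_id", "txn_date")
    .agg(
        F.count("*").alias("txn_count"),
        F.sum("amount").alias("total_amount"),
    )
)

# Partition by date so reporting queries can prune efficiently.
daily.write.mode("overwrite").partitionBy("txn_date").parquet(
    "gs://example-bucket/curated/daily_txn_rollup/"
)
```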
Responsibilities:
- Design, architect, and implement scalable data pipelines for reporting and downstream applications using open-source tools and cloud platforms.
- Build and support cloud-based data lake architectures for both operational and analytical data stores.
- Apply strong database, SQL, and reporting concepts to design efficient, high-performance data solutions.
- Develop data processing and transformation logic using Python and Spark.
- Work with and interpret legacy ETL code and workflows from tools such as SSIS, Ab Initio, SAS, and similar technologies to support modernization and migration initiatives.
- Utilize modern data pipelining and orchestration tools to automate, monitor, and optimize data workflows (see the DAG sketch after this list).
- Troubleshoot data pipeline issues, ensure data quality, and optimize performance and reliability.
- Demonstrate critical thinking, adaptability to change, and strong problem-solving skills.
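As a purely illustrative example of the orchestration work described above, a minimal Airflow pipeline might look like the sketch below. It assumes Airflow 2.4+ (for the `schedule` argument); the DAG id, schedule, and task callables are hypothetical, not details from the posting.

```python
# Hypothetical Airflow sketch of a daily two-step pipeline.
# DAG id, schedule, and callables are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    """Placeholder: pull raw data from a source system."""
    print("extracting...")


def transform():
    """Placeholder: run Spark/SQL transformations on the raw data."""
    print("transforming...")


with DAG(
    dag_id="daily_reporting_pipeline",
    start_date=datetime(2026, 1, 1),
    schedule="@daily",  # `schedule` requires Airflow 2.4+
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    # Run the transform only after the extract completes.
    extract_task >> transform_task
```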