Data Engineer – Python/PySpark
Job in Dallas, Dallas County, Texas, 75215, USA
Listed on 2025-10-25
Listing for: Anblicks
Full Time position
Job specializations:
- IT/Tech: Data Engineer, Big Data, Data Science Manager, Data Analyst
Job Description
Overview
Data Engineer – SQL, Python, and PySpark Expert (Onsite – Dallas, TX). We are seeking a Data Engineer with strong proficiency in SQL, Python, and PySpark to support high-performance data pipelines and analytics initiatives. The role focuses on scalable data processing, transformation, and integration work that enables business insights, regulatory compliance, and operational efficiency.
Responsibilities
- Design, develop, and optimize ETL/ELT pipelines using SQL, Python, and PySpark for large-scale data environments (a brief illustrative sketch follows this list)
- Implement scalable data processing workflows in distributed data platforms (e.g., Hadoop, Databricks, or Spark environments)
- Partner with business stakeholders to understand and model mortgage lifecycle data (origination, underwriting, servicing, foreclosure, etc.)
- Create and maintain data marts, views, and reusable data components to support downstream reporting and analytics
- Ensure data quality, consistency, security, and lineage across all stages of data processing
- Assist in data migration and modernization efforts to cloud-based data warehouses (e.g., Snowflake, Azure Synapse, GCP BigQuery)
- Document data flows, logic, and transformation rules
- Troubleshoot performance and quality issues in batch and real-time pipelines
- Support compliance-related reporting (e.g., HMDA, CFPB)
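
To illustrate the kind of pipeline work described in the list above, here is a minimal PySpark ETL sketch. It is not part of the employer's posting; the paths, column names, and the 30-day delinquency rule are hypothetical assumptions for the example.

# Minimal PySpark ETL sketch (illustrative only; paths, columns, and the
# delinquency rule are hypothetical, not the employer's specification).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("loan_servicing_etl").getOrCreate()

# Extract: read raw loan-servicing records from a (hypothetical) landing zone.
raw = spark.read.parquet("s3://raw-zone/loan_servicing/")

# Transform: normalize types, derive a delinquency flag, remove bad rows.
clean = (
    raw.withColumn("payment_date", F.to_date("payment_date", "yyyy-MM-dd"))
       .withColumn("is_delinquent", F.col("days_past_due") > 30)
       .dropna(subset=["loan_id", "payment_date"])
       .dropDuplicates(["loan_id", "payment_date"])
)

# Load: write partitioned output for downstream marts and reporting.
clean.write.mode("overwrite").partitionBy("payment_date").parquet(
    "s3://curated-zone/loan_servicing/"
)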
Requirements
- 6+ years of experience in data engineering or data development
- Advanced expertise in SQL (joins, CTEs, optimization, partitioning, etc.; see the Spark SQL sketch after this list)
- Strong hands-on skills in Python for scripting, data wrangling, and automation
- Proficient in PySpark for building distributed data pipelines and processing large volumes of structured/unstructured data
- Experience working with mortgage banking data sets (highly preferred)
- Strong understanding of data modeling (dimensional, normalized, star schema)
- Experience with cloud-based platforms (Azure Databricks, AWS EMR, GCP Dataproc)
- Familiarity with ETL tools and orchestration frameworks (Airflow, ADF, dbt)
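
As a further illustration of the SQL depth (CTEs, aggregation, star-schema joins) called out in the requirements above, here is a short Spark SQL sketch. The table and column names (fact_loan_payments, dim_loan) are invented for the example and do not come from the posting.

# Illustrative Spark SQL sketch: a CTE rollup joined to a dimension table
# in star-schema style. All table and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("reporting_mart").getOrCreate()

monthly = spark.sql("""
    WITH payments AS (                        -- CTE: one row per loan per month
        SELECT loan_id,
               date_trunc('month', payment_date) AS pay_month,
               SUM(amount_paid)               AS total_paid
        FROM fact_loan_payments
        GROUP BY loan_id, date_trunc('month', payment_date)
    )
    SELECT d.servicer_name,                   -- join out to a conformed dimension
           p.pay_month,
           SUM(p.total_paid) AS servicer_total
    FROM payments p
    JOIN dim_loan d ON d.loan_id = p.loan_id
    GROUP BY d.servicer_name, p.pay_month
""")

# Persist as a partitioned mart table for downstream reporting.
monthly.write.mode("overwrite").partitionBy("pay_month").saveAsTable("mart_servicer_monthly")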