
IT Data Engineer

Job in Midland, Midland County, Michigan, 48640, USA
Listing for: Dow
Full Time position
Listed on 2025-12-26
Job specializations:
  • IT/Tech
    Data Engineer
Salary/Wage Range or Industry Benchmark: 60,000 - 80,000 USD per year
Job Description & How to Apply Below

At Dow, we believe in putting people first and we’re passionate about delivering integrity, respect and safety to our customers, our employees and the planet.

Our people are at the heart of our solutions. They reflect the communities we live in and the world where we do business. Their diversity is our strength. We’re a community of relentless problem solvers that offers the daily opportunity to contribute with your perspective, transform industries and shape the future. Our purpose is simple - to deliver a sustainable future for the world through science and collaboration.

If you’re looking for a challenge and meaningful role, you’re in the right place. Dow (NYSE: DOW) is one of the world’s leading materials science companies, serving customers in high-growth markets such as packaging, infrastructure, mobility and consumer applications. Our global breadth, asset integration and scale, focused innovation, leading business positions and commitment to sustainability enable us to achieve profitable growth and help deliver a sustainable future.

We operate manufacturing sites in 30 countries and employ approximately 36,000 people. Dow delivered sales of approximately $43 billion in 2024. References to Dow or the Company mean Dow Inc. and its subsidiaries. Learn more about us and our ambition to be the most innovative, customer-centric, inclusive and sustainable materials science company in the world.

About you and this role

Dow has an exciting opportunity for a Data Engineer located in Midland, MI or Houston, TX. This role will make significant technical contributions to critical data initiatives within our team and will be responsible for driving the technical implementation and contributing to the design of scalable, Gold-layer data products on the Azure Databricks Lakehouse Platform.

This role focuses on solving complex technical challenges, optimization, architecture contribution, and reliability, ensuring our datasets are performant and ready to power advanced use cases, including:

  • Machine Learning (ML) Pipelines
  • Real-Time Data Consumption
  • Generative and Agentic AI Systems
  • Core Enterprise Reporting and BI
  • Data-driven Applications
Responsibilities
  • Technical Design Contribution:
    Collaborate with senior data engineers to translate complex business requirements and ambiguous problem statements into clear, robust, and scalable technical designs and data models (e.g., dimensional modeling, star schemas), and independently drive the implementation of these designs.
  • Performance Optimization:
    Design, build, and deploy high-volume data transformation logic using highly optimized PySpark. You will apply advanced techniques to tune Spark jobs and diagnose performance bottlenecks to ensure maximum efficiency and minimal cloud compute cost.
  • Architecture & Deployment:
    Contribute significantly to the design and improvement of CI/CD pipelines in Azure Dev Ops/Git, ensuring reliable, automated, and secure deployment of data solutions across environments.
  • Diverse Data Integration:
    Deeply understand and connect to various source systems, demonstrating proficiency in managing data persistence and query performance across diverse technologies like SQL Server, Neo4j, and Cosmos DB.
  • Quality & Governance:
    Proactively implement and maintain advanced data quality frameworks (e.g., Delta Live Tables, Great Expectations) and monitoring solutions to ensure data reliability for mission-critical applications.
  • Collaboration & Mentorship:
    Serve as a go‑to technical resource for peers, conducting technical code reviews and informally mentoring Associate Data Engineers on PySpark and Databricks best practices.
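The dimensional-modeling work described above can be illustrated with a minimal sketch: splitting flat source records into a deduplicated dimension table (with surrogate keys) and a fact table that references it, the core step in building a star schema. This is a plain-Python stand-in for brevity; in this role the same logic would be expressed in PySpark over Delta tables, and all field and table names here are hypothetical.

```python
def build_star_schema(flat_rows):
    """Split flat sales records into a customer dimension and a fact table.

    Returns (dim_rows, fact_rows), where each dimension row carries a
    surrogate key and each fact row references a customer by that key.
    """
    surrogate_by_natural_key = {}  # customer_id -> surrogate key
    dim_rows, fact_rows = [], []
    for row in flat_rows:
        nk = row["customer_id"]
        if nk not in surrogate_by_natural_key:
            sk = len(surrogate_by_natural_key) + 1  # assign next surrogate key
            surrogate_by_natural_key[nk] = sk
            dim_rows.append({
                "customer_sk": sk,
                "customer_id": nk,
                "customer_name": row["customer_name"],
            })
        fact_rows.append({
            "customer_sk": surrogate_by_natural_key[nk],
            "order_id": row["order_id"],
            "amount": row["amount"],
        })
    return dim_rows, fact_rows


# Hypothetical flat input, as it might arrive from a source system.
flat = [
    {"customer_id": "C1", "customer_name": "Acme", "order_id": 1, "amount": 100.0},
    {"customer_id": "C1", "customer_name": "Acme", "order_id": 2, "amount": 50.0},
    {"customer_id": "C2", "customer_name": "Brite", "order_id": 3, "amount": 75.0},
]
dim, fact = build_star_schema(flat)
```

Separating slowly changing descriptive attributes into the dimension while keeping the fact table narrow is what makes the resulting Gold-layer tables efficient to join and aggregate at scale.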

A successful candidate will possess the experience and technical depth required to independently implement and optimize complex data solutions:

  • Core Technical Expertise (2-5 Years Demonstrated Experience)
  • PySpark and Distributed Processing:
    Proven ability to write highly optimized, production-grade PySpark/Spark code. Experience identifying and resolving performance bottlenecks in a distributed computing environment.
  • Advanced Data Modeling:
    Practical experience designing and implementing analytical data models (e.g., dimensional…