Data Engineer
Job in Dearborn, Wayne County, Michigan, 48120, USA
Listed on 2026-02-07
Listing for: Brooksource
Full Time position
Job specializations:
- IT/Tech: Data Engineer, Big Data
Job Description & How to Apply Below
We are seeking a Data Engineer to support large‑scale data initiatives, build modern data pipelines, and help transform legacy data systems into scalable, cloud‑based platforms. The ideal candidate has strong experience with ETL/ELT development, cloud technologies, big‑data processing, and enterprise data models. This role partners closely with architects, product teams, and business stakeholders to deliver high‑quality, governed, and reliable data solutions.
Responsibilities:
- Design, build, and maintain scalable data pipelines to ingest, transform, and deliver data across multiple sources and environments.
- Migrate data from legacy/on‑prem systems to modern cloud data platforms.
- Develop ETL/ELT workflows using tools such as Databricks, Spark, Glue, Airflow, Dataflow, or similar technologies.
- Build and optimize data models to support analytics, reporting, and application use cases.
- Work with structured, semi‑structured, and unstructured data (CSV, JSON, Parquet, APIs, streaming data).
- Collaborate with data architects and engineers to implement best practices in data architecture, quality, governance, and security.
- Troubleshoot and optimize data pipelines for performance, reliability, and cost.
- Implement data quality checks, monitoring, and alerting to ensure trust and consistency across environments.
- Support CI/CD pipelines for data engineering workflows and automate deployment processes.
- Participate in Agile ceremonies and work closely with product owners, analysts, and business partners.
Qualifications:
- 3+ years of professional experience in data engineering, or in software engineering with a strong data focus.
- Hands‑on experience with ETL/ELT pipelines, big‑data processing frameworks, and data modeling.
- Strong proficiency in SQL and one programming language (Python, Java, or Scala).
- Experience working with at least one major cloud platform (AWS, Azure, or GCP).
- Familiarity with data warehousing concepts, distributed systems, and pipeline orchestration tools.
- Experience with version control tools (Git) and CI/CD pipelines.
- Strong understanding of data quality, lineage, metadata, and governance.
- Ability to troubleshoot complex data issues and work in a fast-paced, collaborative environment.
- Experience with Databricks, Spark, Snowflake, BigQuery, Redshift, or Synapse.
- Background in large-scale data migrations or modernizing legacy systems.
- Experience with streaming technologies (Kafka, Pub/Sub, Kinesis, Event Hub).
- Exposure to MDM, data cataloging, and enterprise governance frameworks.
- Experience in highly regulated industries (automotive, finance, healthcare, etc.).
- Familiarity with containerization tools (Docker, Kubernetes).