Data Engineer
Listed on 2026-01-01
IT/Tech
Data Engineer, Cloud Computing
Overview
Data Engineer
On site 4 days a week in Dearborn, MI
W2 Only - no sponsorship offered for this position
Key skills: Python, Cloud (preferably GCP), SQL, Apache Spark, Kafka, Data Mapping & Cataloging, CI/CD
Dexian is seeking a Data Engineer to design, build, and maintain scalable data infrastructure and pipelines in support of advanced product development and AI-driven initiatives. This is a hands-on, operations-focused backend role. The engineer will "hold the fort" operationally — ensuring pipelines are healthy, data is mapped and cataloged, and the right data is in the right place. You will work closely with another data engineer who is currently writing APIs, mapping elements, and building advanced Python scripts to normalize and pipeline data.
This role ensures the ongoing health and scalability of pipelines, while also enabling Dexian to industrialize AI applications and scale innovative models into enterprise-ready solutions.
Responsibilities
- Collaborate with business, technology, and AI/ML teams to define data requirements and delivery standards
- Partner with another data engineer to manage backend operations, pipeline stability, and scaling
- Write advanced Python and SQL scripts for normalization, transformation, and cataloging of data
- Design, build, and maintain cloud-native ETL pipelines (Kafka, Spark, Beam, GCP Dataflow, Pub/Sub, Eventarc)
- Architect and implement data warehouses and unified data models to integrate siloed data sources
- Perform data mapping and cataloging to ensure accuracy, traceability, and consistency
- Automate pipeline orchestration, event-driven triggers, and infrastructure provisioning
- Troubleshoot and optimize data workflows for performance and scalability
- Support CI/CD processes with Git/GitHub and Cloud Build
- Work cross-functionally to integrate new applications into existing data models and scale them effectively
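To illustrate the kind of normalization and transformation scripting this role involves, here is a minimal, stdlib-only Python sketch. The field names, sources, and mapping rules are invented for illustration only; they are not taken from any actual Dexian or client codebase.

```python
from datetime import datetime, timezone

def normalize_record(raw: dict) -> dict:
    """Normalize one raw event into a unified model (illustrative rules only)."""
    return {
        # Trim and lowercase identifiers so joins across source systems are stable
        "vehicle_id": raw["VehicleID"].strip().lower(),
        # Coerce the epoch timestamp to UTC ISO-8601 for consistent partitioning
        "event_ts": datetime.fromtimestamp(
            int(raw["ts_epoch"]), tz=timezone.utc
        ).isoformat(),
        # Map free-text status values onto a small controlled vocabulary
        "status": {"OK": "healthy", "WARN": "degraded"}.get(
            raw.get("status", ""), "unknown"
        ),
    }

raw = {"VehicleID": "  ABC123 ", "ts_epoch": "1700000000", "status": "OK"}
print(normalize_record(raw))
```

In a real pipeline, a function like this would typically run inside a Beam/Dataflow transform or a Spark job rather than standalone.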
Required Skills
- Advanced Python (scripting, backend operations, transformations)
- Advanced SQL (complex queries, backend optimization)
- Apache Spark / Kafka (large-scale ETL/data processing)
- Cloud experience (GCP preferred, but AWS or Azure equally acceptable if adaptable)
- Data mapping and cataloging (governance, traceability, accuracy)
- Event-driven pipeline design (e.g., Eventarc, Pub/Sub, AWS equivalents)
- Data warehouse design & cloud-native storage (BigQuery, Snowflake, Redshift)
- CI/CD pipeline tools (GitHub, Cloud Build, or equivalents)
- Familiarity with data governance and orchestration practices
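The data mapping and cataloging skill above can be made concrete with a small, stdlib-only sketch: each catalog entry records where a target field came from and how it was transformed, so lineage stays traceable. All names here are hypothetical examples, not an actual catalog schema.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class CatalogEntry:
    """One cataloged field: its source and how it was mapped (illustrative)."""
    target_field: str
    source_system: str
    source_column: str
    transform: str = "identity"
    tags: list = field(default_factory=list)

def build_catalog(entries):
    # Index by target field so consumers can trace any column back to its source
    return {e.target_field: asdict(e) for e in entries}

catalog = build_catalog([
    CatalogEntry("vehicle_id", "telematics", "VehicleID", transform="strip+lower"),
    CatalogEntry("event_ts", "telematics", "ts_epoch", transform="epoch->iso8601-utc"),
])
print(catalog["event_ts"]["transform"])
```

In practice this metadata would live in a governed catalog service (e.g., GCP Data Catalog/Dataplex) rather than in-process dictionaries; the sketch only shows the mapping-with-lineage idea.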
Nice to Have
- Java or PowerShell scripting
- Awareness of ML/AI concepts; curiosity and willingness to learn are valued
Qualifications
- Approximately 5 years of data engineering experience
- Bachelor's Degree in Computer Science, Data Engineering, or related field (Master's in Data Science is a plus)
- Industry background flexible - non-automotive engineers can succeed if able to adapt quickly
Soft Skills
- Ownership mentality: treats projects like their own, takes initiative without reminders
- Curious & proactive: stays current with new technologies, eager to learn
- Positive & collaborative: fits into a young, energetic team; keeps the atmosphere light but focused
- Problem solver: navigates cross-functional challenges and proposes actionable solutions
- Willing to put in extra effort (early/late work when needed) to help the team succeed
Dexian is an Equal Opportunity Employer that recruits and hires qualified candidates without regard to race, religion, sex, sexual orientation, gender identity, age, national origin, ancestry, citizenship, disability, or veteran status.