Data Engineer

Job in Auburn Hills, Oakland County, Michigan, 48326, USA
Listing for: Stellantis
Full Time position
Listed on 2026-02-08
Job specializations:
  • Software Development
    Data Engineer
Salary/Wage Range or Industry Benchmark: 60,000 - 80,000 USD per year
Job Description & How to Apply Below

Overview

The AI & Data Analytics Team is looking for a Senior Data Engineer to join our team. In this role, you will be responsible for designing, building, and optimizing robust data pipelines that process massive datasets in both batch and real-time. You will work at the intersection of software engineering and data science, ensuring that our data architecture is scalable, reliable, and follows industry best practices.

Responsibilities
  • Pipeline Development:
    Design and implement complex data processing pipelines using Apache Spark.
  • Architectural Leadership:
    Build scalable, distributed systems that handle high-throughput data streams and large-scale batch processing.
  • Infrastructure as Code:
    Manage and provision cloud infrastructure using Terraform.
  • CI/CD & Automation:
    Streamline development workflows by implementing and maintaining GitHub Actions for automated testing and deployment.
  • Code Quality:
    Uphold rigorous software engineering standards, including comprehensive unit/integration testing, code reviews, and maintainable documentation.
  • Collaboration:
    Work closely with stakeholders to translate business requirements into technical specifications.
Required Qualifications
  • BA/BSc in Computer Science, Engineering, Mathematics, or a related technical discipline
  • 5+ years of experience in the data engineering and software development life cycle.
  • 4+ years of hands-on experience building and maintaining production data applications, with current experience in both relational and columnar data stores
  • 4+ years of hands-on experience working with AWS cloud services
  • Comprehensive experience with one or more programming languages such as Python, Java, or Rust
  • Comprehensive experience working with Big Data platforms (e.g., Spark, Google BigQuery, Azure, AWS S3)
  • Familiarity with time series databases, data streaming applications, and event-driven architectures (e.g., Kafka, Flink)
  • Experience with workflow management engines (e.g., Airflow, Luigi, Azure Data Factory)
  • Experience with designing and implementing real-time pipelines
  • Experience with data quality and validation
  • Experience with API design
  • Distributed Computing:
    Deep expertise in Apache Spark (Core, SQL, and Structured Streaming).
  • Programming Mastery:
    Strong proficiency in Scala or Java. You should be comfortable building production-grade applications in a JVM-based environment.
  • SQL Proficiency:
    Advanced knowledge of SQL for data transformation, analysis, and performance tuning.
  • DevOps & Tools:
    Hands-on experience with Terraform for infrastructure management and GitHub Actions for CI/CD pipelines.
  • Software Engineering Foundation:
    Solid understanding of data structures, algorithms, and design patterns. Experience applying "Clean Code" principles to data engineering.
  • Stream Processing:
    Experience with Apache Flink for low-latency stream processing.
  • Scripting:
    Proficiency in Python for automation, data analysis, or scripting.
  • Cloud Platforms:
    Experience with AWS, Azure, or GCP data services (e.g., EMR, Glue, Databricks).
  • Data Modeling:
    Familiarity with dimensional modeling, Lakehouse architectures (Delta Lake, Iceberg), or NoSQL databases.
Preferred Experience
  • Comprehensive knowledge of relational database concepts, including data architecture, operational data stores, interface processes, multidimensional modeling, master data management, and data manipulation
  • Expert knowledge of and experience with custom ETL design, implementation, and maintenance
  • Comprehensive experience designing, implementing, and iterating data pipelines using Big Data technologies
  • Certification in AWS or other cloud providers
  • Experience with Databricks notebook workflows
  • Experience with Terraform
Benefits

Salaried Employee Benefits (US, Non-Represented)

Health & Wellbeing:
Comprehensive coverage encompassing the physical, mental, emotional, and overall wellbeing of our employees, including short- and long-term disability.

Compensation, Savings, and Retirement:
Annual Incentive Plan (SAIP), 401k with Employer Match & Contribution (max 8%), SoFi Student Loan Refinancing.

Time Away from Work:
Paid time includes company holidays, vacation, and Float/Wellbeing Days.

Family Benefits: 12 Weeks paid Parental Leave, Domestic Partner Benefits, Family Building Benefit, Marketplace, Life/Disability and other Insurances.

Professional Growth:
Annual training, tuition reimbursement and discounts, Business Resource & Intra-professional Groups.

Company Car & More:
Comprehensive Company Car Program and Vehicle Discounts. Vehicle discounts extend to family and friends.
