
Azure Data Engineer

Job in Camden, Camden County, New Jersey, 08100, USA
Listing for: Subaru of America
Full Time position
Listed on 2026-01-04
Job specializations:
  • Engineering
    Data Engineer, Data Science Manager
Job Description

Company Background

Love. It’s what makes Subaru, Subaru®. As a leading auto brand in the US, we strive to be More Than a Car Company®. Subaru believes in being a positive force in the communities in which we live and work, not just with donations but with actions that set an example for others to follow. That’s what we call our Subaru Love Promise®.

Subaru is a globally renowned automobile manufacturer known for its commitment to innovation, safety, and sustainability. With a rich history dating back to 1953, Subaru has consistently pushed the boundaries of automotive engineering to deliver vehicles that offer not only exceptional performance but also a unique blend of utility and adventure. Subaru’s company culture is built on collaboration, diversity, and a shared passion for our product.

We foster an inclusive environment that encourages employees to bring their unique perspectives and talents to the table. Our team members are driven by a common goal: to create exceptional vehicles that inspire and delight our customers.

Role Summary

The Azure Data Analytics Engineer will serve as the Azure subject matter expert (SME) responsible for developing and optimizing cloud‑based Business Intelligence solutions. This role advances the company's data analytics capabilities and drives innovative solutions. The engineer brings deep technical expertise in data engineering and plays an instrumental role in managing data integrations from on‑premises Oracle systems, Cloud CRM (Dynamics), and telematics. Collaboration with Data Science, Enterprise Data Warehouse teams, and business stakeholders is essential.

Primary Responsibilities

Data Ingestion and Storage
  • Designs, develops, and maintains scalable, efficient data pipelines using Data Factory and Databricks, leveraging PySpark for complex data transformations and large‑scale processing.
  • Builds and manages extract, transform, and load (ETL) and extract, load, transform (ELT) processes that move data from on‑premises Oracle systems, customer relationship management (CRM) platforms, and connected vehicles into storage solutions such as Azure Data Lake Storage and Azure SQL Database (an illustrative sketch follows this list).
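
For illustration only, the sketch below shows the kind of Databricks PySpark job such a pipeline could involve: reading an on‑premises Oracle table over JDBC, applying a simple transformation, and writing a Delta table to Azure Data Lake Storage. The hostname, table name, path, and credentials are hypothetical placeholders, not details from this posting.

# Illustrative sketch only; connection details, table names, and paths are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("oracle_to_adls_ingest").getOrCreate()

# Read a source table from an on-premises Oracle system over JDBC.
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:oracle:thin:@//onprem-db-host:1521/ORCLPDB")  # hypothetical host
    .option("dbtable", "SALES.ORDERS")                                 # hypothetical table
    .option("user", "etl_user")
    .option("password", "<retrieve from a secret scope, not hard-coded>")
    .load()
)

# Simple transformation: normalize column names and stamp the load time.
orders_clean = (
    orders.toDF(*[c.lower() for c in orders.columns])
          .withColumn("load_ts", F.current_timestamp())
)

# Land the data in Azure Data Lake Storage Gen2 as Delta files (path is hypothetical).
(
    orders_clean.write.format("delta")
    .mode("append")
    .save("abfss://raw@examplelake.dfs.core.windows.net/oracle/sales/orders")
)
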
Data Engineering
  • Creates high‑code data engineering solutions using Databricks to clean, transform, and prepare data for in‑depth analysis.
  • Develops and manages data models, schemas, and data warehouses, utilizing Lakehouse Architecture to enhance advanced analytics and business intelligence.
  • Leverages Unity Catalog to ensure unified data governance and management across the enterprise’s data assets.
  • Optimizes data storage, retrieval strategies, and query performance to drive scalability and efficiency in all data operations.
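
As a hedged sketch of the Lakehouse and Unity Catalog responsibilities above, the snippet below registers a curated table under a three-level Unity Catalog namespace and optimizes its layout for common queries. The catalog, schema, table, and column names are illustrative assumptions, and it presumes a Databricks workspace with Unity Catalog enabled.

# Illustrative only: catalog, schema, table, and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # on Databricks, `spark` is already provided

# Unity Catalog uses a three-level namespace: catalog.schema.table.
spark.sql("CREATE SCHEMA IF NOT EXISTS analytics_catalog.curated")

# Build a curated table from a (hypothetical) raw ingested table.
spark.sql("""
    CREATE TABLE IF NOT EXISTS analytics_catalog.curated.orders_daily AS
    SELECT order_date,
           dealer_id,
           COUNT(*)         AS order_count,
           SUM(order_total) AS revenue
    FROM   analytics_catalog.raw.orders
    GROUP BY order_date, dealer_id
""")

# Compact files and cluster by a common filter column to speed up queries.
spark.sql("OPTIMIZE analytics_catalog.curated.orders_daily ZORDER BY (order_date)")
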
Data Integration
  • Integrates and harmonizes data from diverse sources including on‑premises databases, cloud services, APIs, and connected vehicle telematics.
  • Ensures consistent data quality, accuracy, and reliability across all integrated data sources.
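
One way to express such data quality checks is the hedged sketch below; the dataset path, column names, and rules are hypothetical examples, not actual Subaru checks.

# Illustrative data-quality check on an integrated dataset; path and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

vehicles = spark.read.format("delta").load(
    "abfss://curated@examplelake.dfs.core.windows.net/telematics/vehicles"
)

# Basic checks: required keys present and no duplicate vehicle identifiers.
null_vins = vehicles.filter(F.col("vin").isNull()).count()
dupe_vins = vehicles.groupBy("vin").count().filter(F.col("count") > 1).count()

if null_vins > 0 or dupe_vins > 0:
    raise ValueError(
        f"Data quality check failed: {null_vins} null VINs, {dupe_vins} duplicated VINs"
    )
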
GitHub Development
  • Utilizes GitHub for version control and collaborative development, implementing best practices for code management, testing, and deployment.
  • Develops workflows for continuous integration (CI) and continuous deployment (CD), ensuring efficient delivery and maintenance of data solutions.
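
GitHub CI/CD workflows themselves are YAML files under .github/workflows; rather than reproduce one here, the hedged Python sketch below shows the kind of pytest unit test such a workflow could run on every pull request to validate a transformation before deployment. The function under test is a toy example, not code from this role.

# Hypothetical unit test a GitHub Actions CI workflow could run on each pull request.
import pytest
from pyspark.sql import SparkSession


@pytest.fixture(scope="session")
def spark():
    return SparkSession.builder.master("local[1]").appName("ci-tests").getOrCreate()


def lowercase_columns(df):
    """Toy transformation under test: normalize column names to lowercase."""
    return df.toDF(*[c.lower() for c in df.columns])


def test_lowercase_columns(spark):
    df = spark.createDataFrame([(1, "Outback")], ["ORDER_ID", "MODEL"])
    assert lowercase_columns(df).columns == ["order_id", "model"]
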
Additional Responsibilities
  • Works closely with Data Science, Enterprise Data Warehouse, and Data Visualization teams, as well as business stakeholders, to understand data requirements and deliver innovative solutions.
  • Collaborates with cross‑functional teams to troubleshoot and resolve data infrastructure issues, identifying and addressing performance bottlenecks.
  • Provides technical leadership, mentorship, and guidance to junior data engineers, promoting a culture of continuous improvement and innovation.
Required Skills and Personal Qualifications
  • Technical Expertise: Extensive experience with Azure Data Factory, Databricks, and Azure Synapse, as well as proficiency in Python and PySpark.
  • Data Integration: Experience integrating data from on‑premises Oracle systems and connected vehicle data into cloud‑based solutions.
  • Lakehouse Architecture & Governance: Deep knowledge of Lakehouse Architecture and Unity Catalog for…