
Data Engineer II – Promo Analytics

Job in Menomonee Falls, Waukesha County, Wisconsin, 53051, USA
Listing for: Milwaukee Electric Tool Corporation
Full Time position
Listed on 2026-01-11
Job specializations:
  • IT/Tech
    Data Engineer, Data Science Manager
Job Description

Data Engineer II – Promo Analytics

Applicants must be authorized to work in the U.S.; sponsorship is not available for this position.

INNOVATE without boundaries!

At Milwaukee Tool we firmly believe that our People and our Culture are the secrets to our success, so we give you unlimited access to everything you need to support your business unit. Behind our doors you'll be empowered every day to own it, drive it, and do what it takes to support the biggest breakthroughs in the industry. Meanwhile, you'll have the support and resources of the fastest-growing brand in the construction industry to make it happen.

Your Role on Our Team:

As a Data Engineer II, you will play a critical role in enabling fast, accurate, and scalable data-driven decisions at Milwaukee Tool. You will help build and evolve the pipelines, models, and governance frameworks that power analytics for retail promotions and enterprise-wide initiatives. Partnering with business teams and Data Platform engineers, you will turn requirements into high-quality data products using Databricks and modern cloud technologies.

Your work ensures that teams have timely, reliable, and well‑structured data to support operational reporting, strategic insights, and advanced analytics.

You’ll be DISRUPTIVE through these duties and responsibilities:
  • Design and build scalable data pipelines to ingest, transform, and curate data from a variety of systems including APIs, databases, files, and event streams (see the sketch after this list).
  • Review functional requirements and design specs with Senior Data Engineers and business partners, and convert them into data transformations.
  • Implement and maintain data models such as dimensional models, star schemas, normalized models, and data vault approaches to support analytics and BI.
  • Work with Data Architects and Data Leads to optimize cloud‑based data platforms, ensuring performance, reliability, and cost‑efficient execution of data workloads.
  • Develop and enforce data quality checks, lineage, and monitoring to ensure accuracy, completeness, and trust in enterprise datasets.
  • Apply your expertise in the software development lifecycle, continuous improvement, and engineering best practices to help drive the team toward rapid success.
  • Automate and operationalize data pipelines using CI/CD, Infrastructure as Code, and modern orchestration tools.
  • Profile, tune, and optimize SQL, Python, and Spark workloads running in Databricks.
  • Author technical documentation, promote reusable components, and contribute to engineering standards and best practices.
  • Troubleshoot pipeline issues, participate in root‑cause analysis, and help maintain healthy, reliable data operations.
  • Perform other duties as assigned.
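Purely as an illustration of the kind of ingest-transform-curate work described above, here is a minimal PySpark and Delta Lake sketch as it might run in a Databricks notebook. All paths, table names, and columns (promo_sales, dim_product, sale_id, and so on) are hypothetical, not Milwaukee Tool systems.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()  # supplied by the Databricks runtime

    # Ingest: land raw promo feeds (hypothetical path) in a bronze Delta table.
    raw = spark.read.format("json").load("/mnt/landing/promo_sales/")
    raw.write.format("delta").mode("append").saveAsTable("bronze.promo_sales")

    # Transform: enforce basic quality checks, then curate a silver table.
    clean = (spark.table("bronze.promo_sales")
             .filter(F.col("sale_id").isNotNull())          # completeness check
             .dropDuplicates(["sale_id"])                   # uniqueness check
             .withColumn("sale_date", F.to_date("sale_ts")))
    clean.write.format("delta").mode("overwrite").saveAsTable("silver.promo_sales")

    # Model: a slim star-schema fact joined to a hypothetical product dimension.
    fact = (clean.join(spark.table("silver.dim_product"), "product_id")
            .select("sale_id", "product_key", "sale_date", "amount"))
    fact.write.format("delta").mode("overwrite").saveAsTable("gold.fct_promo_sales")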
The TOOLS you’ll bring with you:
  • Bachelor's degree in Computer Science or Information Systems, or equivalent experience.
  • 3‑5 years of experience in data engineering or a related technical field.
  • Strong proficiency in SQL and a programming language such as Python (preferred).
  • Experience building and orchestrating data workflows in Databricks, including Delta Lake, notebooks, jobs, and workflows.
  • Hands‑on experience with distributed data processing technologies such as Apache Spark.
  • Experience with cloud data ecosystems (Azure, AWS, or GCP), especially Azure Databricks.
  • Familiarity with cloud data warehouses such as Snowflake, Synapse, Redshift, or BigQuery.
  • Experience working with structured and semi‑structured data (Parquet, Avro, JSON, Delta).
  • Strong understanding of version control (Git) and modern CI/CD workflows (a minimal test sketch follows this list).
  • Strong problem solving, debugging, and analytical skills.
  • Ability to work effectively in agile, cross‑functional engineering teams.
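As a hedged sketch of how version control and CI/CD apply to data code (referenced in the list above), the pytest example below unit-tests a small PySpark transformation so it can run in any CI container with pyspark installed; the function and sample rows are hypothetical.

    import pytest
    from pyspark.sql import SparkSession, functions as F

    def dedupe_sales(df):
        # Hypothetical transformation: drop null ids, keep one row per sale_id.
        return df.filter(F.col("sale_id").isNotNull()).dropDuplicates(["sale_id"])

    @pytest.fixture(scope="session")
    def spark():
        # Local session so the test needs no cluster.
        return SparkSession.builder.master("local[1]").appName("unit-tests").getOrCreate()

    def test_dedupe_sales(spark):
        df = spark.createDataFrame(
            [("a", 10), ("a", 10), (None, 5)], ["sale_id", "amount"])
        result = dedupe_sales(df)
        assert result.count() == 1
        assert result.first()["sale_id"] == "a"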
Other TOOLS we prefer you to have:
  • Experience with Databricks Unity Catalog, Delta Live Tables, or Databricks Workflows.
  • DataOps experience (pipeline observability, monitoring, automated quality).
  • Knowledge of metadata management or cataloging platforms (Purview, Collibra, Alation).
  • Experience with ML pipelines and feature engineering in Databricks.
  • Familiarity with streaming frameworks (Kafka, Event Hubs, Kinesis) used with Spark Structured Streaming (see the streaming sketch below).
  • Knowledge and experience working in an Agile environment.
  • Experience working with retail…
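For the streaming item above, here is a minimal Spark Structured Streaming sketch that reads a Kafka topic into a Delta table. The broker address, topic, and checkpoint path are placeholders, and the cluster would need the Spark-Kafka connector available.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()  # supplied by the Databricks runtime

    # Read a hypothetical promo-events topic as a streaming DataFrame.
    events = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
              .option("subscribe", "promo-events")               # placeholder topic
              .load())

    # Kafka delivers bytes; decode the value column for downstream parsing.
    decoded = events.select(F.col("value").cast("string").alias("payload"))

    # Append to a Delta table, checkpointing for restart recovery.
    (decoded.writeStream
     .format("delta")
     .option("checkpointLocation", "/mnt/checkpoints/promo_events")  # placeholder
     .toTable("bronze.promo_events"))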