Data Engineer
Listed on 2025-12-22
IT/Tech
Data Engineer, Cloud Computing
Sustainable HR PEO & Recruiting provided pay range
This range is provided by Sustainable HR PEO & Recruiting. Your actual pay will be based on your skills and experience; talk with your recruiter to learn more.
$90,000.00/yr - $/yr
We’re partnering with a Madison-based organization that’s investing heavily in a modern data platform to deliver reliable, scalable, and well-governed data across analytics, applications, and integrations.
This Data Engineer will play a key role in designing and building production-grade data pipelines, models, and services that support Power BI reporting, APIs, and downstream systems. You’ll collaborate closely with Infrastructure, QA, Database Administration, and application teams to deliver automated, observable, and secure data workflows.
This is a hybrid role. Candidates must be located in the Madison, WI area and able to work on-site as needed.
What You’ll Do
- Design and evolve canonical data models, data marts, and lake/warehouse structures
- Establish standards for schema design, naming conventions, partitioning, and CDC
- Build resilient batch and streaming pipelines using Microsoft Fabric Data Factory, Spark notebooks, and Lakehouse tables (see the pipeline sketch after this list)
- Design and optimize Delta/Parquet tables in OneLake and Direct Lake models for Power BI
- Create reusable ingestion and transformation frameworks focused on performance and reliability
- Develop secure data services and APIs supporting applications, reporting, and partner integrations
- Define and publish data contracts (OpenAPI/Swagger) with versioning and deprecation standards (see the endpoint sketch after this list)
- Partner with DBA and Infrastructure teams to enforce least-privilege access
- Author and maintain IaC modules using Bicep/ARM (and where appropriate, Terraform or Ansible)
- Own CI/CD pipelines for data, configuration, and infrastructure changes
- Collaborate with QA on unit, integration, and regression testing across data workflows
- Implement logging, lineage, metrics, and alerting for pipelines and datasets
- Define SLAs for data freshness and quality
- Tune Spark performance and manage cloud costs
- Apply data quality rules, RBAC, sensitivity labeling, and audit standards
- Work cross-functionally with Infrastructure, QA, DBA, and application teams
- Contribute to documentation, knowledge sharing, and modern data engineering best practices
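
As a rough illustration of the Lakehouse pipeline work described above, the snippet below shows a minimal PySpark batch step that lands raw CSV files as a partitioned Delta table. The paths, table name, and columns are hypothetical placeholders for illustration, not details of this role.

```python
# Minimal batch ingestion sketch: raw CSV -> partitioned Delta table.
# All paths, names, and columns below are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_ingest").getOrCreate()

raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("Files/landing/orders/")            # hypothetical landing-zone path
)

cleaned = (
    raw
    .withColumn("order_date", F.to_date("order_date"))
    .withColumn("ingested_at", F.current_timestamp())
    .dropDuplicates(["order_id"])            # basic idempotency guard
)

(
    cleaned.write
    .format("delta")
    .mode("append")
    .partitionBy("order_date")               # partitioning keeps downstream scans small
    .saveAsTable("lakehouse.orders_bronze")  # hypothetical Lakehouse table name
)
```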
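
For the data-service and data-contract items, here is a hedged sketch of a versioned REST endpoint whose OpenAPI/Swagger spec is generated automatically. FastAPI is used only as an example framework; the route, response model, and fields are invented for the example.

```python
# Sketch of a versioned data-service endpoint with an OpenAPI contract.
# FastAPI publishes the generated OpenAPI/Swagger spec at /openapi.json.
# The route, response model, and fields are illustrative only.
from datetime import date
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Orders Data Service", version="1.0.0")

class OrderSummary(BaseModel):
    order_id: str
    order_date: date
    total_amount: float

@app.get("/v1/orders/{order_id}", response_model=OrderSummary)
def get_order(order_id: str) -> OrderSummary:
    # In a real service this would query the warehouse or Lakehouse SQL endpoint.
    return OrderSummary(order_id=order_id, order_date=date.today(), total_amount=0.0)
```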
What You’ll Bring
- 3+ years building and operating production ETL/ELT pipelines
- Apache Spark experience (Microsoft Fabric, Synapse, or Databricks)
- Strong T-SQL and Python skills
- Streaming platforms such as Azure Event Hubs or Kafka
- Change Data Capture (CDC) implementations (see the MERGE sketch after this list)
- Infrastructure as Code and CI/CD (Azure DevOps)
- API design for data services (REST/OpenAPI, versioning, authentication)
- Microsoft Fabric Lakehouse architecture and Power BI Direct Lake optimization
- Kusto Query Language (KQL), Eventstream, or Eventhouse exposure
- Experience with data lineage, metadata, or cost governance tools
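
On the CDC requirement above, one common pattern is to apply a staged change feed to a Delta target with a MERGE. The sketch below uses the delta-spark Python API; the table names, key column, and change-type flag are assumptions for illustration only.

```python
# Sketch of applying CDC changes to a Delta target with MERGE (upsert + delete).
# Table names, key column, and the change-type flag are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders_cdc_apply").getOrCreate()

changes = spark.read.format("delta").load("Tables/orders_changes")  # staged CDC rows
target = DeltaTable.forName(spark, "lakehouse.orders_silver")

(
    target.alias("t")
    .merge(changes.alias("c"), "t.order_id = c.order_id")
    .whenMatchedDelete(condition="c.change_type = 'D'")    # source row was deleted
    .whenMatchedUpdateAll(condition="c.change_type = 'U'")  # source row was updated
    .whenNotMatchedInsertAll(condition="c.change_type = 'I'")  # new source row
    .execute()
)
```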