Data Engineer

Job in Madison, Dane County, Wisconsin, 53774, USA
Listing for: FAC Services, LLC
Full Time position
Listed on 2026-01-01
Job specializations:
  • IT/Tech: Data Engineer
Job Description

Want to build your career helping those who build the world?

At FAC Services, we handle the business side so architecture, engineering, and construction firms can focus on shaping the future. Our trusted, high-quality solutions empower our partners and our people to achieve excellence with integrity, precision, and a personal touch.

Job Purpose

FAC Services is investing in a modern data platform to enable trustworthy, timely, and scalable data for analytics, operations, and product experiences. The Data Engineer will design, build, and maintain core data pipelines and models for Power BI reporting, application programming interfaces (APIs), and downstream integrations. This role partners closely with Infrastructure, Quality Assurance (QA), the Database Administrator, and application teams to deliver production-grade, automated data workflows with strong reliability, governance, observability, and Infrastructure as Code (IaC) for resource orchestration.

Primary Responsibilities
  • Design and evolve canonical data models, marts, and lake/warehouse structures to support analytics, APIs, and applications.
  • Establish standards for naming, partitioning, schema evolution, and Change Data Capture (CDC).
  • Build resilient, testable pipelines across Microsoft Fabric Data Factory, notebooks (Apache Spark), and Lakehouse tables for batch and streaming workloads.
  • Design Lakehouse tables (Delta/Parquet) in OneLake; optimize Direct Lake models for Power BI.
  • Implement reusable ingestion and transformation frameworks emphasizing modularity, idempotency, and performance (a minimal upsert sketch follows this list).
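To make the idempotent Lakehouse loads above concrete, here is a minimal sketch of a Delta MERGE upsert as it might look in a Fabric notebook. The table path, key column, and source data are hypothetical; Fabric notebooks provide a ready `spark` session, and `DeltaTable` comes from the Delta Lake Python API.

```python
from delta.tables import DeltaTable

def upsert_orders(spark, source_df, target_path: str) -> None:
    # MERGE keeps the load idempotent: re-running the same batch updates
    # matching rows instead of appending duplicates.
    target = DeltaTable.forPath(spark, target_path)
    (
        target.alias("t")
        .merge(source_df.alias("s"), "t.order_id = s.order_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute()
    )

# Example usage in a Fabric notebook, where `spark` already exists:
# batch = spark.read.parquet("Files/landing/orders/")   # hypothetical path
# upsert_orders(spark, batch, "Tables/orders")          # hypothetical table
```

Because MERGE updates matching keys rather than blindly appending, replaying the same batch leaves the table unchanged, which is what makes the load safe to retry.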
Integration & APIs
  • Engineer reliable data services and APIs to feed web applications, Power BI, and partner integrations.
  • Publish consumer-facing data contracts (OpenAPI/Swagger) and implement change notifications (webhooks/eventing); a sketch of a versioned, paginated endpoint follows this list.
  • Use semantic versioning for breaking changes and maintain a deprecation policy for endpoints and table schemas.
  • Ensure secure connectivity and least-privilege access in coordination with the DBA.
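The posting does not name a web framework, so purely as an illustration, here is a hedged sketch of a versioned, paginated data endpoint using FastAPI, which serves the OpenAPI (Swagger) contract automatically at /docs. The service name, route, model, and fields are invented for the example.

```python
from fastapi import FastAPI, Query
from pydantic import BaseModel

app = FastAPI(title="FAC Data Services", version="1.0.0")  # hypothetical service

class Project(BaseModel):
    project_id: str
    name: str

@app.get("/v1/projects", response_model=list[Project])
def list_projects(
    limit: int = Query(default=50, le=200),  # page size, capped at 200
    offset: int = Query(default=0, ge=0),    # simple offset pagination
) -> list[Project]:
    # Placeholder rows; a real service would query the Lakehouse.
    rows = [Project(project_id="p-001", name="HQ Tower")]
    return rows[offset : offset + limit]
```

Under the semantic-versioning policy above, a breaking change to the response model would ship as a new /v2/projects route while /v1 remains available on a published deprecation schedule.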
Infrastructure as Code (IaC) — Resource Orchestration
  • Resource Orchestration & Security: Author and maintain IaC modules to deploy and configure core resources.
  • Use Bicep/ARM (and, where appropriate, Terraform/Ansible) with CI/CD to promote changes across environments.
DevOps, CI/CD & Testing
  • Own CI/CD pipelines (Git-based promotion) for data code, configurations, and infrastructure.
  • Practice test-driven development with QA (unit, integration, regression) and embed data validations throughout pipelines (see the test sketch below); collaborate with the Data Quality Engineer to maximize coverage.
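As an example of the embedded validations a CI pipeline could run before promoting a change, here is a minimal pytest sketch of a key-integrity check; the function, rules, and column names are hypothetical.

```python
import pandas as pd
import pytest

def validate_keys(df: pd.DataFrame, key: str) -> None:
    # Fail fast if the business key column has nulls or duplicates.
    if df[key].isna().any():
        raise ValueError(f"null values in key column {key!r}")
    if df[key].duplicated().any():
        raise ValueError(f"duplicate values in key column {key!r}")

def test_validate_keys_accepts_clean_data():
    validate_keys(pd.DataFrame({"order_id": ["a", "b"]}), "order_id")

def test_validate_keys_rejects_duplicates():
    with pytest.raises(ValueError):
        validate_keys(pd.DataFrame({"order_id": ["a", "a"]}), "order_id")
```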
Observability & Reliability
  • Instrument pipelines and datasets for lineage, logging, metrics, and alerts; define Service Level Agreements (SLAs) for data freshness and quality (a freshness-check sketch follows this list).
  • Perform performance tuning (e.g., Spark optimization, partition strategies) and cost management across cloud services.
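One way a freshness SLA can be enforced is with a small probe that feeds alerting. A minimal sketch follows; the SLA window, watermark source, and alert hook are all assumptions, not part of the posting.

```python
from datetime import datetime, timedelta, timezone

FRESHNESS_SLA = timedelta(hours=4)  # hypothetical SLA for this dataset

def is_fresh(last_loaded_at: datetime) -> bool:
    # True when the most recent load falls inside the freshness SLA window.
    return datetime.now(timezone.utc) - last_loaded_at <= FRESHNESS_SLA

# In a pipeline, the watermark might come from a control table, e.g.:
# last = spark.sql("SELECT MAX(loaded_at) FROM ctl.watermarks").first()[0]
# if not is_fresh(last):
#     alert("orders dataset is stale")  # hook into whatever alerting exists
```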
Data Quality & Governance
  • Implement rules for deduplication, reconciliation, and anomaly detection across environments (Microsoft Fabric Lakehouse and Power BI); a deduplication sketch follows this list.
  • Contribute to standards for sensitivity labels, Role-Based Access Control (RBAC), auditability, and secure data movement aligned with Infrastructure and Security.
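A common deduplication rule is "keep the latest record per business key." Here is a hedged PySpark sketch of that rule using a window function; the column names are hypothetical examples.

```python
from pyspark.sql import DataFrame, Window
from pyspark.sql import functions as F

def latest_per_key(df: DataFrame, key: str, ts: str) -> DataFrame:
    # Rank rows within each business key by descending timestamp and
    # keep only the newest one, dropping the helper column afterwards.
    w = Window.partitionBy(key).orderBy(F.col(ts).desc())
    return (
        df.withColumn("_rn", F.row_number().over(w))
          .filter(F.col("_rn") == 1)
          .drop("_rn")
    )

# usage: deduped = latest_per_key(raw_df, key="order_id", ts="updated_at")
```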
Collaboration
  • Work cross-functionally with Infrastructure, QA, and application teams; mentor peers in modern data engineering practices; contribute to documentation and knowledge sharing.
  • Hand off to the Data Quality Engineer for release gating; coordinate with the Database Administrator on backup/restore posture, access roles, High Availability / Disaster Recovery (HA/DR), and source CDC readiness.
Qualifications

To perform this job successfully, an individual must be able to perform each primary duty satisfactorily. The requirements listed below are representative of the knowledge, skill, and/or ability required.

Experience (Required)
  • 3+ years designing and operating production ETL/ELT pipelines and data models.
  • Apache Spark (Fabric notebooks, Synapse Spark pools, or Databricks).
  • Advanced T-SQL and Python; experience with orchestration, scheduling, and dependency management.
  • API design for data services (REST/OpenAPI), including versioning, pagination, error handling, authentication, and authorization.
Experience (Preferred)
  • Lakehouse design patterns on Microsoft Fabric; optimization of Power BI with Direct Lake models.
  • Kusto Query Language (KQL); familiarity with Eventstream and Eventhouse.
  • Experience with lineage/metadata platforms and cost governance.
Seniority level: Mid-Senior level
Employment type: Full-time
Job function: Information Technology and Engineering
