Data Engineer; Fabric, PySpark, Salesforce - Remote
Remote / Online - Candidates ideally in Norfolk, Virginia, 23500, USA
Listed on 2025-12-23
Listing for: Conexess Group
Remote/Work from Home position
Job specializations:
- IT/Tech
- Data Engineer, Cloud Computing
Job Description & How to Apply Below
Data Engineer
We are building a modern data platform in Microsoft Fabric and seeking a Senior/Principal Data Engineer to help design, implement, and scale it. This role is hands‑on with pipelines, notebooks, and CI/CD automation, while also playing a key role in architectural decisions and best practices. You will work alongside a team of senior engineers and architects, so collaboration, technical depth, and leadership are equally important.
Key Responsibilities
Engineering in Microsoft Fabric
- Develop, optimize, and maintain Fabric Data Pipelines for ingestion from on‑prem and cloud sources.
- Build PySpark notebooks to implement scalable transformations, merges/upserts, and medallion‑lakehouse patterns.
- Ensure reliability and performance in Lakehouse and Delta Lake design.
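For context on the merge/upsert work named above: in a Fabric notebook this would be Delta Lake's `MERGE INTO ... WHEN MATCHED / WHEN NOT MATCHED` against a Lakehouse table, but the underlying semantics are simply key-based update-or-insert. A minimal plain-Python illustration of those semantics (table names and fields are hypothetical):

```python
def upsert(target: dict, updates: dict) -> dict:
    """Merge `updates` into `target` keyed by row id.

    Mirrors Delta Lake MERGE semantics: matching keys are updated,
    new keys are inserted, untouched rows are kept as-is.
    """
    merged = dict(target)   # keep existing rows
    merged.update(updates)  # update matches, insert new rows
    return merged

# Hypothetical silver-layer table and an incoming batch keyed by order id.
silver = {1: {"status": "open"}, 2: {"status": "open"}}
batch = {2: {"status": "closed"}, 3: {"status": "open"}}
result = upsert(silver, batch)
# result holds ids 1-3: id 2 updated, id 3 inserted, id 1 unchanged
```

In the actual pipeline the same shape appears as `DeltaTable.merge(...).whenMatchedUpdateAll().whenNotMatchedInsertAll().execute()` in PySpark.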
Architecture & Design
- Contribute to the design and evolution of our Fabric‑based platform.
- Define standards and frameworks for schema management, versioning, governance, and data quality.
- Collaborate with peers to evaluate trade‑offs and guide enterprise‑scale architecture.
DevOps & CI/CD
- Build and maintain deployment pipelines for Fabric artifacts (notebooks, pipelines, lakehouses).
- Establish environment‑aware configuration and promotion workflows across Dev/QA/Prod.
- Drive automation to reduce manual effort and improve reliability.
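"Environment-aware configuration" in the promotion workflow above typically means resolving per-environment settings (workspace, lakehouse, connection names) at deploy time rather than hard-coding them. A minimal sketch, with hypothetical setting names, of what such a lookup might look like:

```python
# Hypothetical per-environment settings for promoting Fabric artifacts
# across Dev/QA/Prod. Real values would come from pipeline variables
# or a config store, not a source-controlled literal.
SETTINGS = {
    "dev":  {"workspace": "Analytics-Dev",  "lakehouse": "lh_dev"},
    "qa":   {"workspace": "Analytics-QA",   "lakehouse": "lh_qa"},
    "prod": {"workspace": "Analytics-Prod", "lakehouse": "lh_prod"},
}

def resolve(env: str) -> dict:
    """Return deployment settings for an environment, failing fast on typos."""
    try:
        return SETTINGS[env.lower()]
    except KeyError:
        raise ValueError(f"unknown environment: {env!r}") from None
```

Failing fast on an unknown environment name is the point: a typo should stop the promotion, not silently deploy to the wrong workspace.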
Mentorship
- Work as a peer‑leader with other senior engineers and architects to shape platform strategy.
- Mentor other engineers and contribute to building a strong engineering culture.
Qualifications
- 7+ years in data engineering, with proven impact in enterprise environments.
- Strong hands‑on expertise in Microsoft Fabric (Pipelines, Lakehouse, Notebooks, OneLake).
- Advanced PySpark skills for data processing at scale.
- Expertise in Delta Lake, medallion architecture, schema evolution, and data modeling.
- Experience with CI/CD for data engineering, including Fabric asset deployments.
- Strong SQL and experience with SQL Server/Azure SQL.
- Experience helping launch or scale Microsoft Fabric adoption.
- Familiarity with data governance, lineage, and compliance frameworks.
- Knowledge of real‑time/streaming data patterns.
- Exposure to Salesforce, CRM, or DMS integrations.
- Excellent communication and collaboration skills for working with peer‑level experts.
- In‑depth knowledge of the Azure ecosystem (API Management, Azure Functions, etc.).
Seniority level:
Mid‑Senior level
Employment type:
Full‑time
Job function:
Information Technology
Industry: IT Services and IT Consulting