Senior Data Architect

Job in Gurgaon, Haryana, India
Listing for: EXL
Full Time position
Listed on 2026-02-18
Job specializations:
  • IT/Tech
    Data Engineer, Cloud Computing
Job Description & How to Apply Below
About the Role

We are seeking a highly skilled Data Architect with deep expertise in the Microsoft Azure ecosystem and Microsoft Fabric to design, build, and optimize scalable data pipelines and Medallion lakehouse architectures. The candidate will provide thought leadership in transitioning legacy environments to unified Fabric workspaces, delivering robust, secure, and high-performance solutions that power AI and analytics.

Key Responsibilities

- Fabric & Lakehouse Implementation:
Design and maintain unified data environments using Microsoft Fabric (OneLake, Lakehouses, and Warehouses) and Azure Synapse.
- Pipeline Orchestration:
Develop scalable data pipelines using Fabric Data Factory and Azure Data Factory to ingest data from diverse sources (APIs, on-premises systems, cloud services).
- Architecture Optimization:
Build and optimize Medallion architectures (Bronze/Silver/Gold) using Delta Lake and Fabric Notebooks (Spark).
- Thought Leadership:
Work with stakeholders to translate business requirements into technical roadmaps, specifically advising on when to use Fabric vs. Azure Databricks.
- Unified Governance:
Leverage Microsoft Purview and Fabric’s native security features to ensure data quality, consistency, and RBAC/Sensitivity labeling.
- Infrastructure & Automation:
Implement Infrastructure as Code (IaC) using Terraform or Bicep for automated provisioning of Fabric capacities and Azure resources.
- Advanced Analytics Support:
Design Star Schemas and utilize Direct Lake mode in Power BI to provide high-performance reporting for data scientists and analysts.
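
The Medallion responsibilities above follow a well-known layering pattern: Bronze holds raw ingested records, Silver holds cleansed and deduplicated data, and Gold holds business-level aggregates. A minimal plain-Python sketch of that flow is below; all field and table contents are hypothetical, and a real Fabric or Databricks pipeline would express these steps as PySpark transformations over Delta tables rather than Python lists.

```python
from collections import defaultdict

def to_silver(bronze_rows):
    """Cleanse: drop rows missing the key, keep the latest version per order_id."""
    latest = {}
    for row in bronze_rows:
        if row.get("order_id") is None:
            continue  # a real pipeline would quarantine bad records instead
        key = row["order_id"]
        if key not in latest or row["updated_at"] > latest[key]["updated_at"]:
            latest[key] = row
    return list(latest.values())

def to_gold(silver_rows):
    """Aggregate: total revenue per customer (a Gold-layer mart)."""
    totals = defaultdict(float)
    for row in silver_rows:
        totals[row["customer"]] += row["amount"]
    return dict(totals)

# Bronze layer: raw records, including a late correction and a bad row.
bronze = [
    {"order_id": 1, "customer": "acme",   "amount": 10.0, "updated_at": 1},
    {"order_id": 1, "customer": "acme",   "amount": 12.0, "updated_at": 2},
    {"order_id": None, "customer": "bad", "amount": 99.0, "updated_at": 1},
    {"order_id": 2, "customer": "globex", "amount": 5.0,  "updated_at": 1},
]
silver = to_silver(bronze)
gold = to_gold(silver)
```

The key design point the role calls for is that each layer is derived, reproducible, and independently queryable, so downstream consumers (Power BI, data scientists) read from Silver or Gold without touching raw ingests.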

Required Technical Skills

- Microsoft Fabric Ecosystem:
Expert-level knowledge of OneLake, Fabric Capacities, Lakehouse/Warehouse artifacts, and Shortcuts.
- Azure Data Services:
Proven experience with Azure Data Factory, Azure Databricks, and Azure Data Lake Storage Gen2.
- Data Processing:
Strong proficiency in PySpark (Spark SQL & DataFrames) for complex transformations and performance tuning.
- Languages:
Advanced SQL and Python skills for pipeline development and orchestration.
- DevOps & IaC:
Experience with Terraform/Bicep and CI/CD practices using Azure DevOps or GitHub Actions.
- Real-Time Data:
Experience with streaming data using Fabric Eventstreams or Azure Event Hubs.
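
Several of the skills above center on incremental ingestion, the pattern that Azure Data Factory and Fabric Data Factory pipelines commonly implement with a stored watermark: each run reads only rows modified since the last high-water mark, then advances it. The plain-Python sketch below illustrates the logic only; field names are hypothetical, and in practice the watermark would live in a control table and the filter would be pushed down to the source query.

```python
def incremental_load(source_rows, watermark):
    """Return rows newer than the watermark, plus the advanced watermark."""
    new_rows = [r for r in source_rows if r["modified_at"] > watermark]
    # If nothing new arrived, keep the previous watermark unchanged.
    new_watermark = max((r["modified_at"] for r in new_rows), default=watermark)
    return new_rows, new_watermark

source = [
    {"id": 1, "modified_at": 100},
    {"id": 2, "modified_at": 150},
    {"id": 3, "modified_at": 200},
]
batch1, wm = incremental_load(source, watermark=120)  # picks up ids 2 and 3
batch2, wm = incremental_load(source, watermark=wm)   # nothing new
```

Because each run filters strictly past the saved watermark, re-running a pipeline is idempotent: already-loaded rows are never re-extracted.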

Soft Skills & Mindset

- Strategic Thinking:
Ability to navigate the evolving Microsoft roadmap and choose the right tool for each scale and cost requirement.
- Collaboration:
Strong ability to bridge the gap between "pro-code" engineering and "low-code" analytics users.
- Communication:
Strong client-management skills, with the ability to explain complex cloud architectures to non-technical stakeholders.
Position Requirements
10+ years of work experience