Lead Azure Data Engineer & Architect

Job in Wilmington, New Castle County, Delaware, 19894, USA
Listing for: MetLife
Full Time position
Listed on 2026-01-01
Job specializations:
  • Software Development
    Data Engineer
Job Description & How to Apply Below

Role Value Proposition

The MetLife Corporate Functions Data Office is part of the Data and Analytics (D&A) organization within GTO. Our mission is to implement scalable data solutions that generate actionable insights for our stakeholders. We partner with Technology and with our business and functional partners to build and deploy the next generation of data solutions for MetLife as we head into our New Frontier strategy!

The Lead Big Data Engineer and Data Architect plays a critical role in big data development within the data analytics engineering organization of MetLife Data & Analytics. The position is responsible for the architecture and design of data and analytics solutions, building ETL pipelines, data warehouses, and reusable components using cutting-edge big data and cloud technologies. The role is based in Cary, NC; Tampa, FL; Wilmington, DE; or New York, NY, and supports MetLife's commitment to data-driven decision-making and operational excellence.

Key Responsibilities
  • Design end-to-end data architecture for data hubs and data products, covering data from source systems through consumption via web applications and reporting and analytics solutions.
  • Ingest large volumes of data from various platforms for analytics needs and write high-performance, reliable, and maintainable ELT code.
  • Collect, store, process, and analyze large datasets to build and implement extract, transform, load (ETL) processes.
  • Develop reusable frameworks that reduce development effort and deliver cost savings for projects.
  • Develop quality code with performance optimizations built in from the development stage.
  • Learn new technologies and be ready to work with new, cutting-edge cloud technologies.
  • Work with teams spread across the globe in driving the delivery of projects and recommend development and performance improvements.
  • Apply extensive experience with various database types and the knowledge to choose the right one for each need.
  • Strong understanding of data tools and ability to leverage them to understand the data and generate insights.
  • Hands‑on experience in building/designing at‑scale Data Lake, Data warehouses, data stores for analytics consumption on‑prem and Cloud (real‑time as well as batch use cases).
  • Utilize Cloud technologies (preferably Azure) to enable PaaS‑centric enterprise solutions.
  • Implement solutions that support dynamic scaling, including throttling and bursting for high‑volume data workloads.
  • Establish and evangelize modern software development practices, including CI/CD, automated testing, and code quality standards.
  • Develop and support an API catalog for data services, ensuring standardization and security.
  • Optimize reusable frameworks, Spark jobs for performance and cost efficiency in large‑scale environments.
  • Work with business analysts and functional analysts to gather requirements and implement ETL solutions.
Essential Business Experience and Technical Skills Required
  • 10+ years of overall delivery experience, including 6+ years of recent experience in data engineering.
  • Bachelor's or Master's degree in Information Technology, Computer Science, or a relevant domain.
  • Databricks and/or Microsoft Azure certifications.
  • Strong analytical skills for working with unstructured datasets.
  • Experience with both traditional data architectures (e.g., Oracle, SQL Server) and modern cloud architectures (e.g., AWS, Azure), and knowledge of data architecture patterns.
  • Strong experience in building/designing Data warehouses, data stores for analytics consumption on Cloud (real‑time as well as batch use cases).
  • Ability to work with business analysts and functional analysts to gather requirements and implement ELT solutions.
  • Proficiency and extensive experience with Spark & Python/ Scala and performance tuning.
  • Hands-on experience building and implementing data ingestion and curation processes using cloud data tools such as Cosmos DB, Data Factory, Spark (Scala/Python), Databricks, and Delta Lake, with code-versioning experience using Azure DevOps.
  • Strong problem-solving skills and excellent communication skills, both written and verbal.
Preferred
  • E…