
Data Engineer

Job in Waterloo, Kitchener, Ontario, Canada
Listing for: Manulife
Full Time position
Listed on 2026-02-16
Job specializations:
  • IT/Tech
    Data Engineer, Data Analyst
Job Description & How to Apply Below
Location: Waterloo

Manulife is replacing and building new data capabilities to fuel our bold ambition of becoming a digital customer leader! We are seeking a skilled and motivated data engineer to join our diverse team and play a key role in implementing, optimizing, and maintaining the assets that deliver these capabilities. The ideal candidate has a solid background in data engineering, ETL processes, and data integration, with a passion for using data to drive strategic business decisions.

Position Responsibilities:

  • Data Pipeline Development: Design, develop, and manage data pipelines that facilitate the detailed extraction, transformation, and loading of data from diverse sources.
  • Data Mapping & Integration: Collaborate closely with multi-functional teams to understand and design schemas for data from various source systems and other transactional or application databases, ensuring accuracy and reliability.
  • ETL Optimization: Continuously improve and optimize ETL processes to enhance data flow efficiency, minimize latency, and support real-time and batch processing requirements.
  • Data Transformation: Implement data cleansing, enrichment, and transformation processes to ensure high-quality data is available for analysis and reporting.
  • Data Quality Assurance: Design testing plans, develop and implement data quality checks, validation rules, and supervising mechanisms to maintain data accuracy and integrity.
  • Platform Improvement: Collaborate with various technical resources from across the organization to identify and implement improvements to the infrastructure, integrations, and functionalities.
  • Data Architecture: Work closely with business leads and data architects to design, implement, and manage end-to-end architecture based on business requirements.
  • User Documentation: Develop and maintain detailed documentation for data pipelines, processes, and configurations to support seamless onboarding.
  • Teamwork: Partner with other data engineers, data analysts, business collaborators, and data scientists to understand data requirements and translate them into effective data engineering solutions.
  • Performance Monitoring: Monitor data pipeline performance and solve issues to ensure efficient data flow and proactively find opportunities for improvement.
  • Data Governance and Compliance: Ensure consistency with data privacy and compliance standards throughout the data lifecycle.
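The pipeline responsibilities above follow a common extract–cleanse–validate–load pattern. As a rough sketch of that flow (the systems, field names, and rules here are purely illustrative, not Manulife's actual pipelines, which the posting says run on Azure Data Factory and Databricks):

```python
# Illustrative extract -> transform -> validate -> load flow.
# All names and rules are hypothetical examples.

def extract(rows):
    """Simulate pulling raw records from a source system."""
    return list(rows)

def transform(rows):
    """Cleanse and enrich: trim strings, drop records missing a key."""
    cleaned = []
    for row in rows:
        if row.get("policy_id") is None:
            continue  # data cleansing: discard unkeyed records
        cleaned.append({**row, "name": row.get("name", "").strip()})
    return cleaned

def validate(rows):
    """Data quality check: every record must have a non-empty name."""
    return [r for r in rows if not r["name"]]

def load(rows, target):
    """Append validated records to the target store."""
    target.extend(rows)
    return len(rows)

raw = [
    {"policy_id": 1, "name": "  Alice "},
    {"policy_id": None, "name": "ghost"},
    {"policy_id": 2, "name": "Bob"},
]
warehouse = []
staged = transform(extract(raw))
assert not validate(staged)  # quality gate before loading
loaded = load(staged, warehouse)
```

In a production pipeline each stage would be a scheduled, monitored activity (e.g. an ADF pipeline invoking Databricks notebooks) rather than in-process function calls.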
Required Qualifications:

  • Bachelor’s Degree in Computer Science, Information Technology, or a related field. Master's degree is a plus.
  • 5+ years of experience as a Data Engineer, with a track record of efficiently implementing and maintaining data pipelines, scheduling, monitoring, notification, and ETL processes using Azure Data Factory, Databricks, Python/PySpark, Java, and Scala.
  • Understanding of Azure infrastructure: subscriptions, resource groups, resources, access control with RBAC (role-based access control), integration with Azure AD and Azure security principals (user group, service principal, managed identity), network concepts (VNet, subnet, NSG rules, private endpoints), and password/credential/key management and data protection.
  • Experience deploying and integrating Azure Data Services (ADF, Databricks) using DevOps tools and principles:
    GitHub repositories, Jenkins CI/CD pipelines, integrated unit tests, etc.
  • Knowledge of Azure Data Lake Storage (ADLS Gen2) and its topology on blob storage with file hierarchy, storage account, containers, and folders.
  • Proficient in data mart fact and dimension design concepts, ETL/ELT logic to perform upserts and type-2 changes, and implementing them with Python/PySpark/stored procedures/SQL in ADLS (with Lakehouse architecture using Databricks) or in Azure Synapse (with Dedicated SQL pool).
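The upsert and type-2 logic named in the qualifications above amounts to: expire the current dimension row when a tracked attribute changes, and insert a new current version. A minimal sketch of that logic in plain Python (a Databricks pipeline would typically express this as a Delta Lake MERGE in PySpark or SQL; the schema and names here are hypothetical):

```python
# SCD type-2 upsert sketch: expire changed current rows, insert new versions.
# Schema (key, attr, valid_from, valid_to, is_current) is illustrative only.

def scd2_upsert(dim, incoming, load_date):
    """Apply incoming source rows to a type-2 dimension table (list of dicts)."""
    current = {r["key"]: r for r in dim if r["is_current"]}
    for row in incoming:
        existing = current.get(row["key"])
        if existing is None:
            # brand-new key: insert as the current version
            dim.append({**row, "valid_from": load_date,
                        "valid_to": None, "is_current": True})
        elif existing["attr"] != row["attr"]:
            # tracked attribute changed: close out old version, add new one
            existing["valid_to"] = load_date
            existing["is_current"] = False
            dim.append({**row, "valid_from": load_date,
                        "valid_to": None, "is_current": True})
        # unchanged rows are left alone, so the upsert is idempotent
    return dim

dim = [{"key": "P1", "attr": "Gold", "valid_from": "2025-01-01",
        "valid_to": None, "is_current": True}]
incoming = [{"key": "P1", "attr": "Platinum"},
            {"key": "P2", "attr": "Silver"}]
scd2_upsert(dim, incoming, "2026-02-16")
```

After the run, the dimension holds the expired Gold row for P1, a new current Platinum row, and a first version for P2, preserving full history for point-in-time reporting.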
Preferred Qualifications:

  • Experience using Power BI to connect to sources with ADLS (including Delta Lake), Databricks SQL, and Azure Synapse (SQL pool), design semantic layer data models, create and publish reporting content (datasets, paginated reports, interactive dashboards), and manage workspaces.
  • Solid understanding of data privacy and compliance regulations and standard methodologies.
  • Excellent problem-solving skills and the ability to solve…