
Data Team Lead / Data Architect (MongoDB, Redshift, Glue) B2B SaaS

Job in Toronto, Ontario, Canada
Listing for: Confidential Company
Full Time position
Listed on 2025-12-17
Job specializations:
  • IT/Tech
    Data Engineer, Cloud Computing, Big Data, Data Analyst
Salary/Wage Range: CAD 150,000 - 200,000 per year

Full-time position with paid Vacation & Holidays + Benefits Program

Must be a Canadian Citizen or Canadian Permanent Resident

100% remote from within Canada (excluding Quebec). Must work Eastern Time (EST) hours.

Small B2B SaaS cloud-based software company (fewer than 800 employees) that is experiencing high growth and rolling out leading-edge AI-enabled features. They need a Data Expert / Data Team Lead / Data Architect who can lead their monolith-to-microservices migration, moving from MongoDB over to Redshift using AWS Glue, PySpark, dbt, and Python.

This is NOT a typical Senior Data Engineer role: the new hire must be the actual LEADER of the data migration (Data Team Lead / Architect).

This Lead Data Engineer/Data Architect will be the highest-ranking Data Expert on the team and report to the VP of Technical Engineering. There is no one else.

This person must lead the way themselves and recommend how to do it. This is NOT just a Senior Engineer who follows orders and uses ETL to migrate or modernize the data.

  • Must be an expert in Database Management and Database Administration, especially MongoDB and Domain Data Modeling.

This seasoned Senior Data Engineer will help lead the modernization of our data infrastructure as we transition from a tightly coupled monolithic system to a scalable, microservices-based architecture. This role is central to decoupling legacy database structures, enabling domain-driven service ownership, and powering real-time analytics, operational intelligence, and AI initiatives across our platform.

Key Responsibilities
  • Monolith-to-Microservices Data Transition:
    Lead the decomposition of monolithic database structures into domain-aligned schemas that enable service independence and ownership.
  • Pipeline Development & Migration:
    Build and optimize ETL/ELT workflows using Python, PySpark/Spark, AWS Glue, and dbt, including schema/data mapping and transformation from on-prem and cloud legacy systems into data lake and warehouse environments (see the sketch after this list).
  • Domain Data Modeling:
    Define logical and physical domain-driven data models (star/snowflake schemas, data marts) to serve cross-functional needs: BI, operations, streaming, and ML.
  • Legacy Systems Integration:
    Design strategies for extracting, validating, and restructuring data from legacy systems with embedded logic and incomplete normalization.
  • Database Management:
    Administer, optimize, and scale SQL (MySQL, Aurora, Redshift) and NoSQL (MongoDB) platforms to meet high-availability and low-latency needs.
  • Cloud & Serverless ETL:
    Leverage AWS Glue Catalog, Crawlers, Lambda, and S3 to manage and orchestrate modern, cost-efficient data pipelines.
  • Monitoring & Optimization:
    Implement observability (CloudWatch, logs, metrics) and performance tuning across Spark, Glue, and Redshift workloads.
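
As a concrete illustration of the kind of pipeline work this role owns, below is a minimal PySpark sketch of one extract-flatten-stage step from MongoDB toward Redshift. This is illustrative only, not the company's actual pipeline: it assumes the MongoDB Spark Connector (v10+) is available, and the URIs, database/collection names, and columns are hypothetical placeholders. A real migration would add validation, incremental loads, and a Redshift COPY or dbt model on top.

    # Minimal PySpark sketch (illustrative only): read one MongoDB collection,
    # flatten its embedded documents toward a relational shape, and stage the
    # result in S3 for Redshift. All URIs, names, and columns are placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (
        SparkSession.builder
        .appName("mongo-to-redshift-stage")
        # Assumes the MongoDB Spark Connector (v10+) and an S3 filesystem
        # implementation (e.g. hadoop-aws) are on the classpath.
        .getOrCreate()
    )

    # Extract: load a hypothetical "orders" collection from the source MongoDB.
    orders = (
        spark.read.format("mongodb")
        .option("connection.uri", "mongodb://source-host:27017")
        .option("database", "app")
        .option("collection", "orders")
        .load()
    )

    # Transform: project nested document fields into flat, typed columns
    # suitable for a warehouse fact table.
    orders_flat = orders.select(
        F.col("_id").cast("string").alias("order_id"),
        F.col("customer.id").cast("string").alias("customer_id"),
        F.col("total").cast("decimal(12,2)").alias("order_total"),
        F.to_timestamp("createdAt").alias("created_at"),
    )

    # Load: write Parquet to an S3 staging prefix; a Redshift COPY, Glue job,
    # or dbt model would take it from there.
    orders_flat.write.mode("overwrite").parquet("s3://example-lake/stage/orders/")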
Minimum Requirements
  • 10+ years in data engineering with a proven record of modernizing legacy data systems and driving large-scale migrations from monolithic to microservices architectures (preparing modernized data for use in AI modules).
  • Must have experience as an actual Data Lead/Data Architect leading the way, NOT just a Senior Data Engineer following orders. Must be a LEADER: able to give direction, make recommendations, and push back when needed.
  • Must be an expert in MongoDB and Redshift, and an ace with AWS Glue, PySpark, dbt, and Python.
  • Must be a “very hands-on” engineer. This is an individual contributor role; there are no other data engineers. You may be able to get some offshore assistance, but the migration will mostly be up to this person.
  • Must have great English communication skills and work well in a team environment. The position works closely with solution architects and domain owners to design resilient pipelines and data models that reflect business context and support scalable, secure, and auditable data access for internal and external consumers.
  • Must be a Canadian Citizen or Canadian Permanent Resident. We CANNOT hire anyone in Quebec. Must live in Canada.
  • Must be comfortable working for a small company with fewer than 800 employees.
Seniority level
  • Mid-Senior level
Employment type
  • Full-time
Job function
  • Information Technology
Industries
  • IT System Custom Software Development
