Senior Data Engineer - AI Personalization & Customer Data
Overview
The AI Products and Platform (AIPP) team builds the foundational capabilities that enable AI-driven personalization, decisioning, and real-time customer experiences. This work supports intelligent interactions across digital and assisted channels, helping ensure customers receive timely, relevant, and responsible experiences.
Responsibilities
- Build and operate real-time data pipelines that ingest customer events (payments, app behavior, IVR interactions) and power ML product decisions across channels.
- Develop event-driven architectures using GCP services (Pub/Sub, Dataflow, Cloud Run, BigQuery) to support low-latency triggers and segmentation.
- Partner with Data Scientists and ML Engineers to productionize AIPP models (e.g., ML2), transforming model outputs into reliable, explainable data feeds.
- Translate business and marketing requirements into scalable data logic, including customer segments, eligibility rules, and offer mappings.
- Enable IVR and CCAI experiences by supporting data flows for intelligent routing, proactive messaging, and self-serve journeys.
- Support internal tools and future UI capabilities by providing clean datasets, schemas, and APIs for Software and Full Stack Engineers.
- Monitor, troubleshoot, and improve production pipelines, ensuring data quality, reliability, and observability during peak periods.
- Optimize for performance and cost, balancing streaming workloads, BigQuery usage, and API integrations.
- Enable responsible AI practices through validation, lineage, and monitoring so personalization decisions remain transparent and trustworthy.
- Contribute to long-term architecture and standards, helping AIPP evolve into a steady-state personalization platform.
Requirements
- 4+ years of experience in Data Engineering or backend data systems.
- Strong experience with Python and SQL in production environments.
- Hands‑on experience with GCP, especially BigQuery and Pub/Sub.
- Experience designing and operating streaming or event‑driven pipelines.
- Understanding of data modeling, data quality, and pipeline reliability.
- Experience integrating data via APIs, including retries, rate limiting, and failure handling.
- Bachelor’s degree in Computer Science, Engineering, Data Systems, or a related technical field OR equivalent hands‑on industry experience building production‑grade data pipelines.
Preferred Qualifications
- Experience with Adobe Experience Platform or other CDPs.
- Exposure to AI/ML feature pipelines or real‑time model scoring.
- Experience with Dataflow (Apache Beam), dbt, or Airflow/Composer.
- Familiarity with personalization or decisioning platforms (PEGA‑like concepts).
- Telecom or high‑volume consumer data experience.
Location:
Calgary, Alberta, Canada
Salary: CA$–CA$