Haptiq is a leader in AI-powered enterprise operations, delivering digital solutions and consulting services that drive value and transform businesses. We specialize in using advanced technology to streamline operations, improve efficiency, and unlock new revenue opportunities, particularly within the private capital markets.
Our integrated ecosystem includes:
- PaaS (Platform as a Service): the Core Platform, an AI-native enterprise operations foundation built to optimize workflows, surface insights, and accelerate value creation across portfolios.
- SaaS (Software as a Service): a cloud platform delivering unmatched performance, intelligence, and execution at scale.
- S&C (Solutions and Consulting Suite): modular technology playbooks designed to manage, grow, and optimize company performance.

With over a decade of experience supporting high-growth companies and private equity-backed platforms, Haptiq brings deep domain expertise and a proven ability to turn technology into a strategic advantage.
Opportunity
This position is for a Cloud Data Engineer with a background in Python, DBT, SQL, and data warehousing for enterprise-level systems.
Responsibilities and Duties
- Design, develop, and deploy Python scripts and ETL processes with Prefect and Airflow to prepare data for analysis (a minimal sketch follows this list).
- Model dimensional and denormalized schemas for optimal reporting performance and data discovery.
- Design AI-friendly DB schemas and ontologies.
- Architect cloud ops solutions for data topologies.
- Transform and migrate data with Python, DBT, and Pandas.
- Work with event-based/streaming technologies for real-time ETL.
- Ingest and transform structured, semi-structured, and unstructured data.
- Optimize ETL jobs for performance and scalability to handle big data workloads.
- Monitor and troubleshoot ETL jobs to identify and resolve issues or bottlenecks.
- Implement best practices for data management, security, and governance with Prefect, DBT, and Pandas.
- Write SQL queries, program stored procedures, and reverse-engineer existing data pipelines.
- Perform code reviews to ensure fit to requirements, optimal execution patterns, and adherence to established standards.
- Assist with automated release management and CI/CD processes.
- Validate and cleanse data, handling error conditions gracefully.
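To give a concrete flavor of the pipeline work described above, here is a minimal sketch assuming Prefect 2.x and Pandas; the file, column, and table names are hypothetical placeholders, not Haptiq's actual stack.

```python
# Minimal Prefect 2.x flow: extract -> validate/cleanse -> load, with Pandas.
# All paths, columns, and table names below are hypothetical.
import pandas as pd
from prefect import flow, task


@task(retries=2, retry_delay_seconds=30)
def extract(path: str) -> pd.DataFrame:
    # Read the raw source; in practice this might pull from S3 or an API.
    return pd.read_csv(path)


@task
def validate_and_cleanse(df: pd.DataFrame) -> pd.DataFrame:
    # Drop rows missing required keys and coerce dates, discarding
    # unparseable records instead of failing the whole run.
    df = df.dropna(subset=["order_id", "customer_id"])
    df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")
    return df[df["order_date"].notna()]


@task
def load(df: pd.DataFrame, table: str) -> None:
    # Stand-in for a warehouse load (e.g., a COPY into Redshift or Snowflake).
    df.to_parquet(f"{table}.parquet", index=False)


@flow(name="orders-etl")
def orders_etl(source_path: str = "raw_orders.csv") -> None:
    raw = extract(source_path)
    clean = validate_and_cleanse(raw)
    load(clean, table="stg_orders")


if __name__ == "__main__":
    orders_etl()
```

In production, a Prefect deployment schedule would replace the `__main__` entry point, and monitoring would lean on task-level retries and Prefect's run logs.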
Qualifications
- 3+ years of Python development experience, including Pandas.
- 5+ years writing complex SQL queries with RDBMSes.
- 5+ years of experience developing and deploying ETL pipelines using Airflow, Prefect, or similar tools.
- Experience with cloud-based data warehouses in environments such as RDS, Redshift, or Snowflake.
- Experience with data warehouse design: OLTP, OLAP, dimensions, and facts (see the sketch after this list).
- Experience with cloud-based data architectures, messaging, and analytics.
- Bachelor’s degree in Computer Science or equivalent (preferred).
- Experience with:
  - Docker
  - AWS Lambda / Step Functions
  - Databricks
  - PySpark
- Cloud certifications
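As a rough illustration of the facts-and-dimensions requirement above, the sketch below splits a hypothetical denormalized orders extract into a customer dimension and an orders fact table with Pandas; all column names are invented for the example.

```python
# Illustrative star-schema split: one denormalized extract becomes a
# customer dimension plus an orders fact table. Columns are hypothetical.
import pandas as pd

orders = pd.DataFrame({
    "order_id": [1, 2, 3],
    "customer_id": [10, 10, 11],
    "customer_name": ["Acme", "Acme", "Globex"],
    "amount": [250.0, 99.5, 410.0],
})

# Dimension: one row per customer, holding descriptive attributes.
dim_customer = (
    orders[["customer_id", "customer_name"]]
    .drop_duplicates()
    .reset_index(drop=True)
)

# Fact: measures plus a foreign key referencing the dimension.
fact_orders = orders[["order_id", "customer_id", "amount"]]

print(dim_customer)
print(fact_orders)
```

In a warehouse, the same split would typically be expressed as DBT models materialized as tables.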
We value creative problem solvers who learn fast, work well in an open and diverse environment, and enjoy raising the bar for success. We work hard, but we also choose to have fun while doing it.
The annual compensation range for this role is $80,000 - $95,000 CAD for a mid-level candidate. For a senior-level candidate, the range is $90,000 - $110,000 CAD.