The ideal candidate has hands‑on experience with Databricks, Power BI, advanced SQL, Fivetran, and data integration from on‑prem and cloud systems (including Workday and Salesforce), along with a solid understanding of Python and Databricks Notebooks.
Key Responsibilities
Design, develop, and maintain data pipelines and workflows for ingestion, transformation, and delivery of clean, reliable business data for analysis and reporting.
Collaborate with business teams to gather requirements, perform data validation, and support UAT/demos.
Extract, integrate, and transform data from diverse systems including Workday, Salesforce, on‑prem and SaaS applications using APIs, JDBC/ODBC, and native/direct connections.
Write and optimize advanced SQL for data modeling, transformation, and cost‑efficient query execution.
Build and optimize Power BI datasets, models, and dashboards for business insights and performance tracking.
Use Databricks Notebooks with Python and/or Scala for data preparation, automation, and analysis.
Monitor and optimize compute resources and job performance for cost control and efficiency.
Document data pipelines, transformation logic, and architecture for transparency and maintainability.
Education and Experience
2–5 years in a Data Engineering or Business Data Analysis role.
Strong hands‑on experience with Databricks (including Delta Lake, Spark SQL, and Notebooks).
Strong working knowledge of Power BI (data modeling, DAX, dashboard design, publishing).
Advanced SQL skills for large‑scale data transformation and optimization.
Proficiency in Python and/or Scala for data processing in Databricks.
Proven experience with Fivetran or similar ETL/ELT tools for automated data ingestion.
Experience integrating data from business applications such as Workday and Salesforce (via APIs, reports, or connectors).
Ability to manage and transform data from on‑premises and cloud systems.
Strong communication skills with experience in business requirement gathering and data storytelling.
Bachelor’s degree in Computer Science, Data Engineering, Information Systems, Statistics, or a related field.
Relevant certifications (e.g., Databricks Certified Data Engineer, Microsoft Power BI Data Analyst, Workday Reporting Specialist) are a plus.
Preferred / Nice‑to‑Have
Fundamental knowledge of Apache Spark (architecture, RDDs, DataFrames, optimization).
Experience in query and compute cost optimization within Databricks or similar platforms.
Familiarity with data governance, security, and metadata management.
Exposure to CI/CD for data pipelines using Git or DevOps tools.
Experience with GenAI agents and/or ML.
Decision Making and Supervision
Work under minimal supervision.
Make decisions and recommendations requiring analysis and interpretation within established procedures.
Working Conditions
Generally comfortable working conditions, with some lifting and onsite installation work.
Moderate visual concentration in the use of a video display terminal.
The successful candidate must be able to work in Canada and obtain clearance under the Canadian Controlled Goods Program (CGP).