Data Engineering Specialist - Databricks
Are you an experienced data engineering professional interested in joining a talented, collaborative team that develops Cloud, Data and AI solutions for leading global clients? Do you regularly find yourself in conversations about disparate, dispersed data sources and feel the urge to build robust data engineering infrastructure to bring order to that mess? If you have the technical expertise to build pipelines and enable data engineering and ETL in organizations, a genuine flair for solving problems, and the mindset to collaborate with diverse clients and team members alike, we're looking for you.
Expertise in Databricks and Spark/PySpark is required for this role.
This is a hands-on role.
Your future duties and responsibilities
As a Data Engineer (Databricks) on this team, your responsibilities will include:
- Delivering data preparation, transformation, and ETL/ELT development on Azure (Azure Data Factory (ADF), Azure Databricks, Azure Data Lake (ADLS), Synapse, Azure SQL Database, Azure SQL Data Warehouse) or AWS (AWS Glue, Amazon S3, Amazon Redshift, Amazon Athena, and Amazon RDS)
- Building batch and streaming data pipelines with Databricks (a brief illustrative PySpark sketch follows this list)
- Working in a business environment with large-scale, complex datasets and dispersed data sources
- Applying SQL (including advanced SQL) and Python skills as needed
- Gathering client requirements, coding and building Data Engineering pipelines in Azure
- Implementing proofs of concept to demonstrate value, then packaging and scaling them into full data engineering solutions across both on-prem and cloud environments
- Collaborating strategically with clients to explore data sources and build Power BI dashboards on top of Azure/AWS data engineering work
- Supporting and collaborating with the other Data Engineers and Data Scientists on the team, sharing technical knowledge
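To give a flavour of the hands-on work, here is a minimal, illustrative PySpark sketch of a Databricks streaming ingestion job. It is not project code: the schema, paths, and table name are placeholders chosen for the example.

    # Illustrative sketch only: ingest JSON events from a landing folder and
    # append them to a Delta table. Schema, paths, and table name are placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql.types import (
        StructType, StructField, StringType, TimestampType, DoubleType
    )

    spark = SparkSession.builder.appName("streaming-ingest-sketch").getOrCreate()

    event_schema = StructType([
        StructField("event_id", StringType()),
        StructField("event_time", TimestampType()),
        StructField("amount", DoubleType()),
    ])

    events = (
        spark.readStream
        .schema(event_schema)          # streaming file sources need an explicit schema
        .json("/mnt/landing/events/")  # placeholder landing path
    )

    (
        events.writeStream
        .format("delta")               # Delta Lake is the default table format on Databricks
        .option("checkpointLocation", "/mnt/checkpoints/events/")  # placeholder path
        .outputMode("append")
        .trigger(availableNow=True)    # process all available input, then stop
        .toTable("bronze_events")      # placeholder target table
    )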
Required Qualifications To Be Successful In This Role
- Azure/AWS plus Databricks and Spark/PySpark experience is mandatory
- A minimum of five years' experience in AWS/Azure data engineering is required for this role
- Experience building streaming pipelines on Azure/AWS
- Experience with data analysis, storage, data pipelines, and orchestration
- Expert Python and SQL skills
- Experience working with big data
- Experience in data engineering for ML projects
- Experience with data pipeline and workflow management tools, e.g. Airflow and Jenkins (see the sketch after this list)
- Experience working with Spark/Hive/HDFS
- Experience building Power BI dashboards with DAX queries, plus Power Apps, Power Automate, and Logic Apps
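For orchestration tools such as Airflow, mentioned in the list above, a minimal DAG might look like the sketch below. It assumes Airflow 2.4 or later and uses placeholder names; the task body is a stand-in for a call into a Databricks or Spark job.

    # Illustrative sketch only: a daily Airflow DAG with a single placeholder task.
    # Assumes Airflow 2.4+ (for the `schedule` argument); names are placeholders.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def run_daily_transform():
        # Placeholder for triggering a Databricks/PySpark job, e.g. via the
        # Databricks Jobs API or a spark-submit wrapper.
        print("running daily transform")


    with DAG(
        dag_id="daily_events_transform",   # placeholder DAG name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        PythonOperator(task_id="transform", python_callable=run_daily_transform)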
CGI is providing a reasonable estimate of the pay range for this role. The determination of this range includes factors such as skill set level, geographic market, experience and training, and licenses and certifications. Compensation decisions depend on the facts and circumstances of each case. A reasonable estimate of the current range is $95,000-$145,000. This role is an existing vacancy.