We are seeking a Professional Data Engineer to join our dynamic team, where you will play
a crucial role in developing and maintaining robust data solutions.
As a Professional Data Engineer, you will collaborate with data science, business analytics, and product development teams to deploy cutting-edge techniques and utilise best-in-class third-party products. The Data team operates with engineering precision, prioritising security, privacy, and regulatory compliance in every initiative. You will contribute to the team's commitment to the latest tools and methodologies, ensuring that our data solutions align with industry best practices.
Tech Stack:
- Languages: SQL and Python
- Pipeline orchestration: Dagster (legacy: Airflow)
- Data stores: Redshift, ClickHouse
- PaaS: AWS (ECS/EKS, DMS, Kinesis, Glue, Athena, S3, and others)
- ETL: Fivetran, with dbt for transformation
- IaC: Terraform (with Terragrunt)
- GenAI: Bedrock, LangChain, LLMs
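For context, a minimal sketch of how the orchestration pieces of this stack fit together: a Dagster asset pulls a raw extract from S3 with Python, and a downstream placeholder asset stands in for the dbt transformation step. The bucket, key, and asset names are hypothetical, not part of this posting.

import boto3
from dagster import Definitions, asset

@asset
def raw_events():
    # Pull the latest raw extract from the data lake into local staging
    # (hypothetical bucket and key).
    s3 = boto3.client("s3")
    s3.download_file("example-data-lake", "events/latest.csv", "/tmp/raw_events.csv")

@asset(deps=[raw_events])
def cleaned_events():
    # Placeholder transformation; in the stack above this step would
    # typically be a dbt model rather than hand-written Python.
    pass

defs = Definitions(assets=[raw_events, cleaned_events])

Materialising these assets from the Dagster UI or CLI runs the S3 pull first and the transformation after it, which is the dependency-driven style of pipeline this role works on.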
Key Responsibilities:
- Develop and maintain ETL pipelines using SQL and/or Python.
- Use tools like Dagster or Airflow for pipeline orchestration.
- Collaborate with cross-functional teams to understand and deliver data requirements.
- Ensure a consistent flow of high-quality data using stream, batch, and CDC processes.
- Use data transformation tools like dbt to prepare datasets that enable business users to self-serve.
- Ensure data quality and consistency in all data stores.
- Monitor and troubleshoot data pipelines for performance and reliability.
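As an illustration of the streaming side of these responsibilities, here is a minimal sketch of reading records from a Kinesis stream with boto3; the stream name and shard ID are hypothetical placeholders.

import boto3

kinesis = boto3.client("kinesis")

# Start reading only records that arrive after the iterator is created.
iterator = kinesis.get_shard_iterator(
    StreamName="example-events",      # hypothetical stream name
    ShardId="shardId-000000000000",   # first shard of the stream
    ShardIteratorType="LATEST",
)["ShardIterator"]

# Fetch one batch; a real consumer would loop on NextShardIterator.
response = kinesis.get_records(ShardIterator=iterator, Limit=100)
for record in response["Records"]:
    print(record["Data"])  # payload bytes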
Requirements:
- 3+ years of experience as a data engineer.
- Proficiency in SQL is a must.
- Experience with modern cloud data warehouse and data lake solutions such as Snowflake, BigQuery, Redshift, or Azure Synapse.
- Experience with ETL/ELT and with batch and streaming data processing pipelines.
- Excellent ability to investigate and troubleshoot data issues, providing fixes and proposing both short and long-term solutions.
- Knowledge of AWS services (like S3, DMS, Glue, Athena, etc.)
- Familiarity with dbt or other data transformation tools.
- Familiarity with GenAI and how to leverage LLMs to resolve engineering challenges.
Other Desired Experience:
- Experience with AWS services and concepts (like EC2, ECS, EKS, VPC, IAM, etc).
- Familiarity with Terraform and Terragrunt.
- Experience with Python.
- Experience with orchestration tools like Dagster, Airflow, AWS Step Functions, etc.
- Experience with pub-sub, queuing, and streaming frameworks such as AWS Kinesis, Kafka, SQS, SNS.
- Familiarity with CI/CD pipelines and automation.