
Data Engineer

Job in Penarth, Vale of Glamorgan, CF64, Wales, UK
Listing for: BLAZECORP PTE. LTD.
Full Time position
Listed on 2026-02-07
Job specializations:
  • IT/Tech
    Data Engineer, Big Data, Cloud Computing
Job Description & How to Apply Below

Job Overview

As a Data Engineer, you will support the Data Engineering team in setting up the Data Lake on the cloud and in implementing a standardized Data Model with a single view of the customer.

You will develop data pipelines for new sources, build data transformations within the Data Lake, implement GraphQL, work with NoSQL databases and CI/CD, and deliver data per business requirements.

Responsibilities
  • Build pipelines to ingest a wide variety of data from multiple sources within the organization, as well as from social media and public data sources.
  • Collaborate with cross-functional teams to source data and make it available for downstream consumption.
  • Work with the team to provide an effective solution design that meets business needs.
  • Maintain regular communication with key stakeholders; understand any key concerns about how the initiative is being delivered and any risks/issues that have not yet been identified or are not being progressed.
  • Ensure dependencies and challenges (risks) are escalated and managed; escalate critical issues to the Sponsor and/or Head of the Data Engineering team.
  • Ensure timelines (milestones, decisions and delivery) are managed and achieved within budget, without compromising quality.
  • Ensure an appropriate and coordinated communications plan is in place for initiative execution and delivery, both internal and external.
  • Ensure final handover of the initiative to business-as-usual processes, carry out a post-implementation review (as necessary) to confirm initiative objectives have been delivered, and feed any lessons learnt into future processes.
Qualifications

Who we are looking for:

Competencies & Personal Traits

  • Expertise in Databricks
  • Experience with at least one Cloud Infra provider (Azure/AWS)
  • Experience in building batch data pipelines with Apache Spark (Spark SQL, DataFrame API) or Hive Query Language (HQL)
  • Experience in building streaming data pipelines using Apache Spark Structured Streaming or Apache Flink on Kafka and a Data Lake
  • Knowledge of NoSQL databases
  • Expertise in Cosmos DB, RESTful APIs and GraphQL
  • Knowledge of big data ETL processing tools, data modelling and data mapping
  • Experience with Hive and Hadoop file formats (Avro / Parquet / ORC)
  • Basic knowledge of scripting (shell / bash)
  • Experience working with multiple data sources, including relational databases (SQL Server / Oracle / DB2 / Netezza), NoSQL / document databases, and flat files
  • Experience with CI/CD tools such as Jenkins, JIRA, Bitbucket, Artifactory, Bamboo and Azure DevOps
  • Basic understanding of DevOps practices using Git version control
  • Ability to debug, fine-tune and optimize large-scale data processing jobs
  • Excellent problem analysis skills
Experience
  • 5+ years (no upper limit) of experience working with enterprise IT applications in cloud platform and big data environments.
Professional Qualifications

Certifications related to Data and Analytics would be an added advantage.
