
Google Cloud Platform Data Engineer

Job in Bristol, Bristol County, BS1, England, UK
Listing for: PA Consulting
Full Time position
Listed on 2025-12-30
Job specializations:
  • Software Development
    Data Engineer
Job Description & How to Apply Below

Company Description

Bringing Ingenuity to Life

We’re an innovation and transformation consultancy that believes in the power of ingenuity to build a positive human future in a technology-driven world. Our diverse teams of experts combine innovative thinking with breakthrough technologies to progress further, faster.

With a global network of FTSE 100 and Fortune 500 clients, we’ll offer you unrivalled opportunities for growth and the freedom to excel. Combining strategies, technologies and innovation, we turn complexity to opportunity and deliver enduring results, enabling you to build a lasting career.

Isn’t it time you joined us?

Job Description

As a Principal GCP Data Engineer, you'll be a true subject matter expert in using the data processing and management capabilities of Google Cloud to develop data-driven solutions for our clients.

You will typically lead a team or the solution delivery effort, demonstrating technical excellence through leading by example. You could be providing technical support, leading an engineering team or working across multiple teams as a subject matter expert who is critical to the success of a large programme of work.

Your team members will look to you as a trusted expert and will expect you to define the end-to-end software development lifecycle in line with modern best practices. As part of your responsibilities, you will be expected to:

  • Develop robust data processing jobs using tools such as Google Cloud Dataflow, Dataproc and BigQuery
  • Design and deliver automated data pipelines that use orchestration tools such as Cloud Composer
  • Design end-to-end solutions and contribute to architecture discussions beyond data processing
  • Own the development process for your team, building strong principles and putting robust methods and patterns in place across architecture, scope, code quality and deployments
  • Shape team behaviour for writing specifications and acceptance criteria, estimating stories, sprint planning and documentation
  • Actively define and evolve PA’s data engineering standards and practices, ensuring we maintain a shared, modern and robust approach
  • Lead and influence technical discussions with client stakeholders to achieve the collective buy-in required to be successful
  • Coach and mentor team members, regardless of seniority, and work with them to build their expertise and understanding
Qualifications

To be successful in this role, you will need to have:

  • Experience delivering and deploying production-ready data processing solutions using BigQuery, Pub/Sub, Dataflow and Dataproc
  • Experience developing end-to-end solutions using batch and streaming frameworks such as Apache Spark and Apache Beam.
  • Expert understanding of when to use a range of data storage technologies including relational/non-relational, document, row-based/columnar data stores, data warehousing and data lakes.
  • Expert understanding of data pipeline patterns and approaches such as event-driven architectures, ETL/ELT, stream processing and data visualisation.
  • Experience working with business owners to translate business requirements into technical specifications and solution designs that satisfy the data requirements of the business.
  • Experience working with metadata management products such as Cloud Data Catalog or Collibra, and data governance tools such as Dataplex
  • Experience in developing solutions on GCP using cloud-native principles and patterns.
  • Experience building data quality alerting and data quarantine solutions to ensure downstream datasets can be trusted.
  • Experience implementing CI/CD pipelines using techniques including git code control/branching, automated tests and automated deployments.
  • Comfortable working in an Agile team using Scrum or Kanban methodologies.
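To make the ETL and data-quality expectations above concrete, here is a minimal, hypothetical sketch in plain Python of the extract–transform–load pattern with a simple quality gate; the in-memory source and dict "warehouse" are stand-ins for real services such as Pub/Sub or BigQuery, not part of this role's actual codebase.

```python
# Hypothetical ETL sketch: extract rows, transform them with a basic
# data-quality gate, and load the clean rows into a target store.
def extract():
    # Stand-in for reading from a real source (e.g. Pub/Sub or BigQuery).
    return [{"id": 1, "amount": "10.5"}, {"id": 2, "amount": "3.0"}]

def transform(rows):
    # Cast types and drop invalid records; in practice bad rows would be
    # routed to a quarantine location for inspection rather than discarded.
    clean = []
    for row in rows:
        try:
            clean.append({"id": row["id"], "amount": float(row["amount"])})
        except (KeyError, ValueError):
            pass
    return clean

def load(rows, target):
    # Write each transformed row to the target keyed by id.
    for row in rows:
        target[row["id"]] = row["amount"]

warehouse = {}
load(transform(extract()), warehouse)
```

In a production pipeline the same three stages would typically be expressed as Dataflow/Beam transforms and orchestrated by a tool such as Cloud Composer.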

In addition to the above, we would be thrilled if you also had:

  • Experience of working on migrations of enterprise scale data platforms including Hadoop and traditional data warehouses
  • An understanding of machine learning model development lifecycle, feature engineering, training and testing
  • Good understanding or hands-on experience of Kafka
  • Experience as a DBA or developer on RDBMS such as PostgreSQL, MySQL, Oracle or SQL Server
  • Experience…