Software Engineer, Data Products

Job in Birmingham, West Midlands, B1, England, UK
Listing for: Yapily
Full Time position
Listed on 2026-02-16
Job specializations:
  • Software Development: Data Engineer
Job Description

Our Mission: Redefining how the world interacts with value.

Our Vision: A world without financial friction.

Our Purpose: To empower everyone to access and move value.

At Yapily, we’re building a powerful, scalable, and secure open banking infrastructure that redefines how the world interacts with value. Our open banking platform powers leading companies such as Adyen, Intuit QuickBooks, and Google. By delivering payment initiation, bank data access, and pre‑built products, we enable businesses to innovate fast and push the boundaries of financial technology.

As an early pioneer of open banking, we’re actively shaping the future of this industry with unrivalled expertise and a relentless focus on innovation.

What we’re looking for

As a Java Software Engineer specializing in Data Products at Yapily, you will play a key role in designing and implementing our next‑generation data platform. Your responsibilities will include building high‑performance data pipelines, billing infrastructure, and self‑serve data infrastructure and APIs. Ultimately, you will develop data systems that enable engineering teams to derive more value from their data. This is an excellent opportunity to deepen your data engineering skills on the GCP stack.

Responsibilities and Requirements
  • Developing and Optimising Data Pipelines:
    Designing, building, and maintaining scalable data ingestion and processing systems to transform raw data into actionable insights.
  • Designing and Maintaining Data Products:
    Developing and maintaining APIs that deliver a seamless data experience for internal and external stakeholders.
  • Managing Databases:
    Working with SQL and NoSQL databases, optimising schema design, and troubleshooting queries to support high‑volume data transactions and improve database performance.
  • Managing Cloud Data Resources:
    Developing and maintaining software products using GCP services such as Pub/Sub, BigQuery, Cloud Storage, and Dataflow.
  • Contributing to Billing Infrastructure:
    Building and maintaining a reliable billing architecture within an event‑driven environment.
  • Collaborating on Problem‑Solving:
    Partnering with Business Intelligence, infrastructure, product managers, and cross‑functional teams to deliver data‑centric solutions that drive business value.
  • Ensuring Quality Assurance:
    Implementing testing, monitoring, and logging practices to ensure the performance and resilience of data systems.
  • Driving Continuous Improvement:
    Participating in code reviews, iterative development, and agile methodologies to enhance product functionality and reliability.
  • Java Development:
    3–5 years of hands‑on experience in Java development, particularly in data‑intensive environments and building data products.
  • Database Management:
    Background in managing both SQL and NoSQL databases.
  • Version Control & CI/CD:
    Knowledge of version control (Git) and CI/CD practices for data pipeline deployment, plus exposure to tools such as Terraform.
  • Data Modelling & Schema Design:
    Familiarity with data modelling and schema design for operational or analytical systems.
  • API & Microservices Architecture:
    Comfortable working with REST APIs and microservices architectures.
  • Real‑time Stream Processing:
    Understanding of real‑time stream processing frameworks (e.g., Pub/Sub, Kafka, Flink, Spark Streaming).
  • BI Tools & Visualisation Platforms:
    Experience supporting BI tools or visualisation platforms (e.g., Looker, Grafana, Power BI).
  • Data Pipelines & APIs:
    Experience in building and maintaining both batch and streaming data pipelines and APIs.
  • ETL/ELT Processes:
    Exposure to ETL/ELT processes in medium‑to‑large scale data environments (experience handling millions of records/events daily is a plus).
Preferred Skills
  • Python:
    Knowledge of Python for data automation and scripting.
  • Containerization:
    Familiarity with tools like Docker and Kubernetes.
  • Workflow/Orchestration Tools:
    Familiarity with tools such as Airflow, Dagster, or Prefect.
  • Cloud-based Data Services:
    Exposure to cloud‑based data services (GCP preferred; AWS/Azure also considered).
  • Data Lineage &…