
Middle Data Engineer; Data, ML, BI & Automation — BigQuery & Looker

Job in Poland
Listing for: Kyriba
Full Time position
Listed on 2025-12-08
Job specializations:
  • IT/Tech
    Data Engineer, Data Analyst
Job Description & How to Apply Below
Position: Middle Data Engineer (Data, ML, BI & Automation) — BigQuery & Looker
Location: Poland

It's fun to work in a company where people truly BELIEVE in what they're doing!

We're committed to bringing passion and customer focus to the business.

About Us

Kyriba is a global leader in liquidity performance that empowers CFOs, Treasurers and IT leaders to connect, protect, forecast and optimize their liquidity. As a secure and scalable SaaS solution, Kyriba brings intelligence and financial automation that enables companies and banks of all sizes to improve their financial performance and increase operational efficiency. Kyriba’s real-time data and AI-powered tools enable its 3,000 customers worldwide to quantify exposures, project cash and liquidity, and take action to protect balance sheets, income statements and cash flows.

Kyriba manages more than 3.5 billion bank transactions and $15 trillion in payments annually and gives customers complete visibility and actionability, so they can optimize and fully harness liquidity across the enterprise and outperform their business strategy. For more information, visit the Kyriba website.

We are seeking a versatile and innovative Data Engineer to design, build, and maintain scalable data pipelines and infrastructure that support analytics, reporting, Machine Learning (ML), Generative AI (GenAI), Business Intelligence (BI), and automation initiatives. The ideal candidate will have practical experience with Google Cloud, BigQuery, and modern data processing, with a keen interest in enabling advanced analytics and automation across the organization.

Key Responsibilities
Data Engineering
  • Design, implement, and optimize robust ELT/ETL pipelines using Google BigQuery, Cloud Storage, and GCP services (e.g., Dataflow, Pub/Sub, Cloud Composer) to support analytics, ML, BI, and automation use cases.
  • Build and maintain data architectures for structured and unstructured data, ensuring data quality, lineage, and security.
  • Integrate data from multiple sources, including external APIs and on-premise systems, to create a unified, well-modeled data environment.
  • Apply BigQuery best practices, including partitioning, clustering, materialized views, and cost/performance optimization.
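As a rough illustration of the partitioning and clustering practices mentioned above, the sketch below generates a BigQuery DDL statement for a date-partitioned, clustered table. The dataset, table, and column names are hypothetical; in a real pipeline the statement would be submitted through the google-cloud-bigquery client rather than printed.

```python
def partitioned_table_ddl(table: str, partition_col: str, cluster_cols: list[str]) -> str:
    """Build a CREATE TABLE statement that partitions by a DATE column
    and clusters by the given columns to cut scan costs on filtered queries.
    Column schema here is a hypothetical example."""
    clustering = ", ".join(cluster_cols)
    return (
        f"CREATE TABLE IF NOT EXISTS {table} (\n"
        f"  transaction_id STRING,\n"
        f"  amount NUMERIC,\n"
        f"  {partition_col} DATE\n"
        f")\n"
        f"PARTITION BY {partition_col}\n"
        f"CLUSTER BY {clustering}"
    )

print(partitioned_table_ddl("analytics.bank_transactions", "txn_date", ["transaction_id"]))
```

Partitioning by the date column lets BigQuery prune partitions on time-range filters, while clustering orders data within each partition so lookups on the cluster keys scan less data.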
Machine Learning & GenAI
  • Collaborate with Data Scientists and ML Engineers to deliver datasets and features for model training, validation, and inference.
  • Develop and operationalize ML/GenAI pipelines, automating data preprocessing, feature engineering, model deployment, and monitoring using Vertex AI and/or BigQuery ML.
  • Support the deployment and maintenance of GenAI models and LLMs in production environments, including prompt/feature pipelines and inference orchestration.
  • Stay current on emerging ML and GenAI technologies and best practices across the GCP ecosystem.
Business Intelligence & Reporting
  • Partner with BI Developers and Analysts to provide clean, reliable, governed data sources for reporting and dashboarding in Looker (semantic modeling in LookML).
  • Enable data access and transformation for self-service BI; ensure BI solutions are scalable, secure, and performant.
  • Integrate advanced analytics and ML/GenAI outputs into BI datasets, Looks, and Explores for actionable insights.
Automation
  • Partner with Automation Specialists to design and implement data-driven automated workflows using MuleSoft and/or GCP services (e.g., Cloud Functions, Workflows, Cloud Run).
  • Develop and maintain automation scripts and integrations to streamline data flows, improve operational efficiency, and reduce manual effort.
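To give a flavor of the automation work described above, here is a minimal sketch of a Pub/Sub-style handler of the kind one might deploy on Cloud Functions: it decodes a message envelope, then validates and normalizes a payment record before loading. The field names and validation rules are hypothetical, not part of the posting.

```python
import base64
import json


def normalize_payment(record: dict) -> dict:
    """Validate required fields and normalize amount/currency for loading.
    Field names are a hypothetical example."""
    for field in ("payment_id", "amount", "currency"):
        if field not in record:
            raise ValueError(f"missing required field: {field}")
    return {
        "payment_id": str(record["payment_id"]),
        "amount": round(float(record["amount"]), 2),
        "currency": record["currency"].upper(),
    }


def handle_pubsub_event(event: dict) -> dict:
    """Decode a Pub/Sub-style envelope (base64-encoded JSON in 'data')
    and normalize its payload."""
    payload = json.loads(base64.b64decode(event["data"]))
    return normalize_payment(payload)
```

Keeping the transformation in a small pure function like `normalize_payment` makes the workflow unit-testable independently of the messaging trigger.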
Governance & Collaboration
  • Implement data governance, security, and compliance best practices across all data assets, leveraging tools such as Dataplex and Data Catalog for lineage and metadata.
  • Document data flows, pipelines, and architectures for technical and business stakeholders.
  • Collaborate across teams (data science, BI, business, IT) to align data engineering efforts with strategic objectives and SLAs.
Requirements
  • Bachelor’s or Master’s degree in Computer Science, Engineering, Information Systems, or related field.
  • Proven experience as a Data Engineer or similar role.
  • Expertise with Google BigQuery and Google Cloud Storage; solid knowledge of GCP data and streaming services (Dataflow/Apache Beam, Pub/Sub,…