Data Engineer
Listed on 2026-02-15
Engineering
Onebridge, a Marlabs Company, is a global AI and Data Analytics Consulting Firm that empowers organizations worldwide to drive better outcomes through data and technology. Since 2005, we have partnered with some of the largest healthcare, life sciences, financial services, and government entities across the globe. We have an exciting opportunity for a highly skilled Data Engineer to join our innovative and dynamic team.
Data Engineer | About You
As a Data Engineer, you are responsible for designing and delivering scalable, production‑grade data solutions that power scientific analysis and decision‑making. You thrive in complex data environments and enjoy building pipelines, integrating heterogeneous sources, and optimizing systems for performance and reliability. You are comfortable working with modern cloud‑native architectures and distributed query engines to support large‑scale datasets. You communicate effectively across technical and non‑technical teams, ensuring data is accurate, accessible, and well‑governed.
You take pride in building infrastructure that enables researchers and analysts to work faster and smarter.
Data Engineer | Responsibilities
- Design, build, and optimize data pipelines and ETL processes to integrate scientific data from numerous heterogeneous sources.
- Develop and maintain Lakehouse architectures on AWS (S3, Glue, Athena) supporting high‑volume, multibillion‑record datasets.
- Build federated query capabilities using distributed engines such as Trino to enable unified access across diverse platforms.
- Implement data harmonization solutions to standardize compound, assay, and experimental data across multiple scientific modalities.
- Optimize performance for PostgreSQL, Iceberg, and other analytical databases using tuning, caching, and query optimization techniques.
- Implement data quality checks, validation frameworks, and governance practices to ensure accurate, compliant, and well‑documented datasets.
Data Engineer | Qualifications
- 5+ years of experience in data engineering, data warehousing, or related roles with a proven track record of production‑grade data pipeline development.
- Strong proficiency in Python and SQL, including experience with libraries such as pandas or PySpark for data manipulation.
- Deep experience working with relational databases (e.g., PostgreSQL, Oracle) and modern cloud data warehouses (e.g., Snowflake, Redshift).
- Hands‑on experience with AWS services including S3, Glue, Athena, Lambda, and RDS, supporting scalable data platforms.
- Strong knowledge of distributed processing tools and query engines such as Spark, Trino, or Presto.
- Proficiency in ETL/ELT development, version control with Git, and experience with visualization tools such as Power BI or Spotfire.