
Data Engineer, Research

Job in Utrecht (3500), Utrecht, Netherlands
Listing for: Genmab A/S
Full Time position
Listed on 2026-01-30
Job specializations:
  • IT/Tech
    Data Engineer, Cloud Computing, Data Science Manager, Data Analyst
Salary/Wage Range or Industry Benchmark: 80,000 – 100,000 EUR yearly
Job Description & How to Apply Below

At Genmab, we are dedicated to building extraordinary® futures, together, by developing antibody products and groundbreaking, knock-your-socks-off (KYSO®) antibody medicines that change lives and the future of cancer treatment and serious diseases. We strive to create, champion and maintain a global workplace where individuals’ unique contributions are valued and drive innovative solutions to meet the needs of our patients, care partners, families and employees.

Our people are compassionate, candid, and purposeful, and our business is innovative and rooted in science. We believe that being proudly authentic and determined to be our best is essential to fulfilling our purpose. Yes, our work is incredibly serious and impactful, but we have big ambitions, bring a ton of care to pursuing them, and have a lot of fun while doing so.

Does this inspire you and feel like a fit? Then we would love to have you join us!

The Role

We are seeking a technically gifted Data Engineer to join our Research Data Engineering team. This role will primarily support discovery-focused research by strengthening our Cloud, DevOps, and Data Platform capabilities. You will work closely with computational scientists, IT partners, and DevOps/Platform engineers to build and operate scalable data infrastructure that enables high-impact biological analyses.

Research at Genmab spans target discovery through early development and is central to our mission to transform the future of cancer treatment. Within this context, the team provides data platforms and engineering solutions across the research landscape. This role will focus on discovery-oriented workflows, helping ensure that research teams can efficiently access, process, and analyze complex biological data in a secure and well-governed environment.

Responsibilities
  • Implement and operate cloud-native data pipelines and platforms supporting discovery research and bioinformatics analytics

  • Support DevOps and infrastructure-as-code initiatives (e.g. Terraform) under guidance from senior engineers

  • Develop, deploy, and maintain data workflows using cloud-native services (e.g. AWS, Databricks); a minimal Python sketch follows this list

  • Automate routine data engineering and operational tasks to improve reliability, scalability, and efficiency

  • Collaborate with data scientists to translate discovery research needs into robust data and platform solutions

  • Assist with monitoring, troubleshooting and performance optimization of research data pipelines

  • Ensure data workflows align with established security, compliance and data governance standards appropriate for research use

  • Contribute to technical documentation and knowledge sharing across the team
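
For illustration only, here is a minimal sketch of the kind of cloud-native workflow step this role involves, assuming AWS S3 as the storage layer; the bucket, keys, and transformation are hypothetical placeholders, not Genmab's actual pipeline:

  import io

  import boto3
  import pandas as pd

  s3 = boto3.client("s3")

  def run_pipeline_step(bucket: str, raw_key: str, processed_key: str) -> None:
      # Pull a raw CSV extract from the landing zone (hypothetical layout).
      raw = s3.get_object(Bucket=bucket, Key=raw_key)["Body"].read()
      df = pd.read_csv(io.BytesIO(raw))

      # Example transformation: drop incomplete records, normalize headers.
      df = df.dropna()
      df.columns = [c.strip().lower() for c in df.columns]

      # Write the curated output as Parquet for downstream analytics.
      buf = io.BytesIO()
      df.to_parquet(buf, index=False)
      s3.put_object(Bucket=bucket, Key=processed_key, Body=buf.getvalue())

  if __name__ == "__main__":
      run_pipeline_step("research-data-lake",
                        "raw/assay_results.csv",
                        "curated/assay_results.parquet")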

Requirements
  • Bachelor’s or Master’s degree in Computer Science, Data Engineering, Bioinformatics, or a related field.

  • 5+ years of experience in data engineering, DevOps, cloud engineering, or a related technical role.

  • Experience with Python & SQL, or strong experience in a related language (e.g. R, C++, Java/Scala) with the ability and willingness to work primarily in Python & SQL.

  • Hands-on experience or strong exposure to cloud platforms and their native services, preferably AWS.

  • Hands-on experience with modern data platforms and DevOps frameworks & concepts, such as:
    - Infrastructure-as-code (e.g. Terraform), CI/CD
    - Analytical platforms (e.g. dbt, Databricks, Spark)

  • Familiarity with data automation, data observability, and data monitoring concepts.

  • Exposure to scientific or bioinformatics data workflows is a plus, but not required.

  • Interest in working within a regulated and compliance-aware environment.

  • Experience with real-time data pipelines (Kafka, Kinesis, Delta Live Tables); see the sketch after this list.

  • Familiarity with containerization and orchestration (Docker, Kubernetes, EKS).

  • Familiarity with the Agile delivery lifecycle, including backlog management and prioritization in JIRA.

  • Hybrid availability: 3 days onsite in our Utrecht office, 2 days remote.
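
For illustration only, a minimal sketch of the kind of real-time consumer referenced above, using the kafka-python client; the topic name and broker address are hypothetical placeholders:

  import json

  from kafka import KafkaConsumer

  consumer = KafkaConsumer(
      "instrument-events",                  # hypothetical topic name
      bootstrap_servers=["localhost:9092"],
      value_deserializer=lambda v: json.loads(v.decode("utf-8")),
      auto_offset_reset="earliest",
  )

  for message in consumer:
      # In production this payload would land in a governed table
      # (e.g. a Delta Live Tables bronze layer); here we just print it.
      print(message.value)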

About You
  • You are genuinely passionate about our purpose

  • You bring precision and excellence to all that you do

  • You believe in our rooted-in-science approach to problem solving

  • You are a generous collaborator who can work in teams with a broad spectrum of backgrounds

  • You take pride in…
