Data Engineer/GoLang Developer
Listed on 2026-01-03
Software Development
Software Engineer, Data Engineer
The mission of Bayer Crop Science is centered on developing agricultural solutions for a sustainable future that will include a global population projected to eclipse 9.6 billion by 2050. We approach agriculture holistically, looking across a broad range of solutions from using biotechnology and plant breeding to produce the best possible seeds, to advanced predictive and prescriptive analytics designed to select the best possible crop system for every acre.
To make this possible, Bayer collects terabytes of data across all aspects of its operations, from genome sequencing, crop field trials, manufacturing, supply chain, and financial transactions to everything in between. There is an enormous need and potential here to do something that has never been done before. We need great people to help transform these complex scientific datasets into innovative software that is deployed across the pipeline, accelerating the pace and improving the quality of crop system development decisions.
What you will do:
- Be a critical senior member of a data engineering team focused on creating distributed analysis capabilities around a large variety of datasets.
- Take pride in software craftsmanship, apply a deep knowledge of algorithms and data structures to continuously improve and innovate.
- Work with other top-level talent solving a wide range of complex and unique challenges that have real world impact.
- Explore relevant technology stacks to find the best fit for each dataset.
- Pursue opportunities to present our work at relevant technical conferences and publications, including Google Cloud Next 2019, Graph Connect 2015, and the Google Cloud Blog.
- Project your talent into relevant projects. Strength of ideas trumps position on an org chart.
What you bring:
- At least 7 years of experience in software engineering.
- At least 2 years of experience with Go.
- Proven experience (2 years) building and maintaining data-intensive APIs using a RESTful approach.
- Experience with stream processing using Apache Kafka.
- Comfort with unit testing and test-driven development methodologies.
- Familiarity with creating and maintaining containerized application deployments with a platform like Docker.
- A proven ability to build and maintain cloud-based infrastructure on a major cloud provider such as AWS, Azure, or Google Cloud Platform.
- Experience with data modeling for large-scale databases, either relational or NoSQL.
Preferred qualifications:
- Experience with protocol buffers and gRPC.
- Experience with Google Cloud Platform, Apache Beam and/or Google Cloud Dataflow, Google Kubernetes Engine or Kubernetes.
- Experience working with scientific datasets, or a background in the application of quantitative science to business problems.
- Bioinformatics experience, especially large-scale storage and data mining of variant data, variant annotation, and genotype-to-phenotype correlation.
Location: Creve Coeur, MO, or remote from another location.
Travel required: No.
Work location for this assignment: Remote.
Shift start time: 08:00 AM.
Driving required: No.
Benefits (employee contribution):
- Health insurance
- Health savings account
- Dental insurance
- Vision insurance
- Flexible spending accounts
- Life insurance
- Retirement plan
All qualified applicants will receive consideration for employment without regard to age, race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.