
Senior Scientific Software Engineer - HPC

Job in Enders, Chase County, Nebraska, 69027, USA
Listing for: EMD
Full-/part-time position
Listed on 2025-12-17
Job specializations:
  • IT/Tech
    AI Engineer, Cloud Computing
Job Description
Position: Senior Scientific Software Engineer - HPC (all genders, full-/part-time)
Location: Enders

Work Your Magic with us!

Ready to explore, break barriers, and discover more? We know you’ve got big plans – so do we! Our colleagues across the globe love innovating with science and technology to enrich people’s lives with our solutions in Healthcare, Life Science, and Electronics. Together, we dream big and are passionate about caring for our rich mix of people, customers, patients, and planet.

That's why we are always looking for curious minds that see themselves imagining the unimaginable with us.

Your Role:

Design, build, and operate the software and platforms that power our high‑performance and AI workloads. You will own core services, developer tooling, and workflow orchestration that enable researchers and scientific application engineers to deliver faster, more reliable results across on‑prem and cloud high‑performance compute infrastructure. This role complements our existing Scientific Applications Engineers by focusing on software engineering, DevOps, and platform capabilities to enhance how users interact with the HPC cluster.

Scientific Computing sits within Data & AI Products, part of our Data & AI organization. We lead oneHPC, the modernization of our company’s compute stack across Life Science, Healthcare, and Electronics. The team operates an integrated on‑prem + cloud platform used for simulation, data‑driven research, and advanced machine learning.

Key Responsibilities:
  • Own the self-service platform (FastAPI/Step Functions backend, React portal, and CLI workflows) to let researchers self‑onboard, manage projects, and leverage LDAP/Slurm/VAST integrations (a minimal illustrative sketch follows this list).
  • Implement Infrastructure as Code and configuration management for hybrid HPC + AWS environments.
  • Engineer container strategies for CPU/GPU workloads, including base images, CUDA/NCCL stacks, and reproducible builds.
  • Extend internal services, APIs, and SDKs (Python/TypeScript) that provide standardized access to HPC schedulers, data stores, and AI/GPU resources.
  • Design and implement CI/CD pipelines, artifact/version management, and automated testing for scientific software and internal tools.
  • Model and operate data backends: SQL and NoSQL, including schema design and migrations.
  • Ensure transparency, monitoring, and performance insights for platform services and batch workloads (Prometheus/Grafana, structured logging, alerting, SLOs).
  • Partner with Scientific Applications Engineers and domain scientists to productionize ML/AI and simulation workflows; provide code reviews and documentation.
  • Develop training sessions, workshops, and onboarding material that help users make effective use of the HPC and cloud resources.
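
To give a concrete flavour of the self-service platform work described in the first bullet, here is a minimal FastAPI sketch of an onboarding endpoint. It is illustrative only: the route, request model, and provisioning step are hypothetical, and the real backend's LDAP/Slurm/VAST and Step Functions integrations are not shown.

  # Illustrative sketch of a self-service onboarding endpoint (hypothetical names).
  # A real backend would call LDAP, sacctmgr / the Slurm REST API, or hand off to
  # a Step Functions workflow instead of the placeholder logic below.
  from fastapi import FastAPI, HTTPException
  from pydantic import BaseModel

  app = FastAPI(title="HPC self-service portal (illustrative)")

  class ProjectRequest(BaseModel):
      project_name: str
      owner_uid: str      # LDAP uid expected to own the project
      gpu_quota: int = 0  # requested GPU allocation, if any

  class ProjectResponse(BaseModel):
      project_name: str
      slurm_account: str
      status: str

  @app.post("/projects", response_model=ProjectResponse)
  def create_project(req: ProjectRequest) -> ProjectResponse:
      """Validate the request, then start provisioning a project and Slurm account."""
      if not req.project_name.isidentifier():
          raise HTTPException(status_code=422, detail="invalid project name")
      # Placeholder for the real provisioning step (LDAP group, Slurm account, VAST quota).
      return ProjectResponse(
          project_name=req.project_name,
          slurm_account=f"proj_{req.project_name}",
          status="provisioning",
      )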
Minimum qualifications:
  • Degree in STEM, Software Engineering, or related field; equivalent practical experience accepted.
  • 3+ years building and operating production software/platforms; strong software engineering fundamentals and code quality practices.
  • Expert in Python; proficient in TypeScript.
  • Proven DevOps capability: Git‑based workflows, automated testing, CI/CD (GitHub Actions), container registries, package publishing.
  • AWS proficiency: EC2/EKS/Batch, S3/EFS/FSx for Lustre, VPC/IAM, CloudWatch; infrastructure as code with Terraform or CloudFormation.
  • Proficient in working on Linux-based clusters.
  • Experience integrating with HPC schedulers (Slurm) and/or Kubernetes for batch/ML workloads (see the sketch directly below).
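
To make the scheduler-integration point above more concrete, the sketch below drives Slurm through its command-line tools from Python. It assumes sbatch and squeue are on the PATH and that a batch script named train.sbatch exists; both are assumptions for illustration, and production code would more likely use the Slurm REST API or pyslurm with fuller error handling.

  # Minimal Slurm integration via the CLI (illustrative; assumes sbatch/squeue on PATH).
  import subprocess

  def submit(script_path: str) -> str:
      """Submit a batch script; --parsable makes sbatch print only the job ID."""
      out = subprocess.run(
          ["sbatch", "--parsable", script_path],
          check=True, capture_output=True, text=True,
      )
      return out.stdout.strip().split(";")[0]  # drop an optional ";cluster" suffix

  def state(job_id: str) -> str:
      """Return the job's Slurm state, or UNKNOWN once it has left the queue."""
      out = subprocess.run(
          ["squeue", "-j", job_id, "-h", "-o", "%T"],
          capture_output=True, text=True,
      )
      return out.stdout.strip() or "UNKNOWN"

  if __name__ == "__main__":
      job_id = submit("train.sbatch")  # hypothetical script name
      print(job_id, state(job_id))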
Preferred qualifications:
  • Experience with AI/ML infrastructure: GPU cluster operations, model training at scale (PyTorch/TensorFlow), experiment tracking (MLflow), model serving and artifact storage (an MLflow sketch follows this list).
  • Data movement at scale for science: object storage strategies, parallel file systems, data transfer tooling (e.g., Globus), checksum/lineage practices.
  • Exposure to scientific domains or packages (e.g., ORCA, VASP, GROMACS, LAMMPS, AlphaFold, RFdiffusion, EDEM, STAR‑CCM+).
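
As a small illustration of the experiment-tracking item in the first bullet above, the sketch below logs parameters and metrics through MLflow's Python API. The local file-store tracking URI, experiment name, and metric values are placeholders, not a description of this team's actual setup.

  # Illustrative MLflow experiment tracking; names and values are placeholders.
  import mlflow

  mlflow.set_tracking_uri("file:./mlruns")   # local file store; a shared deployment would use a tracking server
  mlflow.set_experiment("demo-experiment")   # hypothetical experiment name

  with mlflow.start_run(run_name="baseline"):
      mlflow.log_param("learning_rate", 1e-3)
      for epoch in range(3):
          # Placeholder metric standing in for real validation loss.
          mlflow.log_metric("val_loss", 1.0 / (epoch + 1), step=epoch)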
What we offer:

We are curious minds that come from a broad range of backgrounds, perspectives, and life experiences. We believe that this variety drives excellence and innovation, strengthening our ability to lead in science and technology. We are committed to creating access and opportunities for all to develop and grow at their own pace. Join us in building a culture of inclusion and belonging that impacts millions and empowers everyone to work their magic and champion human progress!

Apply now and become a part of a team that is dedicated to Sparking Discovery and Elevating Humanity!
