
Software Engineer II-III

Remote / Online - Candidates ideally in Albuquerque, Bernalillo County, New Mexico, 87101, USA
Listing for: Associated Universities, Inc.
Remote/Work from Home position
Listed on 2026-02-10
Job specializations:
  • Software Development
    Software Engineer, AI Engineer

Locations: 155 Observatory Rd, Green Bank, WV 24944, USA • 800 Bradbury Dr SE, Albuquerque, NM 87106, USA

Job Description

Posted Friday, February 6, 2026 at 7:00 AM | Expires Wednesday, April 1, 2026 at 5:59 AM

Position Description

Position Summary

The National Radio Astronomy Observatory (NRAO) is an exciting and prestigious research facility that plays a vital role in the study of the universe. The Observatory operates a variety of radio telescopes that span the globe, including the famous Very Large Array (VLA) in New Mexico, the Green Bank Telescope in West Virginia, the Very Long Baseline Array (VLBA) across North America, and the Atacama Large Millimeter/submillimeter Array (ALMA) in Chile.

These telescopes are among the most advanced in the world, allowing astronomers to explore the universe in unprecedented detail.

The Next Generation Very Large Array (ngVLA) is a transformative astronomical observatory designed to deliver science-ready data products to a broad community of users. The ngVLA is in the development phase of the project lifecycle. The computing resources needed to support data processing for ngVLA operations are significantly larger and more complex than those of any existing NRAO facility. Therefore, NRAO has partnered with the Texas Advanced Computing Center (TACC) to design and prototype the technical infrastructure and data processing software to support ngVLA operations.

At NRAO, we are recruiting an experienced Scientific Software Engineer to design, implement, optimize, and maintain scientific applications and data-processing software executed on large-scale high-performance computing (HPC) systems. In this role, you will prototype, develop, benchmark, and optimize the Radio Astronomy Data Processing Software (RADPS) in collaboration with TACC.

The role requires demonstrated proficiency in Python and C++, experience with parallel and distributed computing frameworks, and the ability to collaborate closely with domain scientists, systems engineers, and HPC support personnel. The successful candidate will contribute to the full software lifecycle -- from requirements analysis and algorithmic design through implementation, testing, optimization, and long-term maintainability -- within a performance-critical, research-driven environment.

This position will ideally be based in Albuquerque, NM, or Socorro, NM, but could also be based at our Charlottesville, VA, or Green Bank, WV locations. For well-qualified candidates, a remote work arrangement may be considered.

What You Will Be Doing

The primary focus of this position will be prototyping, profiling, and optimizing cutting-edge software for RADPS within the Data Processing group. Immediate activities may include (but are not limited to):

Software Design and Development
  • Develop high-performance scientific software in C++ and Python, including numerical algorithms, data-analysis pipelines, and simulation components.
  • Implement scalable solutions leveraging modern parallel programming techniques (MPI, OpenMP, CUDA/HIP, OpenACC); a brief illustrative sketch follows this list.
  • Build Python interfaces, bindings, and workflow tooling around high-performance C++ cores.
  • Design modular, maintainable, and testable codebases following established software engineering best practices.
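
To make the parallel-programming item above concrete, here is a minimal, self-contained MPI sketch in Python using mpi4py. It is only an illustration of the style of work, not code from RADPS; the array size and the global-sum reduction are assumptions chosen for brevity.

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank generates and processes its own slice of a (hypothetical) dataset.
n_total = 1_000_000
n_local = n_total // size
local = np.random.default_rng(rank).random(n_local)

comm.Barrier()                                  # align all ranks before timing
t0 = MPI.Wtime()
local_sum = local.sum()                         # purely local work
total = comm.allreduce(local_sum, op=MPI.SUM)   # one collective combines the partial results
elapsed = MPI.Wtime() - t0

if rank == 0:
    print(f"{size} ranks, global sum = {total:.3f}, {elapsed * 1e3:.2f} ms")

Run with, for example, mpiexec -n 4 python global_sum.py; comparing timings across rank counts is the simplest form of the scaling measurement this role would extend to real pipeline kernels.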
Performance Engineering and Optimization
  • Profile, benchmark, and optimize HPC applications for multi-core, many-core, GPU-accelerated, and distributed-memory systems (see the timing sketch after this list).
  • Improve algorithmic efficiency, memory usage, I/O patterns, and data-movement behavior to achieve target throughput and scalability.
  • Work with HPC system engineers to tune application performance for specific architectures (e.g., Slurm-managed clusters or other supercomputing platforms).
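
A hedged sketch of the benchmark-then-optimize loop described above: the "phase rotation" kernel, the data sizes, and the function names are illustrative assumptions, but the pattern of timing a naive implementation against a vectorized one and checking that the two agree is representative.

import time
import numpy as np

def phase_rotate_loop(vis, phases):
    # Naive per-element loop: a typical unoptimized starting point.
    out = np.empty_like(vis)
    for i in range(vis.size):
        out[i] = vis[i] * np.exp(1j * phases[i])
    return out

def phase_rotate_vectorized(vis, phases):
    # Same arithmetic expressed as whole-array NumPy operations.
    return vis * np.exp(1j * phases)

rng = np.random.default_rng(0)
n = 200_000
vis = rng.random(n) + 1j * rng.random(n)
phases = rng.random(n)

for fn in (phase_rotate_loop, phase_rotate_vectorized):
    t0 = time.perf_counter()
    fn(vis, phases)
    print(f"{fn.__name__}: {(time.perf_counter() - t0) * 1e3:.1f} ms")

# The faster version only replaces the slower one once the results agree.
assert np.allclose(phase_rotate_loop(vis[:1000], phases[:1000]),
                   phase_rotate_vectorized(vis[:1000], phases[:1000]))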
Scientific Workflow and Data Pipeline Development
  • Create robust, automated workflows for large-scale simulations, experiments, or data-processing tasks.
  • Integrate software with HPC schedulers, containerization technologies (e.g., Singularity/Apptainer), and workflow engines; a submission sketch follows this list.
  • Implement data ingestion, transformation, and storage strategies for multi-terabyte to petabyte-scale datasets.
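
As one illustration of the scheduler and container integration listed above, the sketch below writes a Slurm batch script that runs a pipeline stage inside an Apptainer container and submits it with sbatch. Every concrete name here (job name, container image, processing script, data path, resource requests) is a hypothetical placeholder rather than anything from the posting.

import subprocess
from pathlib import Path

# All names below (job name, image, script, data path) are placeholders.
job_script = """\
#!/bin/bash
#SBATCH --job-name=radps-proto
#SBATCH --nodes=1
#SBATCH --ntasks=32
#SBATCH --time=02:00:00
#SBATCH --output=radps-proto-%j.out

# Run the pipeline stage inside an Apptainer container for reproducibility.
apptainer exec pipeline.sif python process_chunk.py --input /scratch/data/chunk_000
"""

script_path = Path("submit_radps.sh")
script_path.write_text(job_script)

# Hand the script to the Slurm scheduler; sbatch prints the assigned job ID.
result = subprocess.run(["sbatch", str(script_path)],
                        capture_output=True, text=True, check=True)
print(result.stdout.strip())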
Collaboration and Technical Leadership
  • Collaborate with cross-disciplinary teams—scientists, data analysts,…