Software Engineer - Simulation Workbench
Listed on 2026-02-13
About Us
PhysicsX is a deep-tech company with roots in numerical physics and Formula One, dedicated to accelerating hardware innovation at the speed of software. We are building an AI-driven simulation software stack for engineering and manufacturing across advanced industries. By enabling high-fidelity, multi-physics simulation through AI inference across the entire engineering lifecycle, PhysicsX unlocks new levels of optimization and automation in design, manufacturing, and operations, empowering engineers to push the boundaries of possibility. Our customers include leading innovators in Aerospace & Defense, Materials, Energy, Semiconductors, and Automotive.
PhysicsX is developing a platform used by Data Scientists and Simulation Engineers to build, train, and deploy Deep Physics Models. The core of this platform relies on handling massive volumes of complex simulation data, enabling high-fidelity multi-physics simulation through AI inference.
We are looking for a Software Engineer with a strong background in Data Engineering to join our team. You will not just be moving data from A to B; you will be architecting and building the distributed systems, services, and APIs that form the backbone of our data strategy. You will bridge the gap between complex physical simulations and modern data infrastructure, implementing storage solutions for AI/ML pipelines and creating the analytical layers that allow our engineers to visualize and understand their results.
This is a role for a builder who loves writing robust software as much as they love designing efficient data architectures.
Responsibilities
- Design and build scalable distributed systems, microservices, and APIs focused on storing, processing, and serving high-dimensional simulation data.
- Create robust, automated data and analytical pipelines that ingest, process, and transform multimodal data from physics simulations to feed our AI training loops and inference engines.
- Implement and integrate with modern Data Warehouses and Data Lakes (or Data Lakehouses) to ensure our data is organized, accessible, and queryable at scale.
- Build internal BI systems and complex scientific data visualizations that allow researchers and engineers to interact intuitively with massive datasets and simulation results.
- Implement high-performance storage solutions capable of handling the unique demands of complex simulations and deep learning workloads.
- Drive best practices in software engineering across the team, including CI/CD, automated testing, and infrastructure-as-code, ensuring our data systems are as reliable as they are powerful.
- Own your work from architectural design and prototyping through to deployment and maintenance in a fast-paced, agile environment.
Requirements
- A passion for the evolving craft of software engineering and for fostering a culture of excellence in that craft.
- A strong foundation in software engineering (algorithms, data structures, system design) with a passion for writing clean, maintainable, and testable code (strong command of Golang and Python).
- Proven experience building distributed systems and big data processing pipelines in a production environment, moving beyond simple scripting to robust engineering solutions (e.g., Databricks/Delta Lake, Snowflake, BigQuery), including practical experience integrating with and architecting around Data Warehouses and Data Lakes.
- Experience building custom data visualizations or integrating complex BI systems to expose data insights to end-users.
- A proactive mindset with the ability to diagnose complex performance bottlenecks in data processing and storage systems.
- Excellent communication skills to discuss data needs with research scientists and translate them into technical specifications.
- Polyglot Programming Mastery: deep expertise in Python combined with mastery of high-performance compiled languages such as Golang, C++, or Rust.
- Big Data Scale: real-world experience designing and maintaining big data systems, with a proven track record of running complex analytics on massive datasets in production.
- Multimodal Data Exposure: experience working with multimodal databases or…