Senior Test Engineer
Listed on 2026-02-28
IT/Tech
Axelera AI is not your regular deep-tech startup. We are creating the next-generation AI platform to support anyone who wants to help advance humanity and improve the world around us.
In just four years, we have raised a total of $120 million and built a world-class team of 220+ employees (including 49+ PhDs with more than 40,000 citations), with offices in Belgium, France, Switzerland, Italy, the UK, and the Netherlands, headquartered at the High Tech Campus in Eindhoven. We have also launched our Metis™ AI Platform, which achieves a 3-5x increase in efficiency and performance, and have built a strong business pipeline exceeding $100 million.
Our unwavering commitment to innovation has firmly established us as a global industry pioneer. Are you up for the challenge?
As a Senior Test Engineer at Axelera AI, you will own end-to-end quality for our AI software stack running on the Metis inference accelerator. You will work directly with hardware and software engineers to validate functional correctness, inference performance, and model accuracy across multiple deployment platforms. This is a high-impact, hands-on engineering role, not a process management position. You will define the testing strategy, build automation infrastructure, and serve as the quality signal that lets the team ship product with confidence.
Responsibilities

Test Strategy & Ownership:
- Define and own the end-to-end test strategy for the Axelera AI software stack, spanning functional, performance, accuracy, and regression testing.
- Architect and maintain test frameworks that scale across multiple hardware platforms and software stack versions.
- Drive decisions on tooling adoption and testing infrastructure with limited supervision.
Hardware-Software Co-Validation:
- Design and execute test plans that validate software behaviour on the Metis AIPU, including driver interfaces, runtime correctness, and hardware-software integration points.
- Develop tests that catch issues arising at the boundary between compiler output, runtime scheduling, and accelerator execution.
Performance & Accuracy Testing:
- Build and maintain benchmarks for latency, throughput, and power efficiency across supported models and hardware configurations.
- Validate model accuracy after compilation and quantization, ensuring fidelity to reference implementations meets defined tolerances.
- Identify and investigate accuracy-vs-performance trade-offs and report findings clearly to development teams.
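The quantization-validation work described above can be sketched as a simple tolerance check: compare the quantized model's outputs against a float reference and fail if the error exceeds a defined budget. This is a minimal illustration, not Axelera's actual API; the function names, values, and tolerance are hypothetical.

```python
# Hypothetical sketch of post-quantization accuracy validation.
# All names and thresholds here are illustrative, not Axelera's real tooling.

def max_abs_error(reference, candidate):
    """Largest element-wise absolute difference between two output vectors."""
    return max(abs(r - c) for r, c in zip(reference, candidate))

def within_tolerance(reference, candidate, atol=1e-2):
    """True if the quantized output stays within the accuracy budget."""
    return max_abs_error(reference, candidate) <= atol

# Example: float32 reference logits vs. simulated int8-dequantized logits.
reference = [0.1234, -0.5678, 0.9012]
quantized = [0.1250, -0.5700, 0.9000]

assert within_tolerance(reference, quantized, atol=5e-3)
```

In practice the tolerance would be defined per model and per metric (e.g. top-1 accuracy delta rather than raw logit error), but the pass/fail gate takes this general shape.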
Automation & CI/CD:
- Implement and maintain automated test pipelines integrated into CI/CD workflows (e.g., GitLab CI).
- Write robust test scripts in Python and Shell; champion automated regression coverage as a first-class deliverable.
- Monitor test infrastructure health and drive improvements in test reliability and execution speed.
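A regression pipeline of the kind described above typically runs each supported model against pass/fail budgets inside a CI job. The sketch below is purely illustrative: the model names, latency budget, and `fake_inference` stand-in are hypothetical, not part of the Metis stack.

```python
# Hypothetical sketch of a CI regression gate over a suite of models.
# fake_inference is a stand-in for a real inference call on the accelerator.

import time

def fake_inference(model_name):
    """Placeholder for running one model on the device; returns a result dict."""
    time.sleep(0.01)  # simulate a short inference run
    return {"model": model_name, "ok": True}

REGRESSION_SUITE = ["resnet50", "yolov5s"]  # illustrative model names

def run_regression(models, budget_s=0.5):
    """Run each model once; record pass/fail against a latency budget."""
    results = {}
    for name in models:
        start = time.perf_counter()
        out = fake_inference(name)
        elapsed = time.perf_counter() - start
        results[name] = {
            "passed": out["ok"] and elapsed <= budget_s,
            "latency_s": elapsed,
        }
    return results

results = run_regression(REGRESSION_SUITE)
assert all(r["passed"] for r in results.values())
```

In a real GitLab CI setup, a script like this would run per pipeline stage and return a nonzero exit code on failure so the job is marked red.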
Defect Management & Collaboration:
- Analyze, reproduce, and clearly document software defects; work closely with developers to prioritize and track resolution.
- Participate in design reviews and architecture discussions to raise quality concerns early in the development cycle.
- Provide technical mentorship to junior SQA engineers and contribute to team processes and best practices.

Requirements
- Bachelor’s or Master’s degree in Computer Science, Software Engineering, Electrical Engineering, or a related field.
- 5+ years of hands-on software quality assurance experience, with at least 2 years in a senior or lead capacity.
- Strong proficiency in Python for test automation; working knowledge of C/C++ and Shell scripting.
- Proven experience building and maintaining CI/CD-integrated test pipelines (Git Lab CI, Jenkins, or equivalent).
- Solid understanding of software testing methodologies: unit, integration, system, regression, performance, and accuracy testing.
- Strong debugging and root-cause analysis skills across complex, multi-layered software systems.
- Familiarity with Git-based development workflows and version control best practices.
- Excellent written and verbal communication skills; ability to translate technical findings into clear defect reports and stakeholder updates.
- Experience testing AI/ML systems: model compilation pipelines, quantization validation, or inference…