ML Engineer, Perception & State Estimation
Listed on 2026-01-12
IT/Tech
AI Engineer, Machine Learning / ML Engineer, Robotics
Department
Perception
Job Type
Full-time
Location
Cambridge
Modality
Hybrid
About the Role
We are looking for a Machine Learning Engineer to join our Perception Team. You will build the core perception and reasoning engine for our flagship multi-agent system, architecting the software that transforms raw, noisy sensor data into a rich, symbolic world model. The team develops and implements the algorithms for managing perception inputs and maintaining a Knowledge Manager built on those inputs.
Who we are
You’ll form part of the Perception Team. This team unlocks the mastermind’s understanding of, and reasoning about, its environment.
This is a hybrid position; the successful candidate will be expected to work from the office at least three days a week.
What you’ll get to do
Multi-Sensor Fusion:
Design and implement algorithms that manage the fusion of heterogeneous sensor streams (e.g., EO/IR, LiDAR, and neuromorphic cameras) into a single, coherent picture of the world.
Object Recognition:
Build and deploy models for real-time object detection, classification, and tracking, transforming raw data into structured, classified objects with unique IDs and states.
World Modeling:
Develop the Knowledge Manager, the central repository for abstract and symbolic world knowledge. You will be responsible for inferring the logical relationships between objects and agents.
Probabilistic State Estimation:
Implement and maintain the belief state over the environment, a core component of the Knowledge Manager (a filtering sketch follows this list).
Goal Inference:
Create the logic that translates high-level user commands into formal, predicate-based goal states (a goal-representation sketch follows this list).
API Collaboration:
Work closely with the Systems and Behaviour teams to define and refine APIs.
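To give a flavour of the state-estimation work, here is a minimal, illustrative sketch of a belief update for a single tracked object. It is not taken from our codebase: the constant-velocity model, state layout, and noise values are assumptions chosen for the example, and a production pipeline would typically use an EKF or UKF with nonlinear, multi-sensor measurement models. The linear version only shows the predict/update structure.

```python
import numpy as np

# Illustrative only: a linear Kalman filter tracking [x, y, vx, vy] for one
# object under a constant-velocity motion model, updated with a noisy 2-D
# position measurement (e.g. a fused detection). All matrices are assumptions.
dt = 0.1                                     # assumed sensor period (s)
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)   # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)    # we observe position only
Q = np.eye(4) * 0.01                         # process noise (assumed)
R = np.eye(2) * 0.5                          # measurement noise (assumed)

def predict(x, P):
    """Propagate the belief one step under the motion model."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Fuse a position measurement z = [x, y] into the belief."""
    y = z - H @ x                            # innovation
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Example: initialise a belief and fuse one measurement.
x0, P0 = np.zeros(4), np.eye(4)
x1, P1 = predict(x0, P0)
x2, P2 = update(x1, P1, np.array([1.0, 0.5]))
```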
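The goal-inference work turns a high-level command into something machine-checkable. As a hypothetical illustration (the predicate names and the command-to-goal mapping below are invented for this example, not part of our stack), a predicate-based goal state can be represented as a set of ground atoms that the planner must make true:

```python
from dataclasses import dataclass

# Hypothetical predicate-based goal representation: a goal is a set of ground
# atoms, and it is satisfied when every atom appears in the current world state.

@dataclass(frozen=True)
class Atom:
    predicate: str
    args: tuple[str, ...]

def goal_satisfied(goal: frozenset[Atom], state: frozenset[Atom]) -> bool:
    """A conjunctive goal holds when all of its atoms are in the state."""
    return goal <= state

# "Deliver box_3 to zone_A" might (hypothetically) compile to:
goal = frozenset({
    Atom("at", ("box_3", "zone_A")),
    Atom("clear", ("gripper_1",)),
})

state = frozenset({
    Atom("at", ("box_3", "zone_A")),
    Atom("clear", ("gripper_1",)),
    Atom("at", ("agent_1", "zone_A")),
})

assert goal_satisfied(goal, state)
```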
What you’ll need
A strong theoretical foundation and practical experience in probabilistic machine learning (e.g., Bayesian inference, Gaussian processes, state estimation filters like EKFs/UKFs).
Demonstrable experience with modern ML frameworks (PyTorch preferred) and computer vision libraries (OpenCV) applied to real-world sensor data.
Hands-on experience with sensor fusion techniques for combining data from sources like cameras and LiDAR (a projection-based sketch follows this list).
Production-quality coding skills in both Python and C++.
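As a rough illustration of the camera–LiDAR fusion skills described above, projecting LiDAR points into an image frame reduces to a calibrated pinhole projection. The calibration matrices below are placeholders, and a real pipeline would also handle lens distortion, time synchronisation, and occlusion:

```python
import numpy as np

def project_lidar_to_image(points_lidar, K, R, t):
    """Project (N, 3) LiDAR points into pixel coordinates.

    K: (3, 3) camera intrinsics; R, t: LiDAR-to-camera extrinsics.
    Returns (M, 2) pixel coordinates and the mask of points kept.
    """
    pts_cam = points_lidar @ R.T + t          # transform into the camera frame
    in_front = pts_cam[:, 2] > 0.1            # keep points in front of the camera
    pts_cam = pts_cam[in_front]
    uv = pts_cam @ K.T                        # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]               # perspective divide
    return uv, in_front

# Placeholder calibration (assumed values for illustration only).
K = np.array([[700.0, 0.0, 640.0],
              [0.0, 700.0, 360.0],
              [0.0,   0.0,   1.0]])
R, t = np.eye(3), np.array([0.0, -0.1, -0.05])
points = np.random.default_rng(0).uniform(-5, 5, size=(100, 3)) + np.array([0, 0, 10.0])
pixels, mask = project_lidar_to_image(points, K, R, t)
```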
What will set you apart
Proven experience developing and deploying software for real-world robotic systems (e.g., UAVs, UGVs).
Deep expertise in sensor fusion techniques, particularly with state estimation filters like EKF, for tracking and localization.
Hands-on experience with the Robot Operating System (ROS 2) and an understanding of the underlying DDS middleware and its QoS settings (a QoS-profile sketch follows this list).
Practical experience in multi-agent reinforcement learning (MARL), planning under uncertainty, or collaborative robotics.
Familiarity with high-fidelity simulation environments for robotics, especially NVIDIA Isaac Lab.
Familiarity with the challenges of real-time systems, including managing latency, ensuring deterministic timing (e.g., PTP), and maintaining performance on degraded communication links.
Experience with knowledge representation, logical inference, or symbolic reasoning systems.
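For candidates less familiar with the DDS side of ROS 2, the QoS settings mentioned above control trade-offs such as reliability versus latency on sensor topics. Here is a minimal rclpy sketch; the topic name and message type are assumptions for illustration:

```python
import rclpy
from rclpy.node import Node
from rclpy.qos import QoSProfile, QoSReliabilityPolicy, QoSHistoryPolicy
from sensor_msgs.msg import PointCloud2

# Best-effort, keep-last QoS is a common choice for high-rate sensor streams:
# dropping a stale LiDAR frame is usually preferable to blocking on retransmits.
sensor_qos = QoSProfile(
    reliability=QoSReliabilityPolicy.BEST_EFFORT,
    history=QoSHistoryPolicy.KEEP_LAST,
    depth=5,
)

class LidarListener(Node):
    def __init__(self):
        super().__init__("lidar_listener")
        # Topic name is an assumption; the subscription QoS must be compatible
        # with the publisher's QoS for data to flow.
        self.create_subscription(PointCloud2, "/lidar/points", self.on_cloud, sensor_qos)

    def on_cloud(self, msg: PointCloud2):
        self.get_logger().info(f"received cloud with {msg.width * msg.height} points")

def main():
    rclpy.init()
    rclpy.spin(LidarListener())

if __name__ == "__main__":
    main()
```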