Senior Software Engineer - AI Eval and Safety
Listed on 2026-02-14
Software Development
AI Engineer, Machine Learning / ML Engineer
About the Job
Do you want to help shape the future of AI by building robust infrastructure and tools for developing trustworthy large language models and agentic workflows? We're seeking a software engineer who combines strong systems engineering skills with a passion for AI safety to develop frameworks that ensure AI systems behave reliably and align with human values.
The OpenShift AI team is looking for a Senior Software Engineer with Kubernetes and MLOps or LLMOps experience to join our rapidly growing engineering team. Our team's focus is to make machine learning model deployment and monitoring seamless, scalable, and trustworthy across the hybrid cloud and the edge. This is an exciting opportunity to build and shape the next generation of hybrid cloud MLOps platforms.
In this role, you'll contribute as a technical infrastructure expert for responsible AI features of the open source Open Data Hub project by actively participating in the KServe, TrustyAI, Kubeflow, and several other open source communities. You will work as part of an evolving development team to rapidly design, secure, build, test, and release model serving, trustworthy AI, and model registry capabilities. This is primarily an individual contributor role; you will be a key contributor to the trustworthy AI and MLOps/LLMOps upstream communities and will collaborate closely with internal cross-functional development teams.
What you'll do
Lead the architecture and implementation of MLOps/LLMOps systems within OpenShift AI, establishing best practices for scalability, reliability, and maintainability while actively contributing to relevant open source communities
Design and develop robust, production-grade features focused on AI trustworthiness, including model monitoring, bias detection, and explainability frameworks that integrate seamlessly with OpenShift AI (see the bias-metric sketch after this list)
Drive technical decision-making around system architecture, technology selection, and implementation strategies for key MLOps components, with a focus on open source technologies like KServe and TrustyAI
Define and implement technical standards for model deployment, monitoring, and validation pipelines, while mentoring team members on MLOps best practices and engineering excellence
Collaborate with product management to translate customer requirements into technical specifications, architect solutions that address scalability and performance challenges, and provide technical leadership in customer-facing discussions
Lead code reviews, architectural reviews, and technical documentation efforts to ensure high code quality and maintainable systems across distributed engineering teams
Identify and resolve complex technical challenges in production environments, particularly around model serving, scaling, and reliability in enterprise Kubernetes deployments
Partner with cross-functional teams to establish technical roadmaps, evaluate build-vs-buy decisions, and ensure alignment between engineering capabilities and product vision
Provide technical mentorship to team members, including code review feedback, architecture guidance, and career development support while fostering a culture of engineering excellence
Own the safe, auditable, and reliable release of Kubernetes-native AI platform components, with a strong emphasis on progressive delivery, operational resilience, and supply-chain integrity
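To give a concrete flavor of the trustworthiness work referenced above, here is a minimal sketch of a group-fairness check in the spirit of the statistical parity difference metric that TrustyAI provides. It is illustrative only: the function name, variable names, and the alert threshold mentioned in the docstring are assumptions for this example, not part of the OpenShift AI or TrustyAI APIs.

```python
# Illustrative sketch of a bias-detection check (statistical parity difference).
# Names and thresholds are hypothetical; this is not an OpenShift AI/TrustyAI API.
import numpy as np

def statistical_parity_difference(y_pred: np.ndarray,
                                  protected: np.ndarray,
                                  favorable: int = 1) -> float:
    """SPD = P(y_pred == favorable | unprivileged) - P(y_pred == favorable | privileged).

    `protected` is a boolean mask marking the unprivileged group.
    Values near 0 indicate parity; a commonly used (assumed) alert band is |SPD| > 0.1.
    """
    unpriv_rate = np.mean(y_pred[protected] == favorable)
    priv_rate = np.mean(y_pred[~protected] == favorable)
    return float(unpriv_rate - priv_rate)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    preds = rng.integers(0, 2, size=1_000)   # mock binary model decisions
    group = rng.random(1_000) < 0.3          # mock protected-group membership mask
    spd = statistical_parity_difference(preds, group)
    print(f"SPD = {spd:+.3f}")               # flag for review if |SPD| exceeds the chosen threshold
```

In a production monitoring pipeline, a metric like this would typically be computed on a sliding window of inference requests and surfaced as an alert or dashboard signal rather than a print statement.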
What you'll bring
5+ years of software engineering experience, with at least 4 years focused on ML/AI systems in production environments
Strong expertise in Python, with demonstrated experience building and deploying production ML systems
Deep understanding of Kubernetes and container orchestration, particularly in ML workload contexts
Extensive experience with MLOps tools and frameworks (e.g., KServe, Kubeflow, MLflow, or similar)
Track record of technical leadership in open source projects, including significant contributions and community engagement
Proven experience architecting and implementing large-scale distributed systems
Strong background in software engineering best practices, including CI/CD, testing, and monitoring
Experience mentoring engineers and driving technical decisions in a team environment
Experience with Red Hat OpenShift or similar enterprise Kubernetes platforms
Contributions to ML/AI open source projects, particularly in the MLOps space
Background in implementing ML model monitoring, explainability, or bias detection systems
Experience with LLM operations and deployment at scale
Public speaking experience at technical conferences
Advanced degree in Computer Science, Machine Learning, or related field
Experience working with distributed engineering teams across multiple time zones
Familiarity with AI governance and responsible AI practices
The salary range for this position is $ - $. Actual offer will be based on your qualifications.
Pay Transparency
Red Hat determines compensation based on several factors including but not limited to job…