Senior Performance Engineer - AI Platforms
Job in Boston, Suffolk County, Massachusetts, 02298, USA
Listing for: Red Hat Inc.
Full Time position, listed on 2026-02-17
Job specializations:
- IT/Tech: AI Engineer, Systems Engineer
Job Description
The Red Hat Performance and Scale Engineering team is seeking a Senior Performance Engineer to join our PSAP (Performance and Scale for AI Platforms) team. In this role, you will drive the performance and scalability of distributed inference for Large Language Models (LLMs) as part of the Red Hat AI Inference Server (RHAIIS) open-source project. You will be responsible for characterizing, modeling, and understanding performance deltas to ensure industry-leading throughput, latency, and cost-efficiency of AI workloads.
This includes using tools such as vLLM, GuideLLM, and PyTorch.
This is a dynamic role for a seasoned engineer with a growth mindset who adapts to rapid change, has a strong commitment to open-source values, and is willing to learn and apply new technologies. You will join a vibrant open source culture and help promote performance and innovation within this Red Hat engineering team.
The broader mission of the Performance and Scale team is to establish performance and scale leadership across the Red Hat product and cloud services portfolio. The scope includes component-level, system, and solution analysis, along with targeted enhancements. The team collaborates with engineering, product management, product marketing, and customer support, as well as Red Hat's hardware and software ecosystem partners.
At Red Hat, our commitment to open source innovation extends beyond our products - it's embedded in how we work and grow. Red Hatters embrace change - especially in our fast-moving technological landscape - and have a strong growth mindset. That's why we encourage our teams to proactively, thoughtfully, and ethically use AI to simplify their workflows, cut complexity, and boost efficiency. This empowers our associates to focus on higher-impact work, creating smarter, more innovative solutions that solve our customers' most pressing challenges.
What you'll do
* Define and track key performance indicators (KPIs) and service level objectives (SLOs) for large-scale LLM inference services.
* Formulate and execute performance benchmarks using tools such as vLLM, GuideLLM, and PyTorch Profiler to characterize performance, drive improvements, and detect issues through data analysis and visualization (see the sketch after this list).
* Develop and maintain tools, scripts, and automated solutions that streamline performance benchmarking and AI model profiling tasks.
* Collaborate closely with cross-functional engineering teams to identify and address critical performance bottlenecks within the architecture and inference stacks.
* Partner with DevOps to bake performance gates into GitHub Actions/RHAIIS pipelines.
* Explore and experiment with emerging AI technologies relevant to software development, proactively identifying opportunities to incorporate new AI capabilities into existing workflows and tooling.
* Triage field and customer escalations related to performance; distill findings into upstream issues and product backlog items.
* Publish results, recommendations, and best practices through internal reports, presentations, external blogs, technical papers, and official documentation.
* Represent the team at internal and external conferences, presenting key findings and strategies.
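As an illustration of the benchmarking and gating responsibilities above, here is a minimal sketch in Python that measures per-request latency and throughput against an OpenAI-compatible completions endpoint (the style of API that vLLM serves) and exits non-zero when a latency budget is exceeded, so it could back a CI performance gate. The endpoint URL, model name, sample count, and threshold are illustrative assumptions, not part of the role or of any Red Hat tooling.

# Minimal sketch: benchmark an OpenAI-compatible /v1/completions endpoint
# (e.g. one served by vLLM) and gate on a p95 latency budget.
# ENDPOINT, MODEL, and P95_LATENCY_BUDGET_S are hypothetical values.
import statistics
import sys
import time

import requests

ENDPOINT = "http://localhost:8000/v1/completions"  # hypothetical local server
MODEL = "example-model"                            # hypothetical model name
P95_LATENCY_BUDGET_S = 2.0                         # hypothetical SLO threshold


def run_once(prompt, max_tokens=64):
    """Send one completion request and return its wall-clock latency in seconds."""
    payload = {"model": MODEL, "prompt": prompt, "max_tokens": max_tokens}
    start = time.perf_counter()
    resp = requests.post(ENDPOINT, json=payload, timeout=120)
    resp.raise_for_status()
    return time.perf_counter() - start


def main():
    latencies = [run_once("Summarize the benefits of open source.") for _ in range(20)]
    p95 = statistics.quantiles(latencies, n=20)[-1]  # rough p95 over 20 samples
    throughput = len(latencies) / sum(latencies)     # sequential requests per second
    print(f"p95 latency: {p95:.3f}s  throughput: {throughput:.2f} req/s")
    if p95 > P95_LATENCY_BUDGET_S:
        sys.exit(1)  # non-zero exit fails the surrounding CI job


if __name__ == "__main__":
    main()

In practice, tools like GuideLLM and PyTorch Profiler provide far richer workload generation and profiling than this sketch; its only purpose is to show the shape of a latency/throughput measurement wired to a pass/fail threshold.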
What you'll have
* 5+ years of experience in performance engineering or systems-level software design.
* Hands-on experience with operating systems, distributed systems, or system-level performance tooling.
* Understanding of AI and LLM fundamentals.
* Fluency in Python (data & ML) and strong Bash/Linux skills.
* Knowledge of performance benchmarking and profiling for LLMs.
* Exceptional communication skills: able to translate raw performance data into customer value and executive narratives.
* Commitment to open-source values.
The following is considered a plus
* Master's or PhD in Computer Science, AI, or a related field.
* History of upstream contributions and community leadership.
* Experience publishing blogs or technical papers.
* Hands-on experience with any of the following: Kubernetes, OpenShift, RHAIIS, or RHEL AI.
* Familiarity with performance observability stacks such as perf/eBPF tools, Nsight Systems, PyTorch…
Position Requirements:
10+ years work experience