AI Operations Platform Consultant
Listed on 2026-02-21
IT/Tech
Systems Engineer, AI Engineer, Data Engineer, Cloud Computing
Overview
- Brings extensive experience operating large-scale GPU-accelerated AI platforms, deploying and managing LLM inference systems on Kubernetes, with strong expertise in Triton Inference Server and TensorRT-LLM.
- Leads production-grade LLM pipelines with GPU-aware scheduling, load balancing, and real-time performance tuning across multi-node clusters; designs containerized microservices, implements robust deployment workflows, and maintains operational reliability in mission-critical environments.
- Has led end-to-end LLMOps processes involving model versioning, engine builds, automated rollouts, and secure runtime controls.
- Develops comprehensive observability for inference systems using telemetry and custom dashboards to track GPU health, latency, throughput, and service availability (a minimal monitoring sketch follows this overview).
- Applies advanced optimization techniques such as mixed precision, quantization, sharding, and batching to improve efficiency; brings a strong blend of platform engineering, AI infrastructure, and hands-on operational experience running high-performance LLM systems in production.
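As a concrete illustration of the observability work described above, the following is a minimal sketch of polling Triton Inference Server's built-in Prometheus metrics endpoint (served on port 8002 by default) from Python. The host name and the particular metric families printed are assumptions for illustration; label sets vary per GPU and per model.

```python
import requests

# Triton serves Prometheus-format metrics on port 8002 by default.
# "triton-host" is a placeholder; substitute the actual service address.
METRICS_URL = "http://triton-host:8002/metrics"

def scrape_metrics() -> dict:
    """Fetch Triton's metrics page and parse it into {series: value} pairs."""
    resp = requests.get(METRICS_URL, timeout=5)
    resp.raise_for_status()
    metrics = {}
    for line in resp.text.splitlines():
        # Skip the # HELP / # TYPE comment lines and blanks.
        if line.startswith("#") or not line.strip():
            continue
        series, _, value = line.rpartition(" ")
        try:
            metrics[series] = float(value)
        except ValueError:
            pass  # ignore anything that is not a plain numeric sample
    return metrics

if __name__ == "__main__":
    for series, value in scrape_metrics().items():
        # nv_gpu_utilization and nv_inference_request_success are standard
        # Triton metric families useful for GPU-health and SLO dashboards.
        if series.startswith(("nv_gpu_utilization", "nv_inference_request_success")):
            print(f"{series} = {value}")
```

In practice these samples would be scraped by Prometheus and rendered in dashboards rather than polled by hand; the sketch only shows what the raw telemetry looks like.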
AI Operations Platform Consultant
- Experience deploying, managing, operating, and troubleshooting containerized services at scale on Kubernetes (OpenShift) for mission-critical applications (see the deployment sketch after this list)
- Experience deploying, configuring, and tuning LLMs using TensorRT-LLM and Triton Inference Server
- Managing MLOps/LLMOps pipelines, using TensorRT-LLM and Triton Inference Server to deploy inference services in production
- Setup and operation of AI inference service monitoring for performance and availability
- Experience deploying and troubleshooting LLMs on a containerized platform, including monitoring, load balancing, etc.
- Experience with standard processes for operating a mission-critical system – incident management, change management, event management, etc.
- Managing scalable infrastructure for deploying and managing LLMs
- Deploying models in production environments, including containerization, microservices, and API design (see the client sketch after this list)
- Triton Inference Server architecture, configuration, and deployment
- Model optimization techniques using Triton with TensorRT-LLM
- Model optimization techniques including pruning, quantization, and knowledge distillation (see the quantization sketch after this list)
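The sketches below make a few of these requirements concrete; they are minimal illustrations under stated assumptions, not definitive implementations. First, deploying Triton as a GPU-scheduled Kubernetes workload using the official kubernetes Python client; the namespace, labels, replica count, and image tag are all placeholder assumptions, and GPU scheduling relies on the NVIDIA device plugin being installed in the cluster.

```python
from kubernetes import client, config

# Assumes kubeconfig access to the target cluster; every name below
# (namespace, image tag, labels, model path) is an illustrative placeholder.
config.load_kube_config()

container = client.V1Container(
    name="triton",
    image="nvcr.io/nvidia/tritonserver:24.05-trtllm-python-py3",
    args=["tritonserver", "--model-repository=/models"],
    ports=[client.V1ContainerPort(container_port=8000)],
    resources=client.V1ResourceRequirements(
        # Requesting nvidia.com/gpu makes the scheduler place the pod on a
        # GPU node (via the NVIDIA device plugin): GPU-aware scheduling.
        limits={"nvidia.com/gpu": "1"}
    ),
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="triton-llm"),
    spec=client.V1DeploymentSpec(
        replicas=2,  # a Service in front of the replicas handles load balancing
        selector=client.V1LabelSelector(match_labels={"app": "triton-llm"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "triton-llm"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(
    namespace="inference", body=deployment
)
```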
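Next, a client-side sketch of calling a deployed LLM endpoint with the official tritonclient package. The model name (ensemble) and tensor names (text_input, text_output) mirror the TensorRT-LLM backend examples but are assumptions about the actual model configuration.

```python
import numpy as np
import tritonclient.http as httpclient

# Placeholder endpoint; Triton's HTTP port is 8000 by default.
client = httpclient.InferenceServerClient(url="localhost:8000")

def generate(prompt: str) -> str:
    # A [1, 1]-shaped BYTES tensor named "text_input" matches the
    # TensorRT-LLM ensemble examples; both names are assumptions here.
    text = np.array([[prompt.encode("utf-8")]], dtype=object)
    inp = httpclient.InferInput("text_input", list(text.shape), "BYTES")
    inp.set_data_from_numpy(text)
    out = httpclient.InferRequestedOutput("text_output")
    result = client.infer(model_name="ensemble", inputs=[inp], outputs=[out])
    return result.as_numpy("text_output").flatten()[0].decode("utf-8")

if __name__ == "__main__":
    assert client.is_server_live()  # basic availability check before inference
    print(generate("Explain GPU-aware scheduling in one sentence."))
```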
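Finally, to illustrate the quantization requirement: a post-training dynamic-quantization sketch in PyTorch on a toy model. Production LLM quantization normally happens at TensorRT-LLM engine-build time (e.g., INT8 or FP8 checkpoints), so this only demonstrates the underlying idea of trading weight precision for memory and speed.

```python
import torch
import torch.nn as nn

# A toy stand-in for a transformer feed-forward block; purely illustrative.
model = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.ReLU(),
    nn.Linear(4096, 1024),
)

# Post-training dynamic quantization: Linear weights are stored as int8 and
# activations are quantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def param_bytes(m: nn.Module) -> int:
    return sum(p.numel() * p.element_size() for p in m.parameters())

print(f"fp32 parameter size: {param_bytes(model) / 1e6:.1f} MB")
x = torch.randn(1, 1024)
print("int8 model output shape:", quantized(x).shape)  # same interface, smaller weights
```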
CLOUD ANALYTICS TECHNOLOGIES LLC is an equal opportunity employer inclusive of female, minority, disability and veterans (M/F/D/V). Hiring, promotion, transfer, compensation, benefits, discipline, termination and all other employment decisions are made without regard to race, color, religion, sex, sexual orientation, gender identity, age, disability, national origin, citizenship/immigration status, veteran status or any other protected status. CLOUD ANALYTICS TECHNOLOGIES LLC will not make any posting or employment decision that does not comply with applicable laws relating to labor and employment, equal opportunity, employment eligibility requirements or related matters.
Nor will CLOUD ANALYTICS TECHNOLOGIES LLC require in a posting or otherwise U.S. citizenship or lawful permanent residency in the U.S. as a condition of employment except as necessary to comply with law, regulation, executive order, or federal, state, or local government contract.