AI Systems Administrator
Listed on 2026-02-16
IT/Tech
IT Support, Cloud Computing
Overview
We’re looking for an AI Systems Administrator who is excited about supporting smart technologies that help teams work more efficiently. Step into a role where innovation, performance, and future-focused tools come together.
Who is Meissner?
The goal at Meissner is to be more than simply good; it is to be extraordinary. Extraordinary performance comes from extraordinary people.
As a group, Meissner is passionate about helping our clients manufacture lifesaving and life-enhancing drugs, therapies, and vaccines. We develop, manufacture, supply, and service advanced microfiltration products and single-use systems worldwide.
We know that when you are passionate about what you do, it’s more than just a job.
Meissner is focused on the development of the whole individual, and we have programs and tools in place to help us stay at our best mentally and physically. In alignment with that commitment, Meissner has established a Learning and Development department focused solely on cultivating our team. When you grow, we grow.
How you will make an impact:
Meissner Corporation is seeking an AI Systems Administrator to own and evolve the systems that host our AI tooling and services. You will deploy, operate, and optimize Linux-based infrastructure and container platforms in a role that bridges traditional systems administration, MLOps, and AI-specific operations. You will ensure low-latency, reliable, cost-effective model serving; maintain robust data and model governance; implement observability and incident response; and streamline integration between development teams and production systems. A genuine interest in AI/ML and some experience supporting GPU workloads or model serving are strong advantages.
- Design, deploy, and maintain Linux servers (on-prem and/or cloud) with a focus on stability, performance, and security.
- Build, manage, and troubleshoot container environments and orchestration platforms such as Docker and Kubernetes.
- Deploy and support AI/ML serving and development environments (model-serving stacks, GPU nodes, CUDA drivers, containerized ML frameworks).
- Design, deploy, and maintain Model Context Protocol (MCP) services to integrate AI solutions with various internal tools and data sets.
- Implement CI/CD pipelines for model and application deployments; integrate container builds and image registries.
- Monitor system and application health; implement observability and alerting (Prometheus, Grafana, ELK, or similar).
- Lead incident response, root cause analysis, and run postmortems for production AI outages or degradations; participate in on-call rotations.
- Harden systems: patching, secure configurations, user access and privilege management, and incident response.
- Collaborate with stakeholders to gather and prioritize requests for features and adjustments to internal AI platforms; translate requirements into technical tasks and deliverables.
- Support web applications and internal AI platform deployments, managing feature requests, change control, and release coordination.
- Assist IT support teams by investigating and resolving incidents and requests related to Linux infrastructure and AI platforms, ensuring timely escalation and communication.
- Support compliance, privacy, and audit requirements for models and data used in AI systems.
- Create and maintain user documentation, runbooks, and operational procedures.
This is an on-site role based out of our headquarters in Camarillo, CA.
The skills and experience you’ll need:
- Bachelor’s degree in Computer Science, Information Systems, or related field, or equivalent experience.
- 3+ years working experience in Linux environments.
- Experience with HTML, CSS, React, and other web technologies.
- Scripting and automation experience (Bash, and preferably Python) for automating routine operations and tooling.
- Prior experience supporting AI/ML workloads (GPU drivers, CUDA/cuDNN, NVIDIA Docker, node scheduling for GPU farms) preferred.
- Prior experience supporting Docker and container lifecycle (security, images, registries, best…