
Senior Engineer - LLMOps & MLOps

Job in Lincoln, Lancaster County, Nebraska, 68511, USA
Listing for: Sedgwick
Full Time position
Listed on 2026-03-09
Job specializations:
  • IT/Tech
    AI Engineer, Data Engineer
Salary/Wage Range: USD 100,000 to 125,000 per year
Job Description & How to Apply Below

By joining Sedgwick, you'll be part of something truly meaningful. It’s what our 33,000 colleagues do every day for people around the world who are facing the unexpected. We invite you to grow your career with us, experience our caring culture, and enjoy work-life balance. Here, there’s no limit to what you can achieve.

Recognized by Newsweek among America’s Greatest Workplaces (National Top Companies)

Certified as a Great Place to Work®

Fortune Best Workplaces in Financial Services & Insurance

Role Overview

This is a high-stakes, execution-focused role within the Transformation Office. We are looking for a "day-one" engineer to own the production lifecycle of our AI initiatives. Your mission is to build the automated infrastructure that bridges our legacy data systems with modern AWS and Azure AI services. You will be responsible for the "Ops" of AI: ensuring that LLM applications, RAG pipelines, and traditional ML models are deployable, observable, and scalable in a multi-cloud environment.

Key Responsibilities
  • Multi-Cloud Pipeline Execution:
    Build and maintain automated CI/CD and CT (Continuous Training) pipelines across AWS (SageMaker/Bedrock) and Azure (AI Studio).
  • LLMOps Framework Implementation:
    Design and execute the infrastructure for Retrieval-Augmented Generation (RAG), including vector database management (OpenSearch, Pinecone, or Azure AI Search) and semantic index optimization.
  • Legacy Data Connectivity:
    Build the engineering "pipes" to securely ingest and move data from legacy systems (Mainframes, SQL Server, on-prem DBs) into cloud-native MLOps workflows.
  • Automated Model Evaluation:
    Implement systemized frameworks for LLM evaluation (LLM-as-a-judge, ROUGE, METEOR) and traditional ML validation to ensure performance before deployment.
  • Observability & Monitoring:
    Deploy real-time monitoring for model drift, hallucination detection, latency, and token consumption to manage both quality and cost.
  • Infrastructure as Code (IaC):
    Manage all AI resources using Terraform or CloudFormation, ensuring the cloud posture is reproducible, secure, and follows a "Privacy by Design" mandate.
  • Advanced Analytics Integration:
    Partner with teams using platforms like Palantir, Databricks, or Snowflake to ensure a high-fidelity data flow between analytical ontologies and production models.
  • IT & Security Diplomacy:
    Work directly with central IT and Security to navigate IAM roles, VPC peering, and firewall configurations, clearing the path for rapid transformation.
  • Scalable Inference Engineering:
    Optimize model serving endpoints for high-throughput and low-latency, utilizing containerization (Docker/Kubernetes) and serverless architectures where appropriate.
  • Prompt & Model Versioning:
    Establish rigorous version control for prompts (Prompt Ops), model weights, and data snapshots to ensure 100% auditability and rollback capability.
  • Data Science Engineering:
    Support the data science lifecycle by automating feature stores, feature engineering pipelines, and the transition of experimental notebooks into hardened production microservices.
  • Security & Compliance Hardening:
    Implement automated scanning and guardrails (e.g., Bedrock Guardrails or Azure Content Safety) to prevent prompt injection and data leakage.
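As a minimal illustration of the automated model evaluation described above (function and variable names are illustrative, not part of Sedgwick's stack), a unigram-overlap score in the spirit of ROUGE-1 can be computed in a few lines of plain Python:

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap F1 between a model output and a reference answer.

    A simplified, tokenizer-free sketch of ROUGE-1; production pipelines
    would typically use an established evaluation library instead.
    """
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # shared unigram count
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

A pre-deployment gate could then fail a model version whose average score over a golden test set drops below an agreed threshold.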
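For the drift-monitoring side of observability, one common statistic is the Population Stability Index (PSI) between a model's training-time and live score distributions. The sketch below is a hypothetical, dependency-free version; the ~0.2 alert threshold is a widely used convention, not a Sedgwick policy:

```python
import math

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline (expected) and live (actual) distribution.

    Values above roughly 0.2 are conventionally treated as significant drift.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a degenerate range

    def frac(values, i):
        count = sum(1 for v in values if lo + i * width <= v < lo + (i + 1) * width)
        if i == bins - 1:  # include the right edge in the last bin
            count += sum(1 for v in values if v == hi)
        return max(count / len(values), 1e-6)  # floor avoids log(0)

    return sum(
        (frac(actual, i) - frac(expected, i))
        * math.log(frac(actual, i) / frac(expected, i))
        for i in range(bins)
    )
```

In a monitoring job, this would run on a schedule against a sliding window of production scores and raise an alert when the index crosses the threshold.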
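Prompt versioning, as listed above, can be grounded in content addressing: hashing the prompt template together with its generation parameters yields a stable, auditable identifier. A minimal sketch (names are hypothetical):

```python
import hashlib
import json

def prompt_fingerprint(template: str, params: dict) -> str:
    """Content-addressed version ID for a prompt.

    Hashes the template plus canonically ordered parameters, so any change
    to either produces a new ID that can be logged for audit and rollback.
    """
    payload = json.dumps({"template": template, "params": params}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()[:12]
```

Storing this fingerprint alongside every model response makes each output traceable to the exact prompt version that produced it.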
Qualifications
  • Education:
    Bachelor’s degree in Computer Science or a related field required; Master’s degree in a quantitative discipline highly desirable.
  • Proven Execution: 6+ years of engineering experience, with a minimum of 3 years strictly focused on MLOps or LLMOps in a production environment.
  • AWS & Azure Mastery:
    Deep, hands‑on proficiency in both ecosystems. You must be able to configure Bedrock and Azure OpenAI services, including private networking and endpoint security, on day one.
  • Technical Stack:
    Expert Python, SQL, and PySpark. Extensive experience with containerization (Docker, Kubernetes) and orchestration tools (Airflow, Kubeflow, or Step Functions).
  • LLM Tooling:
    Professional experience with evaluation and observability frameworks such as LangSmith, Arize Phoenix, or WhyLabs.
  • Data Science Flavor: A strong understanding of statistical validation, model evaluation metrics, and the ability to partner with…
Position Requirements
10+ years of work experience