AI Lead Engineer - AWS Platform
Denver, Denver County, Colorado, 80285, USA
Listed on 2026-02-16
IT/Tech
AI Engineer, Data Engineer, Machine Learning/ML Engineer
Overview
We are modernizing our enterprise data and analytics ecosystem by embedding AI and Generative AI capabilities across Policy, Claims, Billing, and Administrative systems. As the AI Lead Engineer – AWS Platform, you will play a key role in supporting The Mutual Group (TMG), GuideOne Insurance, and future members by architecting, designing, and leading the delivery of end-to-end AI/ML and Generative AI solutions on AWS, leveraging Bedrock, SageMaker, Lambda, Step Functions, Glue, and vector databases.
Department: Information Technology
Work Arrangement: Employees who live within 30 miles of the TMG home office are expected to follow a hybrid or in-office schedule. The initial training period may require additional in-office days.
AI Platform Architecture & Strategy
- Lead the design and implementation of a scalable, enterprise-grade AI platform on AWS, integrating LLMs, Generative AI, and traditional ML models.
- Define architectural standards for LLM orchestration, RAG pipelines, and AI model lifecycle management.
- Design a Medallion-based AI data architecture connecting Policy, Claims, Billing, and Administration systems for unified analytics and AI-driven insights.
- Partner with enterprise architects to align AI initiatives with cloud modernization, data governance, and security frameworks.
- Evaluate new AWS services (Amazon Q, Bedrock Agents, Titan, SageMaker HyperPod) for platform scalability and business alignment.
Model Development, Deployment & Operations
- Lead development and fine-tuning of LLMs, transformers, and generative models using Amazon SageMaker, Bedrock, or custom frameworks.
- Architect and oversee end-to-end MLOps pipelines, from training and validation through deployment, monitoring, and retraining, using CodePipeline, SageMaker Model Monitor, and CloudWatch.
- Implement retrieval-augmented generation (RAG) workflows integrating vector databases (Kendra, Pinecone, or Weaviate) for grounded, domain-specific AI responses; a sketch of this pattern follows this list.
- Ensure production-grade model serving, scaling, and versioning with SageMaker endpoints, Lambda, and Step Functions orchestration.
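To make the RAG responsibility above concrete, here is a minimal Python sketch of the pattern: retrieve domain passages, then ground a Bedrock model call on them. It assumes boto3 credentials with Bedrock access; the model ID is only an example, and retrieve_top_k is a hypothetical placeholder for whatever Kendra, Pinecone, or Weaviate query the platform standardizes on.

```python
"""Minimal RAG sketch: retrieve context, then ground a Bedrock call on it."""
import json

import boto3

bedrock = boto3.client("bedrock-runtime")


def retrieve_top_k(question: str, k: int = 3) -> list[str]:
    # Hypothetical placeholder: in production this would query Kendra,
    # Pinecone, or Weaviate for the k passages most similar to the question.
    return [f"(placeholder passage {i} related to: {question})" for i in range(k)]


def answer_with_rag(question: str) -> str:
    context = "\n\n".join(retrieve_top_k(question))
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    # Request body follows the Anthropic Messages format used on Bedrock;
    # substitute whichever foundation model the account has enabled.
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 512,
            "messages": [{"role": "user", "content": prompt}],
        }),
    )
    return json.loads(response["body"].read())["content"][0]["text"]
```

Grounding the prompt on retrieved passages, rather than the model's parametric knowledge, is what keeps responses domain-specific and auditable; the same structure scales by swapping in the production retriever and wrapping the call in Step Functions orchestration.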
Intelligent Data Processing & Automation
- Architect data ingestion pipelines to process multimodal content (PDFs, images, audio, emails, structured/unstructured data) using AWS Glue, Textract, Transcribe, and Comprehend (see the sketch after this list).
- Lead the design of AI-driven automation workflows for classification, summarization, and entity extraction across insurance documents.
- Optimize pipelines for performance, scalability, and cost efficiency through serverless and event-driven architectures.
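As one illustration of the ingestion and entity-extraction work described above, here is a hedged Python sketch that OCRs a single-page document with Textract and then runs Comprehend entity detection on the extracted text. The bucket and object key are placeholders, and a real pipeline would use the asynchronous Textract APIs plus Glue or event-driven triggers for multi-page, high-volume processing.

```python
"""Sketch: OCR a stored document with Textract, then extract entities with Comprehend."""
import boto3

textract = boto3.client("textract")
comprehend = boto3.client("comprehend")


def extract_entities(bucket: str, key: str) -> list[dict]:
    # Synchronous OCR suits single-page images; multi-page documents should
    # go through start_document_text_detection instead.
    ocr = textract.detect_document_text(
        Document={"S3Object": {"Bucket": bucket, "Name": key}}
    )
    text = "\n".join(
        block["Text"] for block in ocr["Blocks"] if block["BlockType"] == "LINE"
    )

    # Comprehend's synchronous API limits request size, so long documents
    # should be chunked; truncation here only keeps the sketch short.
    result = comprehend.detect_entities(Text=text[:4500], LanguageCode="en")
    return result["Entities"]


if __name__ == "__main__":
    # Placeholder bucket and key names, for illustration only.
    for entity in extract_entities("claims-intake-bucket", "sample-claim.png"):
        print(entity["Type"], entity["Text"], round(entity["Score"], 2))
```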
MLOps, DevOps & Infrastructure Automation
- Define and implement CI/CD practices for AI/ML using AWS CodePipeline, CodeBuild, and Terraform/CloudFormation.
- Standardize infrastructure-as-code and environment provisioning across development, staging, and production.
- Integrate monitoring, observability, and audit logging into all AI components to ensure reliability and compliance.
- Drive adoption of containerized model deployments via SageMaker JumpStart, EKS, or Docker-based inference endpoints, as sketched below.
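For the Docker-based inference endpoints mentioned in the last bullet, the sketch below shows one way to promote a containerized model to a real-time SageMaker endpoint with plain boto3. The image URI, model artifact path, role ARN, and instance type are placeholders; in practice these calls would sit behind the CodePipeline/Terraform automation described above rather than being run by hand.

```python
"""Sketch: register a Docker inference image as a SageMaker model and host it
on a real-time endpoint. All names, ARNs, and URIs are placeholders."""
import boto3

sm = boto3.client("sagemaker")

MODEL_NAME = "claims-classifier"
ENDPOINT_NAME = "claims-classifier-prod"

# Register the container image and model artifact as a SageMaker Model.
sm.create_model(
    ModelName=MODEL_NAME,
    PrimaryContainer={
        "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/claims-classifier:latest",
        "ModelDataUrl": "s3://example-models/claims-classifier/model.tar.gz",
    },
    ExecutionRoleArn="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
)

# Describe how the model is hosted: instance type, count, and traffic variant.
sm.create_endpoint_config(
    EndpointConfigName=f"{ENDPOINT_NAME}-config",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": MODEL_NAME,
        "InstanceType": "ml.m5.xlarge",
        "InitialInstanceCount": 1,
    }],
)

# Create the endpoint and wait until it is InService before routing traffic.
sm.create_endpoint(
    EndpointName=ENDPOINT_NAME,
    EndpointConfigName=f"{ENDPOINT_NAME}-config",
)
sm.get_waiter("endpoint_in_service").wait(EndpointName=ENDPOINT_NAME)
```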
Responsible AI, Governance & Security
- Establish Responsible AI frameworks covering model explainability, fairness, safety, and bias detection.
- Configure Bedrock Guardrails and implement safety layers to reduce hallucinations and enforce ethical, policy-compliant responses (see the sketch after this list).
- Ensure compliance with HIPAA, SOC2, and data privacy laws through secure data handling, encryption, and audit trails.
- Partner with InfoSec, Legal, and Risk teams to align AI development with enterprise governance policies.
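To illustrate the guardrail configuration referenced in this section, here is a small Python sketch that attaches a pre-created Bedrock guardrail to a Converse call so that unsafe inputs and outputs are filtered before they reach users. The guardrail ID/version and model ID are placeholders, and the parameter names assume a recent boto3 release that includes the Converse API.

```python
"""Sketch: route a chat turn through Bedrock with a guardrail attached.
Guardrail ID/version and model ID are placeholders."""
import boto3

bedrock = boto3.client("bedrock-runtime")


def guarded_chat(user_message: str) -> str:
    response = bedrock.converse(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",
        messages=[{"role": "user", "content": [{"text": user_message}]}],
        guardrailConfig={
            "guardrailIdentifier": "gr-example123",  # ID of a guardrail created in Bedrock
            "guardrailVersion": "1",
            "trace": "enabled",  # keep intervention traces for audit logging
        },
    )
    # When the guardrail intervenes, stopReason is "guardrail_intervened" and
    # the configured blocked message is returned instead of raw model output.
    if response.get("stopReason") == "guardrail_intervened":
        print("guardrail blocked or masked this exchange")
    return response["output"]["message"]["content"][0]["text"]
```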
Leadership, Collaboration & Mentorship
- Lead a cross-functional team of AI engineers, MLOps specialists, and data scientists, providing technical direction and mentorship.
- Collaborate closely with business stakeholders, architects, and product teams to identify high-impact AI use cases.
- Drive AI Center of Excellence (CoE) initiatives—develop best practices, reusable components, and internal knowledge repositories.
- Promote a culture of experimentation, continuous learning, and responsible AI adoption across the enterprise.
Qualifications
- Bachelor’s or Master’s degree in Computer Science, AI/ML, Data Engineering, or related…