
AI Program Lead, ERM

Job in Boston, Suffolk County, Massachusetts, 02298, USA
Listing for: Liberty Mutual Insurance
Full Time position
Listed on 2026-02-16
Job specializations:
  • IT/Tech
    Data Security, IT Business Analyst, Cybersecurity, Information Security
Salary/Wage Range or Industry Benchmark: 80,000 - 100,000 USD yearly
Job Description

We are seeking a strategic and execution-oriented Responsible AI (RAI) Program Lead to own and evolve the enterprise Responsible AI risk governance framework. This role is accountable for ensuring our use of AI technologies is safe, ethical, and aligned with the firm’s values, risk appetite, and regulatory expectations.

Reporting to the Chief Risk Officer, the RAI Program Lead will design, operate, and continuously improve our Responsible AI governance program across the enterprise. This includes defining and maintaining the RAI operating model, policy and process infrastructure, and governance forums, as well as driving organization-wide awareness and adoption. The role serves as a central risk and governance point of coordination across business, technology, legal, risk, and compliance functions, embedding Responsible AI considerations into day-to-day AI decision‑making and delivery.

This is a risk ownership and governance role within the second line of defense. While the role does not directly build AI systems or tooling, it partners closely with teams that do, providing independent risk perspective, guidance, and oversight. This is an individual contributor role with enterprise‑wide influence, executed through partnership and collaboration across functions.

Key Responsibilities:

Program Strategy & Execution
  • Own and operationalize the enterprise Responsible AI program roadmap, including capabilities, milestones, KPIs, and maturity assessments
  • Partner with senior stakeholders to integrate Responsible AI objectives into enterprise strategy, data and model governance, and AI‑enabled product development
Governance & Policy
  • Lead the operation of Responsible AI governance forums (e.g., steering committee, working groups), including agenda‑setting, materials, action tracking, and executive reporting
  • Develop, maintain, and evolve Responsible AI policies, standards, and procedures aligned with internal risk appetite and emerging global regulation, in close partnership with Model Risk Management, Third Party Risk Management, Legal, Compliance, and Enterprise Risk Management
Process Design & Risk Oversight
  • Design and maintain scalable Responsible AI processes for AI risk assessments, use case reviews, approvals, and issue escalation within a federated operating model
  • Provide risk oversight of AI/ML use cases across their lifecycle, including risk tiering, documentation standards, and lifecycle controls
  • Identify, assess, and elevate material Responsible AI risks and control gaps to appropriate governance forums and senior leadership
Training, Culture, & Change Management
  • In partnership with Legal and Compliance, drive enterprise Responsible AI awareness and training through learning programs, communications, and community or ambassador networks
  • Collaborate with Talent, Technology, and Learning partners to embed Responsible AI principles into onboarding, role‑based expectations, and ways of working
Risk, Compliance, and Regulatory Alignment
  • Serve as a key liaison to Legal, Compliance, Risk, and Audit to ensure alignment with regulatory expectations and internal control frameworks
  • Monitor evolving AI technologies, internal use cases, and external regulations and standards (e.g., EU AI Act, U.S. Executive Orders, ISO/IEC 42001), and recommend program and policy updates to governance bodies as needed
Metrics & Reporting
  • Define and deliver regular reporting on Responsible AI program effectiveness, issues, and risk trends to executive leadership and board‑level committees
  • Support external disclosures, regulatory inquiries, and internal audits related to AI governance, as required
Qualifications
  • Competencies typically acquired through a Bachelor’s degree in a quantitative field and 10+ years of relevant experience
  • An advanced degree (MBA or equivalent) is highly preferred, as is a professional qualification in one or more areas of enterprise risk management
  • 7+ years of experience in risk management, governance, program management, or policy roles at the intersection of technology, data, and compliance
  • Demonstrated experience owning or being accountable for enterprise‑level governance programs related to…