Axial AI Governance Director
Listed on 2026-02-16
IT/Tech
AI Engineer, Data Security, Data Scientist
Overview
Introduction to role:
Are you ready to shape how responsible AI unlocks value across a global, end-to-end enterprise? This role builds the guardrails and accelerators that let our teams innovate with confidence, ensuring AI advances business performance while protecting patients, data, and reputation. You will join a high-energy, cross-functional group driving a once-in-a-generation transformation. Partnering with product, engineering, legal, privacy, security, and compliance, you will design and operationalize AI governance that enables safe experimentation, fast scaling, and measurable outcomes—from discovery to delivery.
Can you see yourself setting standards that become the benchmark for responsible AI at enterprise scale?
Accountabilities:
- AI Governance Framework and Strategy: Lead the design and rollout of a comprehensive AI Governance Framework, defining goals, roadmap, and alignment with global AI and data governance strategies to accelerate safe, compliant adoption.
- Policies, Standards, and Controls: Define, monitor, and remediate policies and standards for Generative and Agentic AI and broader AI systems, including control libraries, risk tiers, and exception handling processes that reduce risk while enabling innovation.
- Subject Matter Leadership: Apply deep expertise to resolve complex AI governance challenges, establish demand routing, and influence strategic decisions that shape enterprise-wide AI use.
- Forums and Working Groups: Set direction and manage AI Governance forums and cross-functional working groups to drive cohesive, timely decision-making and accountability.
- Risk, Compliance, and Ethical AI: Design and implement policies balancing benefit and risk, working with privacy, legal, security, and ethics partners to ensure appropriate controls and monitoring for AI data access and usage.
- Regulatory Readiness and Audits: Prepare for audits and partner with governance forums to sustain compliance with evolving AI regulations, demonstrating control effectiveness and continuous improvement.
- Incident Response: Lead responses to AI-related incidents, including root cause analysis, remediation planning, and lessons learned, strengthening resilience and reducing recurrence.
- Compliance Checks and Reporting: Define the approach and drive execution of compliance checks to verify adherence to policy, and report outcomes to collaborators to inform action.
- AI Data Risk Management: Identify, report, and act upon AI data risks, ensuring issues are surfaced early and addressed decisively.
- Environment Management and Enablement: Develop and implement strategies that enable safe experimentation and testing of AI models while protecting critical data assets and intellectual property.
- Embedded Governance: Work with product and engineering to embed governance seamlessly into build and deployment pipelines, enabling rapid, compliant scaling of new AI capabilities.
- Solution Catalog Stewardship: Manage the catalog of approved AI agents, models, and components to improve reuse, transparency, and quality across programs.
- Connector and Tool Governance: Govern enablement of connectors and external AI tools in close consultation with Architecture, Product, and Engineering to mitigate integration risk and ensure value.
- Governance Technology Adoption: Champion adoption of governance technologies such as automated policy enforcement and AI risk monitoring to improve oversight and efficiency.
- AI Standards and Ontologies: Lead cross-functional development of AI-specific data standards, including conceptual models, glossaries, and ontologies, to improve consistency and interoperability.
- AI Data Quality Strategy: Define the data quality strategy and metrics for AI-ready data and AI outputs, ensuring continuous measurement and reporting to partners.
- Lifecycle Governance: Establish procedures to govern each AI model and associated data asset across its full lifecycle, from design to retirement, ensuring traceability and accountability.
- Dashboards and Reporting: Build and maintain dashboards to monitor compliance, risk posture, and control effectiveness, enabling leadership insight and timely intervention.
- Collaborator Partnering: Provide deep subject matter expertise to internal collaborators, influencing program and project direction to achieve compliance and mitigate high-level risks.
- AI Literacy, Culture, and Change: Lead change and improvement initiatives for AI literacy and compliance, devising communication, training, and support strategies that embed responsible AI into everyday practices.
Essential Skills/Experience:
- Proven ability to design and implement an AI Governance Framework aligned to global AI and data governance strategies, goals, and roadmap.
- Experience defining, monitoring, and remediating AI policies and standards for Generative and Agentic AI and broader AI systems, including control libraries, risk tiers, and exception handling.
- Deep subject matter expertise to address complex AI governance…