Manager, Governance, Risk & Compliance
Listed on 2026-02-16
IT/Tech
Cybersecurity, Data Security, Information Security
Overview
USRC's greatest strength as a leader in the dialysis industry is our ability to recognize and celebrate the differences in our diverse workforce. We strongly believe in recruiting top talent and creating a diverse and inclusive work climate and culture at all levels of our organization.
The Manager, Governance, Risk & Compliance (GRC) is accountable for leading the organization's enterprise risk and compliance program - spanning third‑party risk management, audit readiness and execution, policy governance, and GRC platform administration - while ensuring alignment with regulatory and industry frameworks (HIPAA, HITRUST, SOC 2, PCI DSS, NIST 800, NIST RMF). This role provides program ownership, establishes KPIs and metrics, and drives cross‑functional execution with business, technology, and external partners, while enabling safe, compliant, and scalable AI adoption.
The manager surfaces and provides recommendations on risk treatment, control priorities, and vendor remediation expectations, and serves as a primary point of contact to auditors, vendors, assessors, and senior business and IT stakeholders.
Summary
Essential Duties and Responsibilities include the following. Other duties and tasks may be assigned.
Responsibilities
- Define the GRC program strategy, roadmap, and success metrics; align initiatives with organizational risk appetite and business objectives.
- Establish and continuously improve governance processes and control frameworks; report to leadership and risk committees.
- Operationalize an enterprise AI governance framework covering model development, procurement, deployment, monitoring, and retirement.
- Classify AI systems by risk tier (e.g., clinical decision support, operational automation, administrative copilots) and ensure proportional controls are applied.
- Oversee enterprise risk identification, assessment, and treatment plans; ensure timely remediation tracking and executive reporting.
- Approve risk ratings and risk acceptance recommendations; escalate material risks and propose mitigation investments.
- Identify, assess, and document AI-specific risks, including model bias and discrimination, hallucinations and clinical safety risks, model drift and data quality degradation, data leakage and IP exposure, and inappropriate secondary use of data.
- Define and monitor Key Risk Indicators (KRIs) and Key Control Indicators (KCIs) for AI systems.
- Lead the third‑party/vendor risk program: methodology, tiering, due diligence, gap analysis, remediation SLAs, and performance metrics.
- Extend third-party risk management practices to AI vendors and embedded AI capabilities (e.g., EHR-integrated AI, ambient listening tools, SaaS copilots).
- Evaluate vendors' model transparency and explainability, training data provenance, security and privacy safeguards, and model update and retraining practices.
- Partner with Procurement and Legal to ensure AI-specific contractual safeguards (e.g., data usage restrictions, audit rights, indemnification).
- Own planning and execution for internal and external audits (e.g., SOC 2, HIPAA, HITRUST), including evidence management, control validation, issues tracking, and management responses.
- Interpret and translate evolving AI-related regulatory and enforcement expectations into actionable controls, particularly as they intersect with healthcare regulations.
- Ensure AI use cases comply with patient safety and quality standards, privacy and data-protection obligations, and clinical documentation and auditability requirements.
- Support internal and external audits by producing AI governance artifacts, risk assessments, control evidence, and model documentation.
- Maintain continuous audit readiness through control testing, corrective actions, and compliance dashboards.
- Govern the policy lifecycle (creation, approval, publication, attestation, and exceptions) for security policies, standards, and procedures.
- Produce executive‑level reporting and risk narratives; respond to security‑related inquiries from internal and external stakeholders.
- Define and maintain AI policies, standards, and control objectives aligned to responsible AI principles (fairness, transparency, accountability, safety, and…