APRA Sharpens Expectations on AI Governance and Risk Management — Synthesis

Source article: MinterEllison Technical Update, “APRA sharpens expectations on AI governance and risk management” (8 May 2026). Authors: Ashley Rockman, Jason McQuillen, Mark Teys, Siobhan Doherty, Sam Burrett, Chelsea Gordon. Underlying regulatory action: APRA letter to industry on AI, 30 April 2026.


Headline message

APRA’s 30 April 2026 letter is, in MinterEllison’s view, the clearest signal yet that AI governance at most regulated entities is lagging adoption, and that traditional risk frameworks aren’t built for the way AI behaves. The letter marks a transition from principles-based guidance to active supervision of AI risk. Entities must treat AI as a specific risk domain — not just another technology — and integrate it into the enterprise risk management framework, because AI is materially changing the risk profile across most risk classes.

APRA Member Therese McCarthy Hockey, quoted in the article:

“Where entities fail to adequately identify, manage or control AI risks in a manner proportionate to their size, scale and complexity, we will take stronger supervisory action and, where appropriate, pursue enforcement.”

Key takeouts (as framed by MinterEllison)

  • AI adoption is accelerating across financial services, but governance, risk management, assurance and security practices are not keeping pace. APRA has identified material gaps that increase operational, cyber and compliance risk exposure.
  • Boards and executives are expected to strengthen AI competency, align AI strategy with risk appetite, and explicitly measure the impact of AI on every risk class — with robust oversight across the full AI lifecycle, including third-party dependencies and critical operations.
  • APRA will increase supervisory scrutiny. The Board, accountable executives and all three lines of defence must understand how AI changes the risk profile and respond urgently to keep risks within appetite. Failure will lead to regulatory intervention or enforcement.

Wider regulatory context the article situates this in

  • ASIC’s “Beware the Gap” report (REP 798) — identified a similar ‘governance gap’ across financial services.
  • OAIC — has moved to clarify how existing laws apply to AI.
  • ASIC open letter, 8 May 2026 — to AFS licensees and market participants, calling on boards and executives to act now, with discipline and urgency, to strengthen cyber resilience fundamentals in light of the step-change in capability presented by frontier AI models (the article specifically references Anthropic’s Mythos). ASIC, like APRA, expects cyber risk management to be demonstrably effective, evidence-based and proportionate.

Read together, the message is unambiguous: AI-related risk is a board-level accountability, and the time for proportionate, evidenced action is now.


1. Board expectations and directors’ duties

APRA expects boards to lead AI strategy and risk-management oversight, enabled by fit-for-purpose AI governance. While interest in AI’s strategic potential is high, many boards are still developing the technical literacy needed for effective oversight. At a minimum, APRA expects boards to:

  • maintain sufficient understanding and literacy of AI to set strategic direction and provide effective challenge and oversight;
  • oversee an AI strategy consistent with the entity’s risk appetite and tolerance settings;
  • oversee effective monitoring and reporting (including third-party risk); and
  • ensure clearly defined triggers, aligned to resilience objectives, that enable timely action when AI is not operating as expected.

For many entities this requires uplift in board capability — not just on AI use, but on how AI changes the risk profile across the organisation relative to the Board-approved Risk Appetite Statement.

The letter warns against reliance on vendor materials without sufficient independent challenge. Low AI literacy creates a risk that key issues — such as model unpredictability and impacts on critical operations — are not fully understood. APRA treats the gap in board AI capability as itself a governance and control risk.

The legal implication MinterEllison draws: APRA views AI oversight as a core accountability and part of discharging existing director duties, including the duty to act with due care and diligence and in the best interests of the organisation.

Superannuation trustees — additional dimension. Trustees are required to exercise certain discretions personally (with human involvement) and cannot fetter those discretions. AI may be used for administrative or supportive functions (e.g. collecting and collating information), but trustee boards face limitations on approving AI as a substitute for human involvement in key discretions.

Practical instruction from the authors: those briefing boards must not present commercially optimistic narratives about AI’s benefits without giving equal weight to the risks and the controls that mitigate them. Any board consideration of AI should address the associated risks directly.

2. FAR accountability

APRA’s observations raise critical considerations under the Financial Accountability Regime. Clear executive ownership and accountability across the AI lifecycle is an explicit regulatory expectation. AI changes the risk profile across most key functions and critical operations, so accountable persons must understand, on an ongoing basis, how their functions are impacted and what response is required.

In practice, accountable persons must:

  • understand how AI is used within their remit and the specific risks that use introduces; and
  • ensure those risks are monitored and managed within the Board-approved risk appetite and in compliance with the regulatory framework.

Without this, effective risk management and assurance — and FAR compliance — will be difficult to demonstrate.

This points to the need for an AI governance framework that ties together the relevant artefacts (risk management framework, AI use policy, AI system register), sets over-arching structures and principles, and shows how they fit. But a framework alone is not enough — its effectiveness rests on implementation, particularly whether the three lines of defence are equipped to operationalise it.

3. Risk management, assurance and the three lines of defence

APRA observed a tendency to treat AI risk as “just another technology” — a framing that understates AI’s distinct characteristics: adaptive behaviour, probabilistic outputs, bias risk and heightened data and privacy exposure. These characteristics are changing the risk profile across virtually every risk class:

  • Financial risk — credit risk from automated credit decisioning, algorithmic trading, insurance underwriting.
  • Operational risk — cyber, data privacy, model integrity, third-party supplier risk, fraud and scams (within the organisation and across the distribution chain).
  • Conduct risk — fairness, equity and transparency in credit approvals, claims management, superannuation benefit determinations and the treatment of vulnerable customers.
  • Strategic risk — AI is accelerating the pace of business-model change for the entity itself, and for the customers and suppliers it depends on.

Traditional “point in time” and sample-based assurance methods are ill suited to probabilistic models that learn, adapt and degrade over time.

The expectation:

  • Line 1 risk owners must understand how AI changes the risks they own, and ensure risk management and internal controls evolve accordingly.
  • The Line 2 risk function must be able to identify, measure, manage and report aggregate risk exposure across every risk class — before and after the impact of AI.
  • For assurance, APRA expects integrated, continuous approaches supported by appropriate skills, tools and lifecycle-based risk assessments. The article notes some leading global banks are already deploying AI agents to independently monitor, assure and test outcomes from agentic AI workflows.

All three lines of defence will need sufficient technical capability and capacity, with continuous collaboration between assurance functions and those accountable for AI strategy and governance.
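To make the contrast with “point in time” sampling concrete: continuous assurance of a probabilistic model typically means routinely comparing the live output distribution against a baseline and triggering review when drift exceeds a tolerance. The sketch below uses the Population Stability Index (PSI), a common drift metric in model risk management — it is one illustrative technique, not something APRA or the article mandates, and the 0.2 threshold is an industry rule of thumb, not a regulatory figure:

```python
import math

def psi(baseline: list[float], live: list[float], bins: int = 10) -> float:
    """Population Stability Index between a baseline and a live sample of
    model outputs. Higher values indicate the output distribution has drifted."""
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def histogram(sample: list[float]) -> list[float]:
        counts = [0] * bins
        for x in sample:
            idx = min(max(int((x - lo) / width), 0), bins - 1)
            counts[idx] += 1
        # Small floor avoids log(0) for empty bins
        return [max(c / len(sample), 1e-4) for c in counts]

    p, q = histogram(baseline), histogram(live)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

# Rule-of-thumb threshold: PSI > 0.2 is often treated as significant drift
baseline = [i / 100 for i in range(100)]          # roughly uniform scores
drifted  = [0.8 + i / 500 for i in range(100)]    # scores now clustered high
print(psi(baseline, baseline) < 0.2)  # prints True: stable distribution
print(psi(baseline, drifted) > 0.2)   # prints True: drift detected
```

Run continuously against production outputs, a check like this gives Line 1 an enforceable trigger (the “clearly defined triggers” APRA asks boards to oversee) rather than relying on periodic sampling of a model that may have degraded between samples.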

4. Cyber and information security

A central theme of APRA’s letter is the changing cyber landscape:

  • Threat side — AI is increasing both the number and sophistication of attack pathways: prompt injection, data leakage, insecure integrations, misuse of autonomous agents. AI also enables faster, more coordinated attacks, compressing response timeframes.
  • Defensive side is lagging — APRA identified gaps in identity and access management, delays in patching and vulnerability remediation, insufficient testing of AI systems and AI-generated code, and use of enterprise AI tools outside approved control frameworks.
  • Control posture — many entities rely on policy and detective controls rather than enforceable technical restrictions or preventative controls.

5. Third-party and supply-chain risk

Supplier risk is becoming increasingly pronounced. AI capabilities are embedded in platforms and services, creating complex, opaque supply chains. Many organisations depend on a small number of providers, with limited visibility over upstream models, training data and fourth-party dependencies.

Contractual arrangements lag practice — often failing to address audit rights, model changes or data handling. The visibility gap widens as entities increasingly rely on technology providers, most of which are themselves using AI.

APRA expects entities to:

  • map dependencies;
  • improve contractual protections;
  • maintain visibility over model behaviour; and
  • actively manage concentration risk.

All of these steps are required to comply with the explicit operational resilience and supplier-risk requirements of CPS 230.

ASIC’s 8 May 2026 letter reinforces this as urgent and critical: frontier AI models such as Anthropic’s Mythos will test existing controls more often and under greater pressure, and boards and executives are called back to first principles on cyber resilience with greater focus and intensity than ever before.

6. Where to from here — MinterEllison’s call to action

Taken together, APRA’s observations send a clear message: AI presents an unprecedented opportunity, but it also introduces material risk that must be managed with controls matched to the unique characteristics of AI systems. Where entities fail, APRA will increase supervisory scrutiny and, where necessary, pursue enforcement.

The immediate priority for boards and executives is to ensure governance, security and assurance practices are keeping pace with AI adoption. That has two parts:

  1. Specific accountability for AI risk — ownership and management within appetite across the full AI lifecycle.
  2. Integration of AI’s impact into the existing risk management framework and the three lines of defence model — so AI is not handled as a parallel track, but as a domain that reshapes every risk class.

Reading guide for boards and accountable persons

If you sit on a regulated entity’s board, executive committee or risk committee, the article effectively poses six questions to test against your current state:

  1. Can our directors describe, in their own words, how AI changes our risk profile and what triggers would prompt board action?
  2. Is our AI strategy demonstrably aligned to a Board-approved Risk Appetite Statement, with measurable thresholds?
  3. Are board materials on AI grounded in independent challenge — not vendor narrative?
  4. For super trustees: where is the line between AI-supported administration and trustee discretions that must remain human?
  5. Do our FAR-accountable persons each understand the AI in scope of their accountabilities, and can they evidence ongoing oversight?
  6. Are Lines 1, 2 and 3 technically capable of identifying, measuring and assuring AI risk continuously — including across third-party and fourth-party dependencies and within CPS 230 operational-resilience obligations?

If the honest answer to any of these is “not yet”, the article’s conclusion is that the time for proportionate, evidenced action is now — before APRA tests it.


Synthesis prepared for study purposes from the MinterEllison Technical Update dated 8 May 2026. Not legal advice.


  • Topic dossier: ai-governance-au
  • Entities: apra · asic · oaic
  • Related instruments mentioned: CPS 230 (operational resilience), Financial Accountability Regime (FAR), ASIC REP 798 “Beware the Gap”