AI Governance — Australia


Rolling dossier on AI governance and risk-management expectations for Australian regulated entities (APRA-regulated financial services, plus ASIC and OAIC overlay). Every claim cites a synthesis. Append new sources at the bottom of each section; refactor periodically.

Current state (as of 2026-05-12)

Australian regulators have shifted from principle-based guidance to active supervision of AI risk. The signals driving this:

  • APRA letter to industry on AI (2026-04-30) — the clearest signal yet that AI governance at most regulated entities is lagging adoption, with traditional risk frameworks unfit for AI’s behaviour. AI must be treated as a specific risk domain integrated into the enterprise risk management framework, not “just another technology”. [[2026-05-08-apra-ai-governance]]
  • ASIC open letter (2026-05-08) — to AFS licensees and market participants, calling boards and executives back to first principles on cyber resilience in light of frontier AI models (explicit reference to Anthropic’s Mythos; see [[2026-04-21-firefox-mythos-zero-days]] for vendor-side capability evidence). [[2026-05-08-apra-ai-governance]]
  • ASIC REP 798 “Beware the Gap” — earlier finding of a governance gap across financial services. [[2026-05-08-apra-ai-governance]]
  • OAIC has moved to clarify how existing privacy laws apply to AI. [[2026-05-08-apra-ai-governance]]
  • Operational evidence backing the regulator narrative. CyberCX’s 2026 Threat Report shows Financial and Insurance Services overtook Healthcare as the most-impacted sector (18% vs 12%) for the first time, and cyber extortion overtook BEC as the #1 incident type (26%). The empirical picture aligns with APRA / ASIC’s “act now” framing — but the report itself is silent on APRA, ASIC, OAIC and the Privacy Act, a notable gap for an AU/NZ board audience. [[2026-05-12-cybercx-2026-threat-report]]
  • Quantum cryptography as a paired horizon risk. AFR (11 May 2026) frames quantum computing and frontier-AI offence as a single emerging cyber-threat for major banks, with the card-payments industry preparing post-quantum cryptography (PQC) migration “by 2030”. HSBC’s head of quantum, Philip Intallura, on the record: the quantum-security uplift is “just around the corner”. APRA’s 30 April AI letter is silent on cryptographic agility; the AFR framing opens a distinct substrate-risk axis that CPS 234 / CPS 230 may eventually need to reach. (Partial content — AFR paywall; only standfirst + lede captured.) [[2026-05-11-afr-quantum-banks]]
  • Controls-side empirical evidence joins the incident-side picture. CyberCX’s 2026 Hack Report — companion to its DFIR Threat Report — adds three years of offensive-security data (7,500+ engagements, 70,000+ findings) showing severe-finding rates improving slowly (33.5% → 29.0% from 2023–2025), while AI penetration tests found severe issues at 50% — nearly double the 26% web-app rate. Financial Services has the second-lowest severe-finding rate (22.0%) yet is the most-impacted DFIR sector (18%); CyberCX frames this as a target-value-over-controls-maturity story. The Hack Report is also silent on APRA/ASIC/OAIC — consistent with the DFIR report and now a confirmed editorial pattern. [[2026-05-12-cybercx-2026-hack-report]]

The consolidated message: AI-related risk is a board-level accountability; proportionate, evidenced action is expected now; enforcement is on the table.

Board expectations and directors’ duties

  • APRA expects boards to maintain sufficient AI literacy for strategic direction and effective challenge, oversee AI strategy aligned to risk appetite, oversee monitoring/reporting (including third-party risk), and define triggers that prompt timely action. [[2026-05-08-apra-ai-governance]]
  • APRA treats the gap in board AI capability as itself a governance and control risk. Reliance on vendor materials without independent challenge is flagged. [[2026-05-08-apra-ai-governance]]
  • APRA views AI oversight as a core accountability and part of discharging existing director duties (duty of care and diligence, best interests). [[2026-05-08-apra-ai-governance]]
  • Super trustees face an additional constraint: discretions requiring personal/human involvement cannot be fettered. AI is permitted for administrative/supportive functions but not as a substitute for human involvement in key trustee discretions. [[2026-05-08-apra-ai-governance]]

FAR (Financial Accountability Regime) accountability

  • Clarity of executive ownership across the AI lifecycle is a regulatory expectation. Accountable persons must understand how AI is used within their remit, the specific risks introduced, and ensure those risks stay within Board-approved appetite. [[2026-05-08-apra-ai-governance]]
  • An AI governance framework should tie together the risk management framework, AI use policy, and AI system register; effectiveness depends on the three lines of defence being equipped to operationalise it. [[2026-05-08-apra-ai-governance]]
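
A purely illustrative sketch of what an AI-system-register entry could capture to make FAR ownership testable — the schema and field names below are assumptions, not an APRA-prescribed format:

```python
# Illustrative only: a minimal register entry tying each AI system to a FAR
# accountable person and to the Board-approved risk appetite (RAS).
# Field names are assumptions, not an APRA-prescribed schema. Python 3.10+.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRegisterEntry:
    system_id: str                            # stable internal identifier
    name: str
    use_case: str                             # e.g. "credit decisioning"
    accountable_person: str                   # FAR accountable person for this remit
    risk_tier: str                            # per the entity's own tiering, e.g. "high"
    lifecycle_stage: str                      # design / pilot / production / retired
    third_party_providers: list[str] = field(default_factory=list)
    upstream_dependencies: list[str] = field(default_factory=list)  # fourth parties, training data
    within_risk_appetite: bool = True         # attested against the RAS
    last_assurance_review: date | None = None

def overdue_for_assurance(e: AISystemRegisterEntry, max_age_days: int = 90) -> bool:
    """Stale assurance is one concrete 'trigger prompting timely action'
    a board could define over the register."""
    if e.last_assurance_review is None:
        return True
    return (date.today() - e.last_assurance_review).days > max_age_days
```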

Risk classes reshaped by AI

AI is changing the risk profile across virtually every risk class — financial, operational, conduct, strategic. Specifically:

  • Financial — credit decisioning, algorithmic trading, insurance underwriting. [[2026-05-08-apra-ai-governance]]
  • Operational — cyber, data privacy, model integrity, third-party supplier risk, fraud and scams (internal and across distribution chain). [[2026-05-08-apra-ai-governance]]
  • Conduct — fairness/equity/transparency in credit approvals, claims, super benefit determinations, treatment of vulnerable customers. [[2026-05-08-apra-ai-governance]]
  • Strategic — AI accelerates business-model change for the entity and its counterparties. [[2026-05-08-apra-ai-governance]]

Traditional point-in-time, sample-based assurance is ill suited to probabilistic models that learn, adapt and degrade. APRA expects integrated, continuous assurance approaches. [[2026-05-08-apra-ai-governance]]
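
What “integrated, continuous assurance” can mean in practice is easiest to see in miniature: the sketch below runs a population-stability-index (PSI) drift check on every scoring batch rather than at review time. PSI, the quantile binning and the 0.2 alert threshold are conventional industry choices used here purely for illustration — APRA does not prescribe a metric, and real assurance would cover far more than score drift.

```python
# One continuous-assurance control in miniature: population stability index
# (PSI) per scoring batch against a frozen baseline. The metric, binning and
# 0.2 threshold are conventional industry choices, not APRA requirements.
import numpy as np

def psi(baseline: np.ndarray, current: np.ndarray, n_bins: int = 10) -> float:
    """PSI = sum((cur% - base%) * ln(cur% / base%)) over baseline-quantile bins."""
    edges = np.quantile(baseline, np.linspace(0, 1, n_bins + 1))

    def bin_pct(x: np.ndarray) -> np.ndarray:
        idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)
        return np.bincount(idx, minlength=n_bins) / len(x)

    base_pct = np.clip(bin_pct(baseline), 1e-6, None)   # avoid log(0)
    cur_pct = np.clip(bin_pct(current), 1e-6, None)
    return float(np.sum((cur_pct - base_pct) * np.log(cur_pct / base_pct)))

# Run on every production batch, not at annual review time:
rng = np.random.default_rng(0)
baseline_scores = rng.beta(2.0, 5.0, 10_000)   # stand-in: validation-set scores
todays_scores = rng.beta(2.5, 5.0, 2_000)      # stand-in: one batch of live scores
value = psi(baseline_scores, todays_scores)
print(f"PSI = {value:.3f} -> "
      f"{'drift alert: escalate per Board-defined trigger' if value > 0.2 else 'within tolerance'}")
```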

Cyber and information security

  • Threat side: AI is increasing the volume and sophistication of attacks — prompt injection, data leakage, insecure integrations, agent misuse, faster coordinated attacks. [[2026-05-08-apra-ai-governance]]
  • Defensive side: APRA observed gaps in IAM, patching/vulnerability remediation, testing of AI systems and AI-generated code, and use of enterprise AI tools outside approved control frameworks. [[2026-05-08-apra-ai-governance]]
  • Control posture: many entities rely on policy and detective controls rather than enforceable technical / preventative controls. [[2026-05-08-apra-ai-governance]]
  • Counterpoint on the defender side: Mozilla’s reported use of Claude Mythos Preview to find 271 vulnerabilities in Firefox (fixed in Firefox 150) is the first public evidence in this KB that the same class of frontier model can also work for defenders. Open governance question: when does failing to use available AI-driven defensive analysis itself become a control gap? [[2026-04-21-firefox-mythos-zero-days]]
  • Operational evidence that APRA’s “preventative controls > policy” critique is correct: CyberCX’s 2025 casebook found that “every BEC incident where traditional MFA was enforced … involved session hijacking” — i.e. policy-level MFA mandates without phishing-resistant technical implementation produced zero defensive value in BEC. For AU regulated entities, “MFA enforced” can no longer be a credible answer to a board control question (see the session-binding sketch after this list). [[2026-05-12-cybercx-2026-threat-report]]
  • Operational evidence backing the Financial Services exposure narrative: Financial & Insurance Services was the most-impacted sector in CyberCX’s 2025 data (18%, up from ~11%) — the empirical correlate of APRA’s “highly regulated, complex digital environments, third-party infrastructure” framing. [[2026-05-12-cybercx-2026-threat-report]]
  • Offensive GenAI is no longer hypothetical in AU/NZ incident response. CyberCX recorded a 2025 case of a threat actor using GenAI to write bespoke scripts and payloads — the first such observation in its DFIR practice. The APRA “AI accelerates attacks” thesis now has supporting incident-level evidence. [[2026-05-12-cybercx-2026-threat-report]]
  • AI data spills as a DFIR engagement category are the operational analogue of APRA’s “enterprise AI tools used outside approved control frameworks” gap — staff pasting sensitive data into public AI portals, often with no DLP and no enterprise licensing, making the spill unquantifiable. Policy-only controls fail by construction here. [[2026-05-12-cybercx-2026-threat-report]]
  • The “soft chewy centre” persists empirically. CyberCX’s controls-side data: Active Directory assessments returned a severe finding 78% of the time in 2025; internal network pen-tests 71%. External pen-tests improved sharply (22.3%). This is the operational picture behind APRA’s “preventative > policy / detective controls” critique — perimeter hardening is happening, internal-network and identity-fabric hardening is not. [[2026-05-12-cybercx-2026-hack-report]]
  • AI systems fail security testing at twice the web-app rate. 50% of CyberCX’s 2025 AI pen-tests found a severe vulnerability vs 26% for web-application pen-tests. CyberCX’s framing: AI systems are “deployed to production at a lower level of security maturity than other systems” because traditional threat-modelling-at-design and pen-test-before-deployment patterns “are often not fit for the pace and urgency of AI development.” This is now the strongest single AU/NZ data point behind APRA’s “controls lag adoption” claim. [[2026-05-12-cybercx-2026-hack-report]]
  • Social engineering still wins 77% of the time in CyberCX’s data — third-most-successful service after AD assessments and DDoS testing. AI-driven voice/video deepfakes are an emerging force multiplier; CyberCX’s case study describes a deepfaked CEO voice failing against a hardened service-desk identity-verification process, with the editorial qualifier “most other organisations have not undertaken similar reviews of their processes and are unlikely to be as resilient.” APRA’s “controls lag” framing extends to process design, not just technical controls. [[2026-05-12-cybercx-2026-hack-report]]
  • “Vibe-coding to production” is a stated AU/NZ engagement category, not a hypothesis. “CyberCX has conducted architecture reviews and penetration tests for a significant number of systems that were built primarily by AI. Often this is by organisations that have done no internal development prior.” This is the development-side mirror of APRA’s “enterprise AI tools outside approved control frameworks” gap and is arguably more dangerous: orgs shipping AI-built systems without ever having had an AppSec discipline at all. [[2026-05-12-cybercx-2026-hack-report]]
  • The Financial Services “controls maturity ≠ exposure” paradox is now empirically established. Hack Report: Financial Services & Insurance has the second-lowest severe-finding rate (22.0%) of any sector. DFIR Threat Report: same sector is the most-impacted (18% of incidents). CyberCX: “financially motivated threat actors select targets not just by the prevalence of vulnerabilities, but on their ability to monetise attacks. Financial services are an attractive target simply because that is where the money is.” APRA’s supervisory focus on the sector is predicated on target value, not relative controls weakness, and that distinction now has paired CyberCX datasets behind it. [[2026-05-12-cybercx-2026-hack-report]] [[2026-05-12-cybercx-2026-threat-report]]
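
On the MFA finding above, a minimal sketch of the gap between a policy-level control (“MFA enforced”) and a preventative technical one: bind the issued session to context captured at authentication, so a stolen cookie replayed from attacker infrastructure fails even though MFA was passed at login. The fingerprint scheme here is an illustrative assumption; the stronger fixes are phishing-resistant authenticators (e.g. WebAuthn) and proper token binding.

```python
# Illustrative preventative control against the session-hijacking pattern
# CyberCX describes: bind the session token to coarse client context captured
# at MFA time and force re-authentication when it changes. Sketch only.
import hashlib, hmac, secrets

SERVER_KEY = secrets.token_bytes(32)        # per-deployment secret (assumption)

def fingerprint(user_agent: str, ip_prefix: str) -> bytes:
    """Coarse client context captured when MFA succeeds. Deliberately coarse
    (IP prefix, not full IP) to tolerate benign mobility."""
    return hmac.new(SERVER_KEY, f"{user_agent}|{ip_prefix}".encode(),
                    hashlib.sha256).digest()

def issue_session(user_agent: str, ip_prefix: str) -> dict:
    return {"token": secrets.token_urlsafe(32),
            "bound_to": fingerprint(user_agent, ip_prefix)}

def validate(session: dict, user_agent: str, ip_prefix: str) -> bool:
    """A cookie replayed from attacker infrastructure fails the binding
    check even though 'MFA was enforced' at login."""
    return hmac.compare_digest(session["bound_to"],
                               fingerprint(user_agent, ip_prefix))

s = issue_session("Mozilla/5.0 ...", "203.0.113")
assert validate(s, "Mozilla/5.0 ...", "203.0.113")      # legitimate reuse
assert not validate(s, "curl/8.5", "198.51.100")        # hijacked-session replay
```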

Third-party and supply-chain risk

  • Supplier risk is increasingly pronounced. AI is embedded in platforms; supply chains are complex and opaque; many organisations depend on a small number of providers with limited upstream visibility (training data, fourth parties). [[2026-05-08-apra-ai-governance]]
  • Contractual arrangements lag practice (audit rights, model changes, data handling). [[2026-05-08-apra-ai-governance]]
  • APRA expects entities to map dependencies, improve contractual protections, maintain visibility over model behaviour, and actively manage concentration risk — as required under CPS 230 operational-resilience and supplier-risk obligations. [[2026-05-08-apra-ai-governance]]
  • Source-code-management and CI/CD platforms are now a measured AU/NZ attack-surface. CyberCX pen-testing requests for SCM and CI/CD platforms more than doubled in 2025; many run on default configuration with little hardening. “Nearly every major supply-chain incident in the last 12 months involved stolen credentials from a developer or CI/CD system.” The CPS 230 supplier-risk frame now has to extend inward — your own SCM/CI-CD stack is itself a supplier-grade attack surface against which APRA’s contractual and assurance expectations apply. [[2026-05-12-cybercx-2026-hack-report]]
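
A narrow, hedged sketch of what treating your own SCM stack as a supplier-grade attack surface can look like operationally: flag repositories whose default branch has no branch-protection rule. GitHub-specific; ORG and TOKEN are placeholders, pagination is ignored, and this is one control among many (secrets handling, runner isolation and credential hygiene matter at least as much).

```python
# Baseline SCM-hardening check: list org repos whose default branch carries
# no branch-protection rule. GitHub-specific illustration; ORG/TOKEN are
# placeholders and pagination (Link headers) is omitted for brevity.
import requests

ORG, TOKEN = "example-org", "ghp_..."      # placeholders (assumptions)
H = {"Authorization": f"Bearer {TOKEN}", "Accept": "application/vnd.github+json"}

repos = requests.get(f"https://api.github.com/orgs/{ORG}/repos?per_page=100",
                     headers=H, timeout=30).json()
for repo in repos:
    branch = repo["default_branch"]
    r = requests.get(f"https://api.github.com/repos/{ORG}/{repo['name']}"
                     f"/branches/{branch}/protection", headers=H, timeout=30)
    if r.status_code == 404:               # GitHub returns 404 for an unprotected branch
        print(f"UNPROTECTED default branch: {repo['name']}@{branch}")
```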

Embedded AI: governance via silent software updates

A distinct supply-chain failure mode: AI capability arriving at the entity not through a procurement decision but through ordinary platform/browser/OS updates, with no opt-in moment.

  • Concrete example: Google Chrome ships Gemini Nano (a ~4 GB local model file, weights.bin, in OptGuideOnDeviceModel/) to endpoints by default; opting out is gated behind an enterprise policy (GenAILocalFoundationalModelSettings), and there is no opt-in moment. If deleted, the model file auto-reinstalls. [[2026-05-10-vizza-chrome-silent-llm]]
  • Governance implication: procurement-style AI controls (approved-tool lists, ethics committees, vendor-risk checklists) only bind deliberate adoption. They do not see AI shipped via routine software updates. Tony Vizza’s framing: “AI governance may ultimately only be as strong as the next silent software update.” [[2026-05-10-vizza-chrome-silent-llm]]
  • “Shadow AI” generalises Shadow IT: not employees bringing in unsanctioned tools, but endpoints and SaaS platforms quietly acquiring AI capabilities themselves with each update. The footprint is dynamic, not static. [[2026-05-10-vizza-chrome-silent-llm]]
  • Visibility precedes control: the binding constraint is detection, not permission. An AI policy that does not know an AI runtime exists cannot be enforced against it. Endpoint AI inventory needs to become a continuous control, not an annual exercise (see the inventory sketch after this list). [[2026-05-10-vizza-chrome-silent-llm]]
  • Connects directly to CPS 230 contractual uplift: vendor change-management contracts need explicit AI clauses (notification of new AI features, default-state, opt-out availability, data-handling, retention). [[2026-05-10-vizza-chrome-silent-llm]]
  • Browser/OS/MDM policy is now an AI-governance control surface: AI policy that does not extend into MDM, group policy, and browser-policy artefacts will not bind reality. BYOD remains a gap even where managed endpoints are policy-controlled. [[2026-05-10-vizza-chrome-silent-llm]]
  • Privacy-impact reviews need a “default-on” branch: APP 6 / APP 11 considerations apply equally whether the AI was deliberately adopted or arrived via update; OAIC’s existing-laws-apply posture means there is no grace period for embedded AI. [[2026-05-10-vizza-chrome-silent-llm]]
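
A minimal endpoint-inventory sketch of the “visibility precedes control” point, for a Windows fleet: check whether the on-device model weights are present and whether the opt-out policy is set. Paths follow the Vizza/Register description; the registry key is the standard Chrome managed-policy location, and the policy’s value semantics should be verified against Google’s enterprise-policy documentation before acting on them.

```python
# Windows-only sketch (uses winreg). Paths per the Vizza/Register description;
# verify policy-value semantics against Google's enterprise-policy docs.
import winreg
from pathlib import Path

def find_weights() -> list[Path]:
    user_data = Path.home() / "AppData/Local/Google/Chrome/User Data"
    return list(user_data.glob("OptGuideOnDeviceModel/**/weights.bin"))

def policy_value():
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                            r"SOFTWARE\Policies\Google\Chrome") as key:
            return winreg.QueryValueEx(key, "GenAILocalFoundationalModelSettings")[0]
    except FileNotFoundError:
        return None                          # no managed policy present

weights = find_weights()
print(f"on-device model present: {bool(weights)} ({len(weights)} file(s))")
print(f"GenAILocalFoundationalModelSettings: {policy_value()}")
# Inventory, not remediation: deleting weights.bin achieves nothing (it
# auto-reinstalls); only the enterprise policy binds the behaviour.
```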

Cryptographic agility and post-quantum migration

A distinct substrate risk: even if AI-related controls are sound, the cryptography under the banking system has a finite shelf life against future cryptographically relevant quantum computers (CRQC). “Harvest now, decrypt later” means the exposure starts before CRQC arrival.
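
The “exposure starts before CRQC arrival” logic is commonly formalised as Mosca’s inequality (our framing, not the AFR’s): if x (years the data must stay confidential) plus y (years the migration takes) exceeds z (years until a CRQC exists), data encrypted today is already exposed. A worked example with illustrative numbers:

```python
# Mosca's inequality: worry iff x + y > z. All three durations below are
# illustrative assumptions, not forecasts.
x = 7   # years the data must stay confidential (e.g. card/KYC records)
y = 4   # years the PQC migration takes (2026 start -> "by 2030" target)
z = 9   # years until a cryptographically relevant quantum computer (unknown)

overhang = (x + y) - z
print(f"x + y = {x + y} vs z = {z}: "
      f"{'EXPOSED, overhang ' + str(overhang) + ' year(s)' if x + y > z else 'ok'}")
# With these numbers, data encrypted just before migration completes is still
# confidential two years after a CRQC could exist -- exposure precedes CRQC.
```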

  • AU banks are publicly preparing PQC migration for the card-payments system by 2030, per the AFR’s framing of industry plans. HSBC’s head of quantum, Philip Intallura, is on the record that the quantum-security uplift is “just around the corner”. [[2026-05-11-afr-quantum-banks]]
  • AFR pairs quantum and frontier-AI offence as a single narrative for bank boards; APRA’s 30 April letter does not engage cryptographic agility, and CyberCX’s 2026 Threat Report does not mention quantum. The pairing is editorially useful but technically distinct — these are non-interchangeable horizon risks. [[2026-05-11-afr-quantum-banks]]
  • Open governance question: whether APRA issues a CPS 234 / CPS 230 letter analogous to the AI letter, naming cryptographic agility and PQC migration expectations. No such signal in the KB yet. [[2026-05-11-afr-quantum-banks]]

Diagnostic questions for boards / accountable persons

Six questions MinterEllison effectively poses (paraphrased) to test current state: AI risk literacy, alignment to Risk Appetite Statement, independence from vendor narrative, super-trustee discretion boundaries, FAR-accountable-person evidence, and three-lines technical capability for continuous AI assurance including third/fourth-party dependencies under CPS 230. [[2026-05-08-apra-ai-governance]]

Open threads to watch

  • How APRA exercises “stronger supervisory action” in practice — first enforcement signals
  • Whether OAIC issues AI-specific guidance vs. continuing to clarify existing laws
  • ASIC follow-through on the 2026-05-08 cyber letter (any thematic review?)
  • Industry response to CPS 230 + AI: contract uplift, concentration-risk mapping
  • Whether APRA’s supervision extends to embedded AI (AI an entity runs by default without ever choosing) — not just deliberately adopted AI [[2026-05-10-vizza-chrome-silent-llm]]
  • Telemetry / data-flow profile of default-on endpoint models such as Gemini Nano — currently not documented in vendor materials captured in this KB
  • Whether APRA / ASIC issue an explicit cryptographic-agility / PQC-migration expectation analogous to the 30 April AI letter (or whether quantum stays within CPS 234 implicitly) [[2026-05-11-afr-quantum-banks]]
  • Whether the AU majors publicly stand up “head of quantum” or equivalent cryptographic-agility roles (HSBC has done so internationally) [[2026-05-11-afr-quantum-banks]]
  • Whether AU regulators name MCP-specific authentication risk as a control expectation, given CyberCX’s observation that “data can flow bi-directionally between servers and clients … creating a rise in authentication-related issues with MCP implementations.” No regulator letter in the KB has named MCP. [[2026-05-12-cybercx-2026-hack-report]]
  • Whether “vibe-coded” production systems (AI-built by orgs with no prior internal development) become a stated regulator risk class. APRA’s “AI used outside approved control frameworks” gap is about consumption; the Hack Report records the production mirror. [[2026-05-12-cybercx-2026-hack-report]]

Sources

  • [[2026-05-08-apra-ai-governance]] — MinterEllison synthesis of APRA’s 2026-04-30 letter and ASIC’s 2026-05-08 letter
  • [[2026-04-21-firefox-mythos-zero-days]] — Mozilla on Claude Mythos Preview and the offence-defence rebalance (cross-referenced for cyber section)
  • [[2026-05-10-vizza-chrome-silent-llm]] — Tony Vizza commentary on The Register’s reporting that Chrome ships Gemini Nano by default; canonical “embedded AI / silent update” example for the supply-chain section
  • [[2026-05-12-cybercx-2026-threat-report]] — CyberCX 2026 annual threat report; operational evidence on sectoral exposure, MFA bypass, first offensive GenAI observation, AI data spills
  • [[2026-05-11-afr-quantum-banks]] — AFR (James Eyers) on AU banks’ paired quantum + AI cyber-threat exposure and the 2030 PQC card-payments target; partial-content capture (paywall)
  • [[2026-05-12-cybercx-2026-hack-report]] — CyberCX STA 2026 Hack Report; three-year controls-side dataset, AI pen-test severe-finding rate 2x WAPT, Financial Services controls/exposure paradox, MCP-as-attack-surface, vibe-coding-to-production

See also