New framework outlines best practice for AI in higher education — Synthesis
Source: Curtin University media release, “New framework outlines best practice for AI in higher education” (2025-12-08).
Authors: Curtin University media team (announcing work led by external researchers).
Underlying action / event: Publication of the Australian Framework for Artificial Intelligence in Higher Education by the Australian Centre for Student Equity and Success (ACSES, hosted at Curtin), with co-authorship from researchers at six Australian universities and links to the Australian National AI in Schools Taskforce. PDF: assets.acses.edu.au/app/uploads/2025/12/Lodge-et-al-2025-Australian-Framework-for-Artificial-Intelligence-in-Higher-Education.pdf.
Retrieval caveat. Direct WebFetch on the Curtin URL, the ACSES syndication, the Curtin research subdomain, the Mirage News reprint, and the Wayback Machine all returned 403 from this sandbox. The synthesis below is reconstructed from WebSearch snippets and from secondary coverage (Leon Furze’s “Schools to Universities” unpacking post, the GenAI:N3 summary, and a UNIL commentary). One verbatim quote is reliably attested across multiple snippets; remaining attributions are paraphrased and marked as such. The underlying framework PDF should be retrieved and re-synthesised if higher fidelity is needed.
Headline message
The Australian Centre for Student Equity and Success (ACSES), hosted at Curtin University, has published a national framework that positions itself as the higher-education counterpart to the existing schools framework. It is built around seven guiding principles and offered as a roadmap for “ethical, equitable, and effective” deployment of AI, explicitly including generative and agentic AI, across Australian universities. The central editorial choice is to lead with equity: lead author Jason Lodge (UQ) frames the document around the risk that AI integration will amplify existing digital divides if institutions do not act collectively.
Key takeouts
- Sector-wide rather than institutional. Lodge’s central argument in the press release is that AI challenges cannot be solved by any single university and that the sector must “share the responsibility of AI innovation rather than competing.” The framework is positioned as the artefact that lets that happen.
- Seven principles, not three. The framework’s structure is: (1) human-centred education, (2) inclusive implementation, (3) ethical decision-making, (4) Indigenous knowledges, (5) ethical development, (6) adaptive skills, (7) evidence-informed innovation. Equity is explicitly the connective thread.
- Indigenous knowledges as a standalone principle. Not bundled inside “inclusion” or “ethics” — given its own pillar, covering Indigenous data sovereignty and the right of Indigenous peoples to control how cultural heritage and knowledge are represented in AI systems. This is a notable structural choice relative to comparable international frameworks.
- Ethical decision-making extends FATE with contestability. The framework builds on the well-known Fairness / Accountability / Transparency / Ethics formulation by adding contestability — i.e. students and staff need a route to challenge AI-driven decisions, not just visibility of them.
- Adaptive skills over prompt engineering. The framework treats narrow technical proficiency (e.g. “prompt engineering”) as low-durability and instead prioritises students’ ability to monitor and adapt their own learning. Stated rationale: training in current tool craft is unlikely to age well over five years; critical judgement and reflexivity will.
- Evidence-informed innovation creates a reporting expectation. Institutions are expected not merely to deploy AI but to evaluate and share results, contributing to a sector knowledge base. This pushes responsibility for evidence upstream into the institution rather than waiting for external research.
- Continuity with the schools framework. The work is explicitly aligned with the Australian Framework for Generative Artificial Intelligence in Schools and is endorsed by the Australian National AI in Schools Taskforce — i.e. a deliberate K–12 → tertiary handover, not a parallel track.
Wider context
The Australian higher-education sector has been operating without a sector-wide AI framework since ChatGPT’s late-2022 release. Universities have produced their own AI policies (Curtin’s own AI fact sheet, UQ guidance, etc.), and a few governance pieces have looked specifically at research integrity — e.g. the Macquarie-led A University Framework for the Responsible Use of Generative AI in Research (Smith et al., 2024) — but the new ACSES document is the first to claim sector-wide scope across teaching, research and operations.
Three contextual contrasts are worth noting:
- Versus financial-services AI governance. APRA’s 2026-04-30 letter to industry shifted Australian regulated entities to active supervision of AI risk (see [[2026-05-08-apra-ai-governance]]). The ACSES framework is by contrast voluntary, sector-led and unenforced. There is no regulator standing behind it; adherence depends on institutional self-binding and reputational pressure. The framework is best understood as a coordination device, not a compliance instrument.
- Versus the schools framework. The 2023 Australian Framework for Generative AI in Schools (published by the federal Department of Education) was government-led, with state and territory buy-in built in. The higher-ed version comes from a Curtin-hosted research centre with co-authors from six universities: institutionally lighter, but with the credibility of being researcher-authored rather than top-down.
- Versus international peers. Comparable frameworks (UNESCO, AACSB’s Human-Centric AI-First Teaching, CSU’s ETHICAL principles) generally do not give Indigenous data sovereignty its own pillar, nor explicitly emphasise contestability. Both are recognisable as deliberately Australian editorial choices.
Section-by-section breakdown
1. Publisher and authorship
The framework is published by ACSES (Curtin University) in collaboration with the Australian National AI in Schools Taskforce. Lead author Professor Jason Lodge is at the University of Queensland; co-authors are Professor Matt Bower (Macquarie), Professor Kalervo Gulson (Sydney), Professor Michael Henderson (Monash), Associate Professor Christine Slade (UQ) and Associate Professor Erica Southgate (Newcastle). PDF citation: Lodge et al. (2025), Australian Framework for Artificial Intelligence in Higher Education. The Curtin media release was issued 8 December 2025.
2. The equity framing
The single verbatim quote attested across multiple snippet sources is Lodge’s: “Our central focus with this Framework is equity: we cannot allow AI integration to amplify existing digital divides.” This is the editorial line the media release leads with. Paraphrased: Lodge argues the sector must collaborate on AI rather than compete on it.
ACSES Research and Policy Program Director Professor Ian Li is also quoted (paraphrased in snippets) as positioning the framework as the foundation for the industry collaboration needed to “reap the educational benefits of AI and avoid its pitfalls.”
3. The seven guiding principles
- Human-centred education — treat AI use with caution; prioritise human connection, critical thinking and equity. AI is positioned as an augmentation of teaching relationships, not a replacement.
- Inclusive implementation — explicit attention to equity-bearing groups and intersectionality; regular intersectional impact assessments; meaningful alternatives must exist for students who cannot use, do not wish to use, or conscientiously object to particular AI tools.
- Ethical decision-making — extends the FATE (Fairness, Accountability, Transparency, Ethics) frame by adding contestability as a fifth dimension. Decisions made or shaped by AI must be open to challenge.
- Indigenous knowledges — recognition of Indigenous data sovereignty; affirms the right of Indigenous peoples to maintain, control, protect and develop cultural heritage and knowledge, including how these are represented within AI systems. Includes a “two-way learning” framing.
- Ethical development — covers how AI systems are built and procured, including stakeholder involvement (government, academic staff, students, researchers) in policy development.
- Adaptive skills — prioritises students’ ability to monitor and adapt their own learning over narrow technical proficiency. The framework’s stated wager is that prompt-craft will not age well; reflexivity will.
- Evidence-informed innovation — implementation decisions should be grounded in research evidence, with institutions expected to conduct and share evaluations of their AI implementations.
4. Scope: generative AND agentic AI
The framework explicitly includes both generative and agentic AI within its scope. This matters because most existing university AI policies were drafted in response to generative AI (chatbots, image models) and are silent on autonomous agents that take actions in institutional systems. By naming agentic AI explicitly, the framework pulls forward a wave of governance work that most institutions have not yet started.
5. Sector-collaboration thesis
Both Lodge’s and Li’s quoted framings centre on the same claim: this is too hard for individual institutions, and competition between them on AI is counter-productive. The framework is the artefact that lets that collaboration cohere — common vocabulary, common principles, common reporting expectations under “evidence-informed innovation”.
Action implications / open questions
- For an Australian university board or academic board: the framework gives a defensible, peer-authored set of seven principles to align institutional AI policy against. The fastest concrete action is gap-mapping existing policy onto the seven pillars, especially Indigenous knowledges and contestability, which existing institutional policies most often lack.
- For a CISO / CIO in higher ed: the agentic-AI scope and the “default-on” branch of the inclusive-implementation principle (meaningful alternatives) imply concrete control work: audit which AI features are silently enabled in vendor SaaS used by students, and whether opt-out is actually available. (Direct connection to the embedded-AI thesis in [[2026-05-10-vizza-chrome-silent-llm]]: AI shipped via vendor updates is also an equity question, not just a security one.)
- For researchers: “evidence-informed innovation” is a publication and reporting expectation, not a slogan. Expect (and plan for) calls from ACSES or the Taskforce to share evaluation data.
- For Indigenous data governance practitioners: the standalone Indigenous knowledges pillar is unusually strong by international standards. The open question is what enforcement looks like in practice when an institution adopts a US-built model trained on uncontrolled web data.
- Open question — enforceability: the framework is voluntary. Without funding-body or TEQSA endorsement, adoption will be uneven. Watch whether the Department of Education or TEQSA references it in subsequent guidance.
- Open question — relationship to research-integrity governance: the framework references research integrity but does not supersede instruments like the University Framework for the Responsible Use of Generative AI in Research (Smith et al., 2024). Universities will need to reconcile multiple overlapping frames.
- Open question — funding for “meaningful alternatives”: if a student conscientiously objects to AI tools, providing a parallel non-AI pathway is expensive. The framework asserts the principle; it does not solve the resourcing problem.
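The board-level gap-mapping action above can be sketched as a trivial checklist. The seven pillar names come from the framework itself; the coverage data, function name, and the example audit below are entirely hypothetical, invented for illustration only:

```python
# Sketch: map an institution's existing AI-policy coverage onto the
# framework's seven pillars and report the gaps. Pillar names are from
# the framework; everything else here is an assumption.
PILLARS = [
    "human-centred education",
    "inclusive implementation",
    "ethical decision-making",
    "indigenous knowledges",
    "ethical development",
    "adaptive skills",
    "evidence-informed innovation",
]

def gap_map(policy_coverage: dict[str, bool]) -> list[str]:
    """Return the pillars the existing policy does not yet address."""
    return [p for p in PILLARS if not policy_coverage.get(p, False)]

# Hypothetical current-state audit for one institution: FATE-style ethics
# and AI-literacy content exist, but the other pillars are unaddressed.
coverage = {
    "human-centred education": True,
    "ethical decision-making": True,
    "adaptive skills": True,
}
print(gap_map(coverage))
# ['inclusive implementation', 'indigenous knowledges',
#  'ethical development', 'evidence-informed innovation']
```

The point of the sketch is only that the gap report, not the coverage claim, is the useful artefact: the pillars most often surfaced as gaps (Indigenous knowledges, contestability within ethics) are exactly the ones the framework adds relative to typical existing policy.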
Links
- Topic dossier: ai-higher-ed-au
- Entities: acses · jason-lodge
- Related instruments / prior sources:
- Australian Framework for Generative Artificial Intelligence in Schools (Department of Education, 2023) — the schools precursor this framework explicitly aligns with.
- A University Framework for the Responsible Use of Generative AI in Research (Smith, Tate, Freeman et al., 2024) — Macquarie-led research-integrity instrument that overlaps but does not duplicate.
- [[2026-05-08-apra-ai-governance]]: contrast with regulated, enforced sectoral AI governance.
- [[2026-05-10-vizza-chrome-silent-llm]]: embedded / default-on AI, relevant to the inclusive-implementation principle’s “meaningful alternatives” requirement.