Responsible AI Adoption Policy
Section 1 - Key Information
Policy Type and Approval Body: Administrative – Vice-Chancellor
Accountable Executive – Policy: Chief Operating Officer
Responsible Manager – Policy: Chief Commercial Officer
Review Date: 14 November 2027
Section 2 - Purpose
(1) This Policy outlines the principles for using, developing and managing Artificial Intelligence (AI) solutions at La Trobe University. The goal is to ensure that AI is used in alignment with our strategy and with our commitment to responsible AI implementation and adoption.
(2) AI is a powerful technology that can benefit the University and its community. However, it is important to use AI in a way that is safe, ethical and compliant with data privacy and protection laws. The University’s approach to AI demonstrates our Cultural Qualities:
Section 3 - Scope
(3) This Policy applies to all University campuses,
(4) Whilst the Policy applies to the processes supporting research activities, it does not apply to the research itself or its outcomes. Research operates under the University’s Research Governance Policy, Research Governance Framework, specific standards, contractual requirements and the Australian Code for the Responsible Conduct of Research (2018).
Section 4 - Key Decisions
Endorse the implementation of AI use cases: Responsible AI Adoption Committee (RAAC)
Approval to implement AI: In accordance with University Delegations and governance frameworks
Section 5 - Policy Statement
(5) The University is committed to using AI in a responsible manner that accords with Australia’s AI Ethics Principles, the voluntary framework published by the Australian Government’s Department of Industry, Science and Resources:
(6) The University’s AI governance adopts a risk-based approach to ensure that obligations and oversight are proportionate to the risk posed. AI uses and systems will be classified into the risk categories below, based on their potential to violate individuals’ rights.
The University will not support AI uses that breach Human Rights.
Low
Description: Presents a limited risk to individuals; AND does not involve sensitive, personal or confidential information.
General examples: Spam filters, video games, using generative AI applications for general tasks involving no sensitive, personal or confidential University data.
Medium
Description: Applications with limited transparency, such that individuals may not realise they are engaging with AI; AND does not involve sensitive, personal or confidential information.
General examples: Chat bots, generated images (picture, voice, video).
High
Description: Could have a detrimental impact on health and safety, or affect access to a fundamental right such as education, employment or justice; OR uses sensitive, personal or confidential information where the use complies with relevant University policies.
General examples: AI assessment and shortlisting of job candidates, systems to evaluate learning outcomes, triage applications for health and welfare services.
Unacceptable – will not be pursued
Description: Techniques that may cause significant harm; OR uses sensitive, personal or confidential information in a way that does not comply with University policies.
General examples: Untargeted scraping of facial images from CCTV footage, behavioural manipulation that causes harm.
(7) The University supports the proposed voluntary guardrails outlined in the Voluntary AI Safety Standard. We will ensure that our AI systems are developed and used responsibly, with a focus on accountability, risk management, data governance and human oversight.
(8) The University’s Code of Conduct (the Code) outlines the ethical, professional and legal standards used as the basis of decisions and actions. Consistent with the Code, members of the University community are individually accountable for their actions, including their own use of AI. Individuals must adhere to University policies, including the Information Security Policy, Code of Conduct, Data Governance Policy, Privacy Policy and Records Management Policy. There is shared accountability for upholding compliance with legislation and University policies.
(9) AI must only be used for its approved purpose. Any use of an AI system outside its approved purpose must be assessed separately in accordance with this Policy.
(10) Ensuring trust is a foundational aspect of La Trobe’s AI approach, recognising that:
(11) The University’s adoption, ethical application and implementation of AI are governed through its Responsible AI Adoption Committee (RAAC). The RAAC is responsible for ensuring that AI projects and initiatives align with University strategy, promote good practice, and manage risk to deliver value. The RAAC reports to the Senior Executive Group (SEG), and endorsement from the RAAC is required for all new AI use cases. The RAAC may delegate endorsement of low-risk use cases as necessary to support operational requirements.
(12) All uses of AI will have a nominated business owner, and a report of approved AI systems and applications will be provided to the Senior Executive Group annually.
(13) Education, including building AI awareness, literacy and user capability through training, tools and guidelines, is essential to ensure principles and requirements are applied systemically. AI Business Owners must ensure that roles responsible for implementing or managing AI have sufficient expertise or receive appropriate training to enable them to apply this Policy.
Section 6 - Procedures
(14) All Institutional Data is managed under the University’s Data Governance Policy and Framework (under development). Data and its use are subject to legislative requirements as outlined in the Framework. In addition to these requirements, the use of AI raises ethical considerations, particularly where AI is used to make judgements or decisions that involve humans. The University outlines the expectations of its staff in the La Trobe Code of Conduct.
(15) All requests to use AI with existing datasets, or to migrate additional datasets into data repositories, should be initiated using the online enquiry function accessible through the intranet.
(16) Requests to implement technology that incorporates AI capabilities should be directed to the IS team through the Ask ICT function.
(17) Review by the RAAC forms an input to University approval processes. The AI Business Owner is responsible for submitting requests to use AI to the Responsible AI team.
(18) An AI Risk Assessment must be completed for all new uses of AI other than those classified as low risk. Depending on the datasets used and the risk profile of the use, a Privacy Impact Assessment may also be required. This process also applies to proposals to provide University data to a third party for use in an AI application.
(19) A central record of limited- and high-risk AI applications, the RAAC assessment, and the ethics review for high-risk applications is maintained by the Information Services (IS) Division through its AI Accelerator function. Risk will be managed in accordance with the University’s Risk Management Framework, including identifying appropriate controls. Potential controls for higher-risk applications include evaluating algorithms and datasets for bias and accuracy, and auditing AI activities and outcomes. All high-risk applications will be reviewed annually for continued alignment with the principles. Uses that are demonstrably still required and compliant will be retained; the remainder will be decommissioned and terminated.
(20) Any complaints, concerns or risks relating to the ethical use of AI at La Trobe should be reported to Legal Services. Data breaches should be reported using the University’s data breach process.
Section 7 - Definitions
(21) For the purpose of this policy and procedure:
Section 8 - Authority and Associated Information
(22) This Policy is made under the La Trobe University Act 2009.
(23) Associated information includes: