Responsible AI Adoption Policy

Section 1 - Key Information

Policy Type and Approval Body: Administrative – Vice-Chancellor
Accountable Executive – Policy: Chief Operating Officer
Responsible Manager – Policy: Chief Commercial Officer
Review Date: 14 November 2027

Section 2 - Purpose

(1) This Policy outlines the principles for using, developing, and managing Artificial Intelligence (AI) solutions at La Trobe University. The goal is to ensure that AI is used in alignment with our strategy and our commitment to responsible AI implementation and adoption.

(2) AI is a powerful technology that can benefit the University and its community. However, it is important to use AI in a way that is safe, ethical, and compliant with data privacy and protection laws. The University’s approach to AI demonstrates our Cultural Qualities:

Section 3 - Scope

(3) This Policy applies to all University campuses,

(4) Whilst the Policy applies to the processes supporting research activities, it does not apply to the research itself or its outcomes. Research operates under the University’s Research Governance Policy, Research Governance Framework, specific standards, contractual requirements, and the Australian Code for the Responsible Conduct of Research (2018).

Section 4 - Key Decisions

Key Decisions and Role:
- Endorse the implementation of AI use cases: Responsible AI Adoption Committee (RAAC)
- Approval to implement AI: In accordance with University Delegations and governance frameworks

Section 5 - Policy Statement

(5) The University is committed to using AI in a responsible manner that accords with Australia’s AI Ethics Principles, the voluntary framework published by the Australian Government’s Department of Industry, Science and Resources:

(6) The University’s AI governance will adopt a risk-based approach to ensure that obligations and oversight are proportionate to the risk posed. AI uses and systems will be classified into risk categories based on their potential to violate individuals’ rights. The University will not support AI uses that breach Human Rights.

Category and general examples:
- Low: spam filters; video games; using generative AI applications for general tasks involving no sensitive, personal or confidential University data
- Medium: chat bots; generated images (picture, voice, video)
- High: AI assessment and shortlisting of job candidates; systems to evaluate learning outcomes; triage of applications for health and welfare services
- Unacceptable (will not be pursued): untargeted scraping of facial images from CCTV footage; behavioural manipulation that causes harm

(7) The University supports the proposed voluntary guardrails outlined in the Voluntary AI Safety Standard. We will ensure that our AI systems are developed and used responsibly, with a focus on accountability, risk management, data governance, and human oversight.

(8) The University’s Code of Conduct (the Code) outlines the ethical, professional and legal standards used as the basis of decisions and actions. Consistent with the Code, members of the University Community are individually accountable for their actions, including their own use of AI. Individuals must ensure they adhere to University Policies, including the Information Security Policy, Code of Conduct, Data Governance Policy, Privacy Policy and Records Management Policy. There is shared accountability for upholding compliance with legislation and University policies.

(9) AI must only be used for its approved purpose. Any use of an AI system outside its approved purpose must be assessed separately in accordance with this Policy.

(10) Ensuring trust is a foundational aspect of La Trobe’s AI approach, recognising that:

(11) The University’s adoption, ethical application and implementation of AI is governed through its Responsible AI Adoption Committee (RAAC). The RAAC is responsible for ensuring that AI projects and initiatives align with University strategy, promote good practice, and that risks are managed to deliver value. The RAAC reports to the Senior Executive Group (SEG), and endorsement from the RAAC is required for all new AI Use Cases. The RAAC may delegate endorsement for low-risk use cases as necessary to support operational requirements.

(12) All uses of AI will have a nominated business owner, and a report of approved AI systems and applications will be provided to the Senior Executive Group annually.

(13) Education, including building AI awareness, literacy and user capability through training, tools and guidelines, is essential to ensure principles and requirements are applied systemically. AI Business owners must ensure that roles responsible for implementing or managing AI have sufficient expertise or receive appropriate training to enable them to apply this Policy.

(14) All Institutional Data is managed under the University’s Data Governance Policy and Framework (under development). Data and its use are subject to legislative requirements as outlined in the Framework. In addition to these requirements, the use of AI brings ethical considerations, particularly where it is to be used in making judgments or decisions that involve humans. The University outlines the expectations of its staff in the La Trobe Code of Conduct.

Section 6 - Procedures

(15) All requests to use AI with existing datasets, or to migrate additional datasets into data repositories, should be initiated using the online enquiry function accessible through the intranet.

(16) Requests to implement technology that incorporates AI capabilities should be directed to the IS team through the Ask ICT function.

(17) Review through the RAAC forms an input to the University’s approval processes. The AI Business owner is responsible for submitting requests to use AI to the Responsible AI team.

(18) An AI Risk Assessment must be completed for all new uses of AI other than those classified as low risk. Depending on the datasets used and the risk aspects of the usage, a Privacy Impact Assessment may also be required. This process also covers proposals to provide University data to a third party for use in an AI application.

(19) A central record of limited- and high-risk AI applications, the RAAC assessment, and the ethics review for high-risk applications is to be maintained by the Information Services (IS) Division through their AI Accelerator function. Risk will be managed in accordance with the University’s Risk Management Framework, including identifying appropriate controls. Potential controls for higher-risk applications include evaluating algorithms and datasets for bias and accuracy, and auditing AI activities and outcomes. All high-risk applications will be reviewed annually for continued alignment with the principles. Uses that can be demonstrated as still required and compliant will be retained; the remaining uses will be decommissioned and terminated.

(20) Any complaints, concerns or risks relating to the ethical use of AI at La Trobe should be reported to Legal Services. Data breaches should be reported using the University’s data breach process.

Section 7 - Definitions

(21) For the purpose of this Policy and procedure:

Section 8 - Authority and Associated Information

(22) This Policy is made under the La Trobe University Act 2009.