AI Assurance Framework

What is the AI Assurance Framework?
The AI Assurance Framework establishes a centralised self-assessment approach to AI deployment and development. The framework has been developed for use when considering the use of an AI system, and is intended to provide guidance when designing, building and operating AI systems.
When contemplating the deployment of a particular AI technology, use each of the seven parts of the self-assessment framework to guide decision making. This will support UNSW to innovate with AI systems safely and in accordance with the Ethical and Responsible Use of Artificial Intelligence at UNSW. Each part has an explanatory overview and key questions that should be considered.
The AI Assurance Framework should be used to consider whether to proceed with the development or deployment of an AI system.
The Self-Assessment checklist is a tool which individuals can use to guide them through the AI Assurance Framework and to support compliance with UNSW policies and principles when developing AI tools or creating custom GPTs. It can help you understand whether the tool you are creating complies with the assurance framework and the university's AI principles, and it will provide advice and guidance on whether your tool remains within the boundaries of UNSW policies.
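To make the checklist idea concrete, here is a minimal Python sketch of how a self-assessment could be represented in code. The class names, part names and questions are illustrative assumptions for the sketch, not the official framework wording or any actual UNSW tooling.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: part names and questions below are placeholders,
# not the official wording of the UNSW AI Assurance Framework.

@dataclass
class FrameworkPart:
    name: str
    key_questions: list[str]
    answers: dict[str, bool] = field(default_factory=dict)

    def is_complete(self) -> bool:
        # A part is complete once every key question has been answered.
        return all(q in self.answers for q in self.key_questions)

@dataclass
class SelfAssessment:
    system_name: str
    parts: list[FrameworkPart]

    def outstanding(self) -> list[str]:
        """Parts still needing answers before a go/no-go decision."""
        return [p.name for p in self.parts if not p.is_complete()]

# Illustrative usage
assessment = SelfAssessment(
    system_name="Custom GPT for course FAQs",
    parts=[
        FrameworkPart("Sensitive data", ["Does the tool handle personal information?"]),
        FrameworkPart("Transparency and accountability", ["Is AI use disclosed to users?"]),
    ],
)
assessment.parts[0].answers["Does the tool handle personal information?"] = True
print(assessment.outstanding())  # ['Transparency and accountability']
```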
What factors should you consider in the assurance framework?
By using advanced security features, Microsoft Copilot maintains the confidentiality and integrity of sensitive business information. It adheres to compliance standards, safeguards against unauthorised access and implements industry-leading encryption methods to protect your commercial data.
AI in context: Real world examples and scenarios
Scenario 1
An AI or machine learning system makes a quantitative prediction about a student's likelihood of failing a particular unit, based on assessing a range of activities and results. This prediction value creates a new piece of personal information about the student (a minimal sketch follows the list below).
In this case, the key elements which need to be considered include:
- Sensitive data
- Fairness
- Harm
- Transparency & accountability
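For illustration, here is a minimal sketch of the kind of prediction this scenario describes, using an invented logistic scoring function. The feature names and weights are assumptions made up for the example; the point is that the computed risk score is itself a new piece of personal information about the student and must be handled accordingly.

```python
import math

# Hypothetical illustration: a simple logistic model scoring a student's
# risk of failing a unit. Feature names and weights are invented for this
# sketch; a real system would learn them from historical data.
WEIGHTS = {"quiz_average": -3.0, "lecture_attendance": -1.5, "late_submissions": 0.8}
BIAS = 1.0

def failure_risk(features: dict[str, float]) -> float:
    """Return a probability in [0, 1]. Note: this output is itself a new
    piece of personal information about the student."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))

student = {"quiz_average": 0.55, "lecture_attendance": 0.40, "late_submissions": 2.0}
print(f"Predicted failure risk: {failure_risk(student):.2f}")
```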
Scenario 2
When an AI tool makes decisions about a person and action is taken as a result, a decision process with no human oversight would not be considered appropriate. For example, an assessment may be made based on data matching from different sources, where there is a legitimate reason for a discrepancy in the data. In this case, the rationale for the discrepancy would need to be reviewed by a human, and an assessment made as to whether the decision and the corresponding proposed action are valid (a minimal sketch of such a human-in-the-loop gate follows the list below).
In this case, the key elements which need to be considered include:
- Sensitive data
- Fairness
- Harm
- Compliance
- Transparency and accountability
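Here is a minimal sketch of the human-in-the-loop gate this scenario calls for, assuming an invented data-matching record. Records where the sources agree can proceed automatically, while any discrepancy is routed to a human reviewer rather than being acted on by the system.

```python
from dataclasses import dataclass

# Hedged sketch of a human-in-the-loop gate for automated decisions.
# The record fields and decision labels are illustrative assumptions.

@dataclass
class MatchResult:
    person_id: str
    source_a_value: str
    source_b_value: str

def decide(match: MatchResult) -> str:
    """Never act automatically on a discrepancy: route it to a human
    reviewer who can assess whether the rationale for the mismatch is valid."""
    if match.source_a_value == match.source_b_value:
        return "auto-proceed"           # sources agree; low-risk path
    return "refer-to-human-review"      # discrepancy: a person must review

print(decide(MatchResult("s123", "enrolled", "withdrawn")))  # refer-to-human-review
```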
Scenario 3
When students use AI tools to assist (partly or even fully) in completing an assessment task, the onus falls on those marking the assessments. Assessors have a fixed timeframe allocated to review and grade assessment tasks. Assessors can usually ascertain when AI has been used; however, they would typically need to pay extremely close attention to identify the indicators that AI was used, perhaps outside of the relevant School’s policies. Even if assessors have been provided with appropriate training for this task, is it a fair requirement to place on them without reviewing the current marking constraints? Failure to identify the use of AI means potential (and serious) student conduct events can be missed, which may lead to more instances.
In this case, the key elements which need to be considered include:
- Fairness
- Harm
- Compliance
- Transparency and accountability
Where can I find further information and learning?
- Visit the AI@UNSW SharePoint site for UNSW-specific resources, guidance and training.
- AI in Teaching and Learning at UNSW. Developed in collaboration with UNSW teaching academics, the site provides guidance and key information to UNSW staff for the deliberate and effective adoption of AI in their teaching, learning and assessment.
- The ChatGPT & Artificial Intelligence in our teaching and learning at UNSW SharePoint site also has a range of information, including the Generative AI in Learning, Teaching and Assessment at UNSW SharePoint site.
- Microsoft Responsible AI Standard v2 General Requirements
- Microsoft-RAI-Impact-Assessment-Template.pdf
- GitHub - microsoft/responsible-ai-toolbox: Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment user interfaces and libraries that enable a better understanding of AI systems. These interfaces and libraries empower developers and stakeholders of AI systems to develop and monitor AI more responsibly and take better data-driven actions.
- Responsible and trusted AI - Cloud Adoption Framework | Microsoft Learn
- Overview of Responsible use of AI - Azure AI services | Microsoft Learn
- Responsible Conversational AI Demo - AI Demos (microsoft.com)
Digital NSW has some great learning modules:
- A common understanding: simplified AI definitions from leading standards
- Generative AI: basic guidance
- Chatbot prompt essentials
- Cyber Security NSW generative artificial intelligence (AI)