AI Assurance Framework

Your guide to using and operating AI systems.

What is the AI Assurance Framework?

The AI Assurance Framework establishes a centralised self-assessment approach for AI deployment and development. It has been developed for use when considering the adoption of an AI system, and is intended to provide guidance when designing, building and operating AI systems.

When contemplating the deployment of a particular AI technology, each of the 7 parts of the self-assurance framework should be used to guide decision making. This will support UNSW to innovate using AI systems safely and in accordance with the Ethical and Responsible Use of Artificial Intelligence at UNSW. Each part has an explanatory overview and key questions that should be considered.

The AI Assurance Framework should be used to consider whether to proceed with the development or deployment of an AI system.

The Self-Assessment checklist is a tool which individuals can use to work through the AI Assurance Framework and support compliance with UNSW policies and principles when developing AI tools or creating custom GPTs. It can help you understand whether the tool you are creating complies with the assurance framework and the university's AI principles, and will provide advice and guidance on whether your tool remains within the boundaries of UNSW policies.

What factors should you consider in the assurance framework?

By using advanced security features, Microsoft Copilot maintains the confidentiality and integrity of sensitive business information. It adheres to compliance standards, safeguards against unauthorised access and implements industry-leading encryption methods to protect your commercial data.

AI in context: Real world examples and scenarios

Scenario 1

An AI or machine learning system may make a quantitative prediction about a student's likelihood of failing a particular unit based on assessing a range of activities and results. This prediction value creates a new piece of personal information about the student.

In this case, the key elements which need to be considered include:

  • Sensitive data
  • Fairness
  • Harm
  • Transparency & accountability

Scenario 2

When an AI tool makes decisions about a person, and action is taken as a result, without any human element to oversee the decision process, this would not be considered appropriate. For example, an assessment may be based on matching data from different sources where there is a legitimate reason for a discrepancy in the data. In this case, the rationale for the discrepancy would need to be reviewed by a human, who would assess whether the decision made and the corresponding action proposed are valid.

In this case, the key elements which need to be considered include:

  • Sensitive data
  • Fairness
  • Harm
  • Compliance
  • Transparency and accountability

Scenario 3

When students use AI tools to assist (partly or even fully) in completing an assessment task, the onus of detection falls on those marking the assessments. Assessors are given a fixed timeframe to review and grade assessment tasks. Assessors can often ascertain when AI has been used; however, they would typically need to pay extremely close attention to identify the indicators that AI was used, perhaps outside the relevant School's policies. Even if assessors have been provided with appropriate training to undertake this task, is this a fair requirement to place on assessors without reviewing the current marking constraints? Failure to identify the use of AI means potential (and serious) student conduct events can be missed, which may encourage further instances.

In this case, the key elements which need to be considered include:

  • Fairness
  • Harm
  • Compliance
  • Transparency and accountability


Where can I find further information and learning?