After fighting for well over a year to get a policy in our school, we finally rolled one out yesterday arvo. Below is the relevant snip from our policy doc:
<start>
Guidelines for the responsible use of Generative Artificial Intelligence Services
Generative Artificial Intelligence (AI) tools (such as ChatGPT, Microsoft's Bing, and Google's Bard) are covered by the Privacy Act 2020. Concerns raised by the Office of the New Zealand Privacy Commissioner regarding the use of AI tools mean it is important to include safeguards in our social media policy to ensure the protection of privacy and to mitigate potential risks. Before you use AI tools, please read the Ministry of Education and Privacy Commissioner's Guidance on Generative AI.
In summary, AI tools pose privacy risks, particularly because they are models that learn from the data they collect.
1. Transparency: All content produced through any AI tool must be clearly labelled as such, and the use of AI in generating documents should be transparent.
2. Data privacy: Ensure that any personal or sensitive data is handled with appropriate care and that any information processed through AI tools is processed in compliance with the Privacy Act 2020 and any other relevant legislation. Where possible, personal and sensitive data should not be used.
3. Fairness: Ensure that AI-generated content does not discriminate against any individual based on their protected characteristics, such as race, gender, age, or disability.
4. Confidentiality: There is a risk that information could be exposed or misused, either through a security breach or by unintended parties gaining access. Staff are not to enter proprietary data or confidential information, including students' information, into AI tools. While such tools are sophisticated and can provide helpful insights, they are not inherently equipped to handle sensitive data.
In summary, staff are required to exercise caution when using AI tools. Sharing proprietary and confidential information with AI tools poses security, confidentiality, and privacy risks.
<end>