Version 3, published August 23, 2024
Use of Anthropic Claude, OpenAI ChatGPT, Microsoft Copilot, Google Gemini (formerly known as Bard) and other generative artificial intelligence (AI) tools and services is growing rapidly within higher education, including at UW–Madison. Although AI offers powerful new capabilities for research and education, it also poses clear risks to institutional data that UW–Madison is legally and ethically obligated to protect.
Entering data into most non-enterprise generative AI tools or services is like posting that data on a public website. By design, these tools collect and store data from users as part of their training process. Any data you enter into such a tool could become part of its training data and may later surface in responses to users outside the university. For this reason, entering institutional data into AI tools may put the university and individuals at risk.
Use of generative AI by UW–Madison faculty, staff, students and affiliates is subject to UW–Madison, UW System Administration (UWSA) and Universities of Wisconsin Regent policies.
Unless otherwise prohibited by policy, university faculty, staff, students and affiliates may enter institutional data into generative AI tools or services when:
- The information is classified as public (low risk) OR
- The AI tool or service being used has undergone appropriate internal review and residual risk has been accepted by the responsible Risk Executive. Required reviews may include but are not limited to:
  - Cybersecurity risk management (per UW-503 and the Cybersecurity Risk Management Implementation Plan)
  - Data governance
  - Data privacy
  - Accessibility
  - Purchasing
  - Institutional review board (IRB)
Uses of generative AI that are explicitly prohibited by policy include, but are not limited to, the following:
- Entering any sensitive, restricted or otherwise protected institutional data, including hard-coded passwords, into any generative AI tool or service (see UW-523 Institutional Data and SYS 1031 Data Classification and Protection); a sketch of a simple pre-submission check follows this list;
- Using AI-generated code for institutional IT systems or services without review by a human to verify the absence of malicious elements (see UW-503 Cybersecurity Risk Management);
- Using generative AI to violate laws; institutional policies, rules or guidelines; or agreements or contracts (see Regent Policy 25-3 Acceptable Use of Information Technology Resources).
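To make the first prohibition above more concrete, the minimal Python sketch below shows one way a person might screen text for obvious hard-coded secrets before pasting it into a generative AI tool. The patterns, function name and example snippet are illustrative assumptions, not a sanctioned UW–Madison tool; an actual review should combine a vetted secret scanner with human judgment.

```python
import re

# Illustrative patterns only; these are assumptions for this sketch, not a
# university-approved scanner. A real review should use a vetted tool plus
# a human reader.
SECRET_PATTERNS = [
    re.compile(r"(?i)password\s*[:=]\s*\S+"),     # hard-coded passwords
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),  # API keys
    re.compile(r"AKIA[0-9A-Z]{16}"),              # AWS access key ID format
]

def contains_possible_secret(text: str) -> bool:
    """Return True if the text matches any known secret pattern."""
    return any(p.search(text) for p in SECRET_PATTERNS)

# Example: this snippet should never be sent to an external AI service.
snippet = 'db_password = "hunter2"  # TODO: remove before commit'
if contains_possible_secret(snippet):
    print("Possible secret detected; do not enter this into a generative AI tool.")
```

A pattern match here is only a signal to stop and reconsider the data's classification; passing such a check does not make text safe to share.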
In addition to violating UW policies, some of the above uses may also violate generative AI providers’ policies and terms.
Researchers who need to enter data classified above public (low risk) into a generative AI tool or service as part of their research should consult the policies referenced above. For additional guidance, researchers may contact Chief Technology Officer (CTO) Todd Shechter (todd.shechter@wisc.edu).
For uses of generative AI that are not prohibited, UW–Madison faculty, staff, students and affiliates can help protect themselves and others by choosing tools and services that exhibit the National Institute of Standards and Technology’s (NIST’s) characteristics of trustworthy AI.
You can learn more about the benefits and risks of using ChatGPT and other generative AI tools by reviewing recordings of the summer 2023 “Exploring Artificial Intelligence @ UW–Madison” webinar series. Sponsored by the Division of Information Technology (DoIT) and the Data Science Institute, the series provided experts and visionaries in the field of AI an opportunity to share their insights, research and experiences in the classroom, research lab and wider academic community.
Also instructive is the “Generative AI @ UW–Madison: Use & Policies” page.
If you have questions about classifying data, contact the relevant data steward.
Thank you,
Jeffrey Savoy
Chief Information Security Officer
University of Wisconsin–Madison