Use of ChatGPT, Google Bard and other generative artificial intelligence (AI) tools and services is growing rapidly within higher education, including at UW–Madison. Although AI offers new and powerful capabilities for research and education, it also poses clear risks to institutional data that UW–Madison is legally and ethically obligated to protect.
University faculty, staff, students and affiliates must not enter institutional data into any generative AI tool or service unless that data is classified as public (low risk). Providing any data to a generative AI tool or service as part of a query is equivalent to posting that data on a public-facing website. That is because generative AI "learns" by collecting and storing user-provided data, which may later appear in output provided to other users.
Use of generative AI by UW–Madison faculty, staff, students and affiliates is subject to UW–Madison, UW System Administration (UWSA) and UW System Regent policies. Examples of generative AI use prohibited by these policies include, but are not limited to, the following:
- Entering any sensitive, restricted or otherwise protected data – including hard-coded passwords – into any generative AI tool or service (see UW-523 Institutional Data and SYS 1031 Data Classification and Protection);
- Using AI-generated code for institutional IT systems or services without review by a human to verify the absence of malicious elements (see UW-503 Cybersecurity Risk Management);
- Using generative AI to violate laws; institutional policies, rules or guidelines; or agreements or contracts (see Regent Policy 25-3 Acceptable Use of Information Technology Resources).
In addition to violating UW policies, some of the above uses may also violate generative AI providers’ policies and terms.
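To illustrate the hard-coded password prohibition above, the sketch below is a hypothetical example (the function name `redact_secrets` and the sample values are illustrative, not part of any UW policy or tool): keep secrets out of source code by loading them from the environment, and scrub any known secret values before a code snippet is shared, for example in an AI prompt or a support ticket.

```python
import os

# Risky pattern: a hard-coded password travels with the code wherever
# it is pasted, including into a generative AI prompt.
# DB_PASSWORD = "s3cret-hunter2"   # never do this

# Safer pattern: the secret lives outside the source file.
DB_PASSWORD = os.environ.get("DB_PASSWORD", "")

def redact_secrets(snippet: str, secrets: list[str]) -> str:
    """Replace any known secret values in a snippet with a placeholder
    before the snippet leaves your machine."""
    for secret in secrets:
        if secret:  # skip empty strings so nothing is mass-replaced
            snippet = snippet.replace(secret, "[REDACTED]")
    return snippet

if __name__ == "__main__":
    sample = 'conn = connect(password="hunter2")'
    print(redact_secrets(sample, ["hunter2"]))
```

Redaction like this is a safeguard, not a substitute for the policy: if data is sensitive or restricted, it should not be entered into a generative AI tool at all.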
Researchers who may need to enter higher risk data into a generative AI tool or service as part of their research programs should consult the policies referenced above. For additional guidance, researchers may contact Chief Technology Officer (CTO) Todd Shechter (firstname.lastname@example.org).
For uses of generative AI that are not prohibited, UW–Madison faculty, staff, students and affiliates can help protect themselves and others by choosing tools and services that exhibit the National Institute of Standards and Technology’s (NIST’s) characteristics of trustworthy AI.
You can learn more about the benefits and risks of using ChatGPT and other generative AI tools by attending the summer 2023 “Exploring Artificial Intelligence @ UW–Madison” webinar series. Sponsored by the Division of Information Technology (DoIT) and the Data Science Institute, the series aims to provide a platform for experts and visionaries in the field of AI. Speakers will share their insights, research and experiences in the classroom, research lab and wider academic community.
Also instructive is the “Generative AI @ UW–Madison: Use & Policies” page.
If you have questions about classifying data, contact the relevant data steward.
Chief Information Security Officer
University of Wisconsin–Madison