Or, There’s No Such Thing as a Free Lunch
As generative AI tools like ChatGPT and Microsoft Copilot become more common, it’s important to understand what happens to the information you enter into them. As part of Data Privacy Week, the Office of Legal Affairs is helping raise awareness so you can protect privacy (your own, your colleagues’, and your students’), intellectual property, and campus data security.
Generative AI companies have an insatiable appetite for information – text, images, and more – that they can use to train their AI tools. Companies that offer these tools for free generally do so because they store and use everything fed into the tool to help train it. This allows them to profit from the information you provide and may result in the AI tool sharing your information with other users. When you enter information into a free AI tool, chances are that information is no more protected than if you posted it on a public website or handed it out to passersby on the street.
The good news is that a number of AI tools, including Copilot, have been vetted and approved by the Division of Information Technology (DoIT) as enterprise-level AI tools. These tools provide data security and contractual protections that free AI tools don’t. DoIT’s Generative AI services at UW–Madison page is a great one-stop shop to learn more about approved tools and university guidance on their use.
Even for these approved tools, some restrictions apply depending on the type of data you are using. It’s your obligation to educate yourself about which types of information can be used with the approved tools – the university’s statement on generative AI use and its four levels of data classification are good places to start. If you must use an unvetted AI tool, be sure to enter only public information, as defined by university data classification.
If you have questions, please contact the university data steward who supports your unit.