A message from Lois Brooks, vice provost for information technology and chief information officer:
The topic of artificial intelligence (AI) has been inescapable these last few months.
Ever since OpenAI released its ChatGPT conversational chatbot to the public last November, AI has firmly captured our society’s attention. It’s led the news, driven industry shifts, inspired incredible new creative works and raised sobering questions about what our future will look like. It can also be simply delightful, as anyone who’s seen their idea come to life before their eyes just by typing a prompt into a text box will tell you.
As a technologist, I have found all of this remarkable to watch. It’s rare for a technology to come along that has the potential to revolutionize so much of how we live and work. The last time I had this feeling that everything was about to change was probably when Steve Jobs introduced the iPhone in 2007. We didn’t know then exactly how having a powerful computer in our pockets would change the world, but it was clear change was coming.
And so while ChatGPT (and its ilk) has been called a “plagiarism machine,” a “lifeline” for neurodivergent people, a “Google killer,” a way to ease our loneliness epidemic, and a disruptor of the knowledge, education, software, creative and healthcare industries (among others), my take is a little more measured.
I see this new generation of powerful, publicly available AI as a new tool. Because it’s new, many of us are still learning how to use it and what it’s good for. That learning process will take time, and we should keep our minds open with both interest and healthy skepticism while we’re figuring it all out.
In many ways, AI resembles earlier technologies like the printing press, the personal computer, factory robotics and the internet. Those inventions massively reshaped the world—mostly for the better—by enabling people to create things that were previously unimaginable. Each was also met with apprehension and confusion early on, and it took a long time for people to figure out how these technologies fit into society, and how society had been refashioned by them.
As with any new tool, we need to approach it with care. For all of AI’s great potential, it also brings a host of new risks. AI has the potential to become the vector for a tsunami of new cybersecurity threats, including phishing attacks, spam, malware, deepfakes and embedded vulnerabilities in machine-generated code. Large language models can magnify our human failings, like xenophobia, racism, sexism, ableism, transphobia, ethnocentrism and other prejudices and biases. And many people are justifiably concerned about what advancements in AI will mean for their lives and livelihoods. That’s why even the founders of AI companies like OpenAI are asking for guardrails on these technologies to help ensure they are developed safely.
For all these reasons and more, it is critical that we figure out how to use AI effectively and responsibly, holding close our commitments to ethics, safety, security and privacy while taking steps to counteract our biases and worst instincts.
We in higher education have an opportunity here to help the world understand how to use these tools safely and ethically and to prepare for the new world we’re entering. As technologists, we also have a leadership role in helping the university navigate effective and safe AI use in the classroom, office and research lab.
One way I believe we can lead is by framing AI as a tool that helps people do their work, not some kind of replacement for the people themselves. A university without people would be a pointless, soulless endeavor, as would most businesses. There are always more things we could be doing than we have the capacity to accomplish. If AI helps free us up to do more interesting and innovative work, all the better.
Above all, we should not let the risks of AI obscure our view of all the good it can do. In the last 10 years, UW–Madison researchers have used machine learning to find new blood tests for cancer, better understand the human brain, discover promising high-performance polymers, improve medical imaging, find new ways to diagnose disease, study cosmic rays, predict severe weather, simplify complex chip designs, help farmers make better decisions and much more. The work we do here supports incredible advances that help us better understand our world, improve our quality of life and even save lives.
We should approach AI with the same spirit that fuels the Wisconsin Idea—a spirit of fearless exploration, conscientious inquiry and a commitment to public service, with the aim of improving lives here in Wisconsin and around the world.
Let’s test out this new tool to see what it’s good for. And let’s be smart about it.
Best,
Lois
Related
- Statement on use of generative AI – UW–Madison CISO Jeff Savoy
- Considerations for Using AI in the Classroom – College of Letters & Science Instructional Design Collaborative
- Artificial Intelligence (AI) topic – EDUCAUSE
- Trustworthy & Responsible Artificial Intelligence Resource Center – National Institute of Standards and Technology (NIST)
- “Generative AI at Work” working paper (PDF) – National Bureau of Economic Research
- How Large Language Models Reflect Human Judgment – Harvard Business Review