Scammers are always looking for new ways to trick people, and generative artificial intelligence (AI) technology is giving them powerful new tools to do so at a larger scale than ever before. Since 2020, phishing and scam activity has increased 95%, with millions of new scam pages popping up every month, according to Bolster.ai. Some estimate the losses from these AI-powered scams will reach more than $10 trillion worldwide by 2025.
Here’s what this new reality means for you—and some steps you can take to protect yourself and your loved ones:
How are scammers using AI?
“What we are seeing is AI automating or ‘supercharging’ a lot of the same techniques that scammers are already using, including making possible some new attacks,” says Dave Schroeder, UW–Madison national security research strategist. “Scammers essentially use AI as a job aid or an additional tool—just like many of us do.”
Some common tactics scammers are using with the help of generative AI include:
Voice cloning: One of the most alarming new scams uses AI to clone voices. Scammers only need a short audio clip of someone’s voice to create a convincing fake. They then use the cloned voice to impersonate a family member in distress, claiming they need money urgently.
“Imagine a situation where a ‘family member’ calls from what appears to be their phone number and says they have been kidnapped, and then the ‘kidnapper’ gets on the line and gives urgent instructions,” Schroeder explains. “Victims of these scams have said they were sure it was their family member’s voice.”
Deepfakes: AI can also create fake photos or videos that prey on your emotions and can look incredibly real. Scammers may use these to impersonate public figures or create fake charity appeals after disasters.
Phishing: The days of the classic “Nigerian Scam”—relatively easy-to-spot emails riddled with misspellings and grammar mistakes—are mostly over. Today, generative AI helps scammers craft much more convincing phishing emails and fake websites. These might appear to be from your bank, your favorite shopping site, or even your friendly neighborhood Help Desk.
Spear phishing: Scammers can use AI tools to analyze your online and social media presence to help them create highly personalized “spear phishing” attacks. They use your personal information for sophisticated social engineering, including romance scams.
(Related: UW–Madison Cybersecurity Awareness Training)
How to spot AI-powered scams
Allen Monette, associate director for cybersecurity operations in the Division of Information Technology (DoIT), notes that AI is “making it hard to tell when something is a fake just based on the content itself.” However, there are still ways to spot potential AI scams. Here are some telltale signs:
- Urgency: Scammers often pressure you to act immediately.
- Unusual requests: Be wary if someone unexpectedly asks you to send money, buy gift cards, or share sensitive information.
- Strange phrasing: AI-generated content may still use odd word choices or unnatural language.
- Unnatural details: Look and listen closely for things like unusual background noises, strange facial or hand movements, inconsistent lighting and shadows, and unnatural speed changes.
- “Off” feeling: Trust your instincts if something feels wrong about the interaction.
Protecting yourself and your loved ones
The good news is that many of the same strategies that work against traditional scams also work against AI-powered ones. Here are some to keep in mind:
- Be prepared: Educate yourself and your family about scams. Pick a code word that only your family knows to help confirm identities if you receive an unexpected call, text or email.
- Be careful what you share: Limit the personal information you post online. Scammers can use details from your life as leverage points.
- Slow down: Don’t let yourself get caught up in the scammer’s false sense of urgency. Take time to think critically and ask questions.
- Verify, verify, verify: Confirm any unexpected request through a trusted phone number or email address, not the one that contacted you.
- Trust your intuition: If something feels “off” or wrong, it probably is.
What to do if you suspect an AI scam
- Stop engaging with the suspected scammer. Hang up. Don’t reply to that text or email.
- Contact the real person or organization directly using trusted contact information.
- UW students and employees should use the “Report Suspicious” button in Outlook to report suspicious emails.
- Contact your local police if you or someone you know has been victimized by a scam.
- You can also report fraud to the Federal Trade Commission.
The future of AI scams
As AI technology advances, these scams will likely become even more sophisticated. “When a threat actor can now make an AI-generated video of an event that never happened—with no quick or easy way to verify it—and amplify that through AI-enabled bot networks on social media in minutes, and do that globally, at scale, it breaks the fabric of a society based on trust,” Schroeder warns.
Our best defense is awareness and skepticism. By staying informed about these tactics and verifying unexpected requests, we can protect ourselves and our communities from AI-powered scams.
Related links
- How to avoid a scam | Federal Trade Commission
- What to know about AI scams and how to help protect your assets | Wells Fargo
- What are AI scams and how do you stop them? | Sift
- What you need to know about artificial intelligence scams | City of New York
- Scammers use AI to enhance their family emergency schemes | Consumer Advice
- 2024 state of phishing & online scams: Statistics, facts, trends & recommendations | Bolster.ai
- How cybercriminals are using gen AI to scale their scams | Okta