As Valentine’s Day approaches, cybercriminals are taking advantage of the holiday to launch romance scams alongside traditional phishing attacks. Universities are prime targets, with scammers focusing on students, faculty, and staff. These scams can now use artificial intelligence (AI) to generate convincing profiles, deepfake images, and cloned voices to build trust and manipulate victims.
I spoke with Illinois State University’s Deputy Chief Information Security Officer Joey Brown to ask some questions about the rise and potential dangers of AI in phishing scams. Here’s what he had to say about staying informed and protecting yourself from these evolving threats.
How will phishing and romance scams in higher education change in 2025?
In 2025, phishing and romance scams will become more sophisticated and personalized through AI. Attackers will leverage AI to craft highly targeted phishing emails that closely mimic university communications and to run AI-generated personas that build long-term online relationships for financial fraud. Additionally, we anticipate a rise in real-time AI-generated voice calls and deepfake videos impersonating professors, advisors, or even students to deceive victims.
Who is most likely to be targeted by phishing and romance scams at a university?
At universities, phishing scams commonly target faculty, staff, and students—especially those with access to financial systems, research data, or administrative privileges. Romance scams, on the other hand, frequently target students using dating apps or social media. International students, new students, and those unfamiliar with online fraud tactics may be particularly vulnerable.
What makes AI-powered phishing and romance scams more dangerous than traditional ones?
AI-powered scams are far more personalized and difficult to detect than traditional ones. Traditional phishing emails often contained typos or vague messaging; AI can now analyze real university emails and mimic their tone, structure, and even sender style. Attackers also use AI to scrape publicly available university data (faculty directories, research pages, student club websites) to craft personalized phishing messages. We also anticipate an increase in deepfake videos and AI-cloned voice calls impersonating university officials in 2025. All of this makes it even harder for victims to tell whether a message is authentic or a scam.
In romance scams, AI-generated personas can engage victims in long-term conversations, making emotional manipulation even more effective. These AI-created personas often come with realistic-looking profiles on dating apps and social media, making romance scams harder to detect. Attackers no longer even need to hold the conversation themselves: chatbots have become a common way to engage victims in real time and trick them into sharing personal or financial information.
How do you see AI-powered phishing and romance scams evolving in higher education over the next five years?
We anticipate:
- More deepfake impersonations: Attackers may use AI to fake live video calls from “professors” or “advisors” to scam students.
- AI-driven academic scams: Fake research collaborations, fraudulent conference invitations, and scam scholarship offers will rise.
- Automated AI chatbots: Attackers may deploy chatbots to engage victims in real-time scams through email, social media, and text messages.
- More targeted faculty attacks: Research grants, payroll systems, and intellectual property will be prime targets for theft.
What are the biggest challenges universities face in fighting AI-generated cybercrime?
The biggest challenges we see are:
- Rapid evolution of AI scams: Traditional security measures struggle to keep up with AI-driven attacks.
- Human trust in digital communication: People tend to trust emails and calls from official-looking sources, making AI scams highly effective.
- Lack of awareness: Many students and faculty still rely on outdated scam detection methods and may not recognize AI-generated deception.
How can university students, faculty, and staff protect themselves against AI-generated scams?
Individuals can protect themselves with these steps:
- Don’t trust; verify: If an email, text, or call seems suspicious, confirm its legitimacy through a separate channel.
- Be cautious with online relationships: Never send money to someone you haven’t met in person, and be skeptical of rapid emotional commitments.
- Enable multi-factor authentication (MFA) wherever possible: This adds an extra layer of security, even if your credentials are compromised.
- Avoid clicking on links in unexpected emails: Always navigate to official university websites manually instead.
- Stay informed: Attend university cybersecurity awareness events and keep up with emerging scam tactics through resources such as our new scam and fraud catalog.
What is your top advice for students, faculty, and staff looking to stay ahead of AI-generated cyberthreats?
Be skeptical. Whether it’s a too-good-to-be-true online romance, a high-pressure email demanding immediate action, or a “professor” asking for sensitive data, take a step back and verify. AI-driven scams are getting smarter, but critical thinking and due diligence are your best defenses. Illinois State University uses strong cybersecurity tools, but individuals must stay informed and cautious to avoid falling victim to AI-powered deception.
This Valentine’s Day, don’t let cybercriminals steal your heart—or your information. Whether it’s a romance scam or a phishing email disguised as a university message, take the time to think before you click, reply, or send money. Your awareness is the best defense.