"They address us by name. They are aware of our residences and our banking institutions." That's how Detective David Coffey of the Toronto Police Service described AI-enhanced fraud at a March 2026 press briefing. His warning echoes what the FTC has been documenting in the US: AI tools have transformed scams from blunt instruments into precision weapons.

The nature of the scam — impersonation, romance fraud, fake charity, investment scheme — hasn't changed. What's changed is the operational advantage scammers now carry into every interaction.

How AI Has Supercharged Scams in 2026

Traditional scams relied on volume: send a million emails, and even a 0.01% response rate generates revenue. Modern AI-powered scams are different. They're targeted, personalized, and built on real data about you.

Here's what the AI tools used by scammers can do:

  • Profile building in seconds. AI scrapes and cross-references your social media posts, public records, LinkedIn profile, and data breach records to build a detailed personal profile — including your employer, hometown, family members' names, vehicles, pet names, and recent life events.
  • Real-time impersonation scripting. AI can generate scripts in real time that reference specific details about you, making impersonations of your bank, the IRS, or even family members convincingly personalized.
  • Voice cloning. AI voice synthesis can clone a person's voice from as little as 3 seconds of audio. This is used in grandparent scams, where a "grandchild" calls in distress, and in business email compromise calls where a "CEO" or "CFO" requests an urgent wire transfer.
  • Deepfake video calls. Real-time face-swapping technology is now available to non-technical users. What appears to be a video call from a government official, romantic partner, or employer may be AI-generated.
  • Automated follow-up and relationship building. AI can maintain fake romantic or professional relationships over weeks and months through automated text and email, building trust before the financial request arrives.

The Key Insight: Your Old Advice Doesn't Work Anymore

Detective Coffey made a stark observation at the Toronto Police fraud prevention briefing: "Previous recommendations aimed at preventing scams — focused on safeguarding one's identity — are now outdated, as individuals' personal data is 'already accessible.'"

This is the uncomfortable truth of 2026: assuming you can keep your personal information private is no longer realistic. Between the billions of records exposed in data breaches and the vast volume of data people share voluntarily on social media, most people's core personal information is already in circulation.

The March 2026 data breach wave illustrates this clearly:

Breach (March 2026)        Records Exposed         Data Types
IDMerit                    ~1 billion              Names, addresses, DOB, national IDs, phone numbers
Stryker Corporation        ~50 TB of data          SSNs, health info, employment records
Navia Benefit Solutions    2.7 million             SSNs, FSA/HSA data, health plan information
Wynn Resorts               800,000                 Guest PII records
Betterment                 1.4 million accounts    Financial account information

The old model — protect your data and you'll be safe — assumes a data perimeter that no longer exists. The new model requires assuming breach and building defenses that work even when scammers know who you are.

The FTC Warning: Iran-Themed Scams Are a New Twist on Old Tactics

In March 2026, the FTC's Consumer Alert archive documented a new pattern: scammers using the global conflict with Iran as a storyline for imposter, romance, and fake charity scams. The approach is consistent with how scammers have historically exploited major news events — COVID-19, Ukraine, Afghanistan — to add emotional weight to their pitches.

Common Iran-themed scam scenarios include:

  • A military service member stationed in the region who needs money for emergency leave
  • A stranded aid worker who needs funds transferred to get home safely
  • A government official requesting urgent financial assistance from a "trusted contact"
  • A fake charity collecting donations for victims of the conflict

In every case, the scammer's goal is to create emotional urgency and exploit your willingness to help — then redirect that impulse toward sending money or sharing financial information. The Iran angle changes the story; the mechanics are identical to scams you've seen before.

What Actually Works Against AI Scams in 2026

Since you can no longer assume your personal data is private, your defenses have to shift to behaviors and tools that protect you regardless of what scammers already know about you.

1. Slow Down — Every Time

Detective Coffey's advice: "Slow down." Scammers rely on urgency to override your judgment. A real bank, real government agency, or real employer never requires you to act within minutes. Any communication demanding immediate action — especially involving money, account access, or personal information — should be treated as suspicious by default.

2. Verify Through a Different Channel

If someone calls claiming to be your bank, hang up and call the number on the back of your card. If an email claims to be from the IRS, go directly to irs.gov — never click the link in the email. If a "family member" texts claiming to be in trouble, call their known number before doing anything. Scammers can spoof caller ID and send convincing emails; they cannot intercept a call you initiate to a verified number.

3. Establish a Family Safe Word

The Toronto Police specifically recommended this. Agree on a code word or phrase with family members that can be used to verify their identity on calls. If a "grandchild" or "child" calls in distress and can't provide the safe word — it's a scam.

4. Monitor for Downstream Identity Theft

AI scams don't just steal money. The personal information extracted through scams — and through the data breaches that fuel them — is often used for downstream identity theft: new credit accounts, tax fraud, medical identity theft. Ongoing credit monitoring and dark web monitoring give you visibility into whether your information is being actively exploited, even if you were careful enough not to fall for a scam directly.

5. Use Real-Time Alerts on Financial Accounts

Enable transaction notifications on every bank account, credit card, and investment account. Real-time alerts mean you see unauthorized activity within minutes instead of weeks — dramatically reducing the window of opportunity for thieves to compound the damage.

For more on AI-powered fraud, see our guides on AI voice cloning fraud and our complete AI scams guide. If you've already been targeted, our guide for scam victims covers your recovery steps in detail.

The Practical Bottom Line

The scam landscape in 2026 is not more dangerous because criminals have become more creative. It's more dangerous because they've become more efficient — using AI to do at scale what previously required skilled operators, and using your own personal data to make their approach impossible to dismiss on first contact.

Your best defenses are behavioral: slow down when pressed, verify through known channels, and never treat urgency as proof of legitimacy. Combined with real-time monitoring to catch fraud that slips through, these habits give you a meaningful advantage even when scammers know exactly who you are.

We may earn a commission if you purchase through links on this page.

Real-Time Identity Monitoring — Catch Fraud Before It Spreads

Aura monitors your credit, dark web, and financial accounts 24/7 — alerting you in near real-time when something looks wrong. Up to $1M in identity theft insurance included.

Try Aura Free for 14 Days →

Affiliate link — Aura is available for US residents.