
Soaring AI impersonation frauds alarm security professionals in 2025 – tips for staying secure

Don't accept voice or video messages without critical examination


In the rapidly evolving world of technology, a new form of cyber threat has emerged – AI impersonation scams. These scams, which can mimic a person's voice, face, or typing style with alarming accuracy, are becoming increasingly sophisticated and difficult to detect.

According to cybersecurity firm Paubox, nearly half (48%) of AI-generated phishing attempts, including voice and video clones, successfully bypass current email and call security systems. This worrying statistic highlights the urgent need for enhanced security measures.

Cybersecurity expert Jacqueline Jayne advises pairing direct verification with multi-factor authentication (MFA), particularly during periods of high scam activity, such as during tax season. MFA makes it harder for scammers to gain access to your accounts even if they manage to steal your credentials.
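
For readers curious what MFA actually adds, here is a minimal sketch of the time-based one-time-password (TOTP) scheme that most authenticator apps use, written in Python with the pyotp library. The library choice and the names in the snippet are illustrative assumptions, not something the article or the expert specifies:

```python
# A minimal sketch of TOTP-based MFA, the mechanism behind most
# authenticator apps. pyotp is a real Python package; the rest of
# this example (names, flow) is illustrative only.
import pyotp

# Each account gets its own shared secret, generated once at enrollment
# and stored by both the server and the user's authenticator app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The authenticator app derives a 6-digit code from the secret and
# the current 30-second time window.
code = totp.now()
print(f"Current one-time code: {code}")

# The server recomputes the expected code and compares; valid_window=1
# tolerates one step of clock drift. A stolen password alone is not
# enough -- the attacker would also need this constantly changing code.
print("Code accepted:", totp.verify(code, valid_window=1))
```

Because the code changes every 30 seconds and never travels with the password, a scammer who phishes your credentials still hits a wall at the second factor.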

The rapid growth of these scams is attributed to better technology, lower costs, and wider accessibility: according to Moonlock, AI impersonation scams surged by 148% in 2025.

Attackers in AI impersonation scams often assume the identity of someone you trust – a family member, a boss, or even a government official. They create a false sense of urgency, relying on victims to act quickly without verifying the identity of the caller or sender.

However, there are signs that can help you identify these scams. For instance, AI-generated voices can have unusual pauses or inconsistent background noise, even if they sound convincing at first. If you receive a suspicious call or video from someone you know, hanging up and calling them back on the number you already have can eliminate the urgency scammers count on.

Deepfake videos can have subtle red flags such as unnatural mouth movements, flickering backgrounds, or eye contact that feels a little 'off'. The Take9 initiative recommends pausing for nine seconds before acting – enough time to question a request and verify the caller's identity.

These scams are a threat not just to individuals but also to businesses. In a 2024 case, threat actors used deepfake video to pose as the CFO of UK-based engineering firm Arup on a conference call, tricking an employee into authorizing transfers totaling $25 million.

The cybercriminal organizations involved in AI identity fraud include state-backed Advanced Persistent Threat (APT) groups primarily from Russia, Iran, and China, who use AI tools to enhance cyberattack productivity and reconnaissance. Additionally, criminal groups exploit AI-driven bots and deepfake technologies for cryptocurrency scams and impersonation in phishing attacks.

Recently, the FBI warned about AI-generated calls impersonating senior US officials, including Secretary of State Marco Rubio. These cloned voices can be convincing enough to fool even trained professionals, the US Senate Judiciary Committee has warned.

In conclusion, while AI impersonation scams pose a significant threat, staying vigilant and taking the time to verify the identity of the caller or sender can protect you from falling victim. Always remember: if something seems too good to be true, or if you're being rushed into a decision, treat it as a likely scam. Slow down, verify, and stay safe.
