Threat actors are leveraging a newly discovered deepfake tool, ProKYC, to bypass two-factor authentication on cryptocurrency exchanges. The tool is designed specifically for new account fraud (NAF) attacks and can create verified but synthetic accounts by mimicking facial recognition authentication. The prevalence of such attacks is increasing, with losses exceeding $5.3 billion in 2023 alone, and the sophistication of ProKYC highlights the growing threat that deepfake technology poses to financial institutions.
AI-powered tools are enhancing cybercriminals’ ability to bypass multi-factor authentication (MFA) by generating highly realistic forged documents. Traditionally, fraudsters relied on low-quality scanned documents purchased from the dark web; AI-driven tools can now produce highly detailed forgeries that are difficult to distinguish from authentic documents, making it easier for cybercriminals to deceive security systems and gain unauthorized access to sensitive information. This poses a significant challenge to organizations seeking to protect their data and systems from malicious attacks.
ProKYC is malicious software sold on the dark web that exploits deep learning to circumvent authentication processes. It can generate counterfeit documents and realistic videos of fabricated identities, thereby deceiving facial recognition systems. The tool’s effectiveness is demonstrated by its ability to bypass ByBit’s security measures, which poses a significant threat to online platforms: it undermines their authentication mechanisms and facilitates fraudulent activity.
The attacker leverages AI-generated deepfakes to create a synthetic identity complete with a forged government document (e.g., an Australian passport) and a facial recognition bypass video.
The video follows the facial recognition system’s instructions (e.g., head movements) and is injected into the system in place of a live camera feed, deceiving the system and enabling a successful account fraud attack.
Detecting account fraud attacks is challenging due to the trade-off between restrictive biometric authentication systems, which produce false positives, and lax controls, which increase the risk of fraud. Unusually high-quality images and videos, often indicative of digital forgeries, are red flags. Inconsistencies between facial parts and unnatural eye and lip movements during biometric authentication can also signal potential fraud and warrant manual verification.
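One of the red flags above, implausibly clean imagery, can be screened for automatically. The sketch below is a minimal, hypothetical illustration (not part of any named vendor's product): it estimates a frame's sensor-noise floor with a Laplacian high-pass filter and flags frames whose noise is too low to plausibly come from a live camera. The `2.0` threshold is an illustrative value, not a calibrated one.

```python
import numpy as np


def noise_floor(frame: np.ndarray) -> float:
    """Estimate per-pixel noise with a 4-neighbour Laplacian high-pass.

    Genuine webcam frames carry sensor noise; synthetic deepfake renders
    are often unnaturally smooth, so a very low noise floor is one
    (weak) signal that the feed is not a live camera.
    """
    f = frame.astype(np.float64)
    lap = (4.0 * f[1:-1, 1:-1]
           - f[:-2, 1:-1] - f[2:, 1:-1]
           - f[1:-1, :-2] - f[1:-1, 2:])
    return float(lap.std())


def flag_suspicious(frame: np.ndarray, threshold: float = 2.0) -> bool:
    """Route a frame to manual review when its noise floor is
    implausibly low for a live camera feed (illustrative threshold)."""
    return noise_floor(frame) < threshold


# Demo: a noisy "live camera" frame vs. a perfectly smooth synthetic one.
rng = np.random.default_rng(0)
camera = np.clip(128 + rng.normal(0, 5, (64, 64)), 0, 255)
synthetic = np.full((64, 64), 128.0)
print(flag_suspicious(camera), flag_suspicious(synthetic))  # False True
```

In practice such a check would be only one feature among many (frame-blending artifacts, eye/lip motion analysis, liveness challenges), since an attacker can trivially add synthetic noise once the heuristic is known.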
According to Cato Networks, organizations must proactively defend against AI threats by collecting threat intelligence from various sources, including human and open-source intelligence.