Why? Because human beings trust by default. Without trust, we wouldn’t survive as a species. From the day we come out of our mama, we simply want and need to trust others in order to survive. And when you take media in any form, media we have grown to trust over a lifetime, and weave in financial fraud, someone, somewhere falling for the AI ruse is inevitable. Artificial intelligence-based “social engineering” scams are quickly becoming the purest, most effective form of psychological manipulation.
Deepfake artificial intelligence scams are the digital equivalent of a sociopathic, psychopathic, narcissistic, gaslighting, violent predator.
P.T. Barnum Was Wrong
It is said, “There’s a sucker born every minute.” I’m pretty sure there are approximately 250-ish people born every minute. And by my calculations, every single one of them is a sucker. Me and you included. What does this mean? It means all of us are capable of being deceived. And I’ll bet all of us have been deceived, or “suckered.” That’s simply a hazard of “trusting by default.”
Just take a look at the evolution of the simple “phishing email scam.” Over the past 20 years, this ruse has evolved from blanket-broadcast “scammer grammar” communications into an advanced persistent threat that targets specific individuals by understanding and leveraging every aspect of their personal and professional lives.
In this era of rapid technological progress and AI integration, staying informed about the latest scams is imperative for everyone. The preceding year witnessed a tumultuous cybersecurity landscape, marked by major corporations falling victim to malware and ransomware attacks, and by advances in AI multiplying opportunities for cybercriminals. Regrettably, the forecast indicates a further escalation in the sophistication and prevalence of cyber threats and scams, making it essential for individuals to remain vigilant and proactive in safeguarding their digital assets.
Consider Deepfake AI “Havoc Wreaked”
The rapid proliferation of deepfake websites and apps is wreaking havoc, unleashing a wave of financial and personal fraud that threatens individuals and businesses alike.
The proliferation of deepfakes represents a troubling trend, fueled by the accessibility and sophistication of AI technology. Even the average technology user now possesses tools capable of impersonating individuals, given sufficient videos or images. Consequently, we must anticipate a surge in the use of both video and audio deepfakes in cyber scams. It’s already happening. Scammers exploit deepfake video and/or audio to pose as superiors, soliciting urgent information.
Similarly, in personal spheres, these manipulative tactics may involve impersonating family members or friends to deceive individuals into divulging sensitive information or draining a bank account to pay a kidnapping ransom. As ridiculous as that sounds, if you heard your daughter’s voice screaming in the background of a distant cell phone call, you’d likely cough up the cash if you thought your loved one was being held captive.
The rise of AI-enabled deep fakes presents a formidable challenge in combating financial fraud, as it provides cybercriminals with unprecedented capabilities. With the aid of AI, cybercrime syndicates can swiftly update and enhance traditional wire transfer fraud tactics, alongside sophisticated impersonation schemes. This rapid evolution jeopardizes the reliability of verification and authorization processes within the financial sector, thereby undermining trust and confidence in financial systems at large.
This Is Just the Beginning
CNN reports that a finance worker was duped into a $25 million payout following a video call with a deepfake “chief financial officer.”
In a sophisticated scheme detailed by Hong Kong police, a finance worker from a multinational corporation fell prey to deepfake technology, resulting in a staggering $25 million payout to impostors posing as the company’s chief financial officer.
The elaborate ruse unfolded during a video conference call, where the unsuspecting employee found himself surrounded by what appeared to be familiar faces, only to discover they were all expertly crafted deepfake replicas. Despite initial doubts sparked by a suspicious email, the worker’s misgivings were momentarily quelled by the convincing likeness of his supposed colleagues.
This incident underscores the alarming effectiveness of deepfake technology in perpetrating financial fraud on an unprecedented scale. Believing that all participants on the call were genuine, the worker consented to transferring HK$200 million, equivalent to about US$25.6 million at the Hong Kong dollar’s pegged rate of roughly 7.8 to the US dollar. This incident is emblematic of a series of recent occurrences in which perpetrators used deepfake technology to manipulate publicly available videos and other materials to defraud individuals.
Additionally, police noted that AI-generated deepfakes have been used on numerous occasions to deceive facial recognition systems by mimicking the individuals being impersonated. The fraudulent scheme involving the fabricated CFO was uncovered only after the employee reached out to the corporation’s head office for verification.
America’s Sweetheart Was Sexually Violated by AI
Authorities globally are sounding alarms over the advancement of deepfake technology and its potential for malicious exploitation. In a recent incident, AI-crafted pornographic images of the renowned American artist Taylor Swift flooded various social media platforms, highlighting just one of the perilous ramifications of artificial intelligence. These explicit images, depicting the singer in sexually provocative poses, garnered tens of millions of views before they were removed from the platforms.
Swift, a seasoned celebrity, undoubtedly feels a sense of violation. Subject a quiet 16-year-old high schooler to similar circumstances, and he or she may implode under the pressure. These technologies have real-life, and even life-and-death, consequences.
The deepfake market extends into the depths of the dark web, serving as a favored resource for cybercriminals seeking to procure synchronized deepfake videos with audio for a range of illicit purposes, including cryptocurrency scams, disinformation campaigns, and social engineering attacks aimed at financial theft. Within dark web forums, individuals actively seek deepfake software or services, highlighting the high demand for developers proficient in AI and deepfake technologies, who often cater to these requests.
Don’t Expect Your Government to Fix This Problem
While the creation of deepfake software itself remains legal, the use of someone’s likeness and voice operates in a legal gray area due to the abundance of publicly available information. Although defamation suits against developers or users of deepfake content are plausible, locating them poses challenges similar to those encountered in identifying cybercriminals orchestrating other types of attacks. Legal frameworks surrounding deepfake apps vary by jurisdiction and intent, though creating or disseminating deepfake content intended for harm, fraud, or privacy violation is illegal in most jurisdictions.
Although not as prevalent as ransomware or data breaches, instances of deepfake incidents are on the rise, constituting a multi-billion dollar enterprise for cybercriminals.
Last year, McAfee reported a significant increase in deepfake audio attacks, with 77% of victims suffering financial losses. As cybercriminals refine their deepfake techniques, organizations must enhance user education and awareness, incorporating training programs that emphasize the risks associated with deepfake technology and the importance of verifying information through multiple channels.
Efforts to develop advanced AI-based detection tools capable of identifying deepfakes in real time are ongoing, though their efficacy remains a work in progress, particularly against more sophisticated deepfake creations. Meanwhile, criminals using AI for fraud are always two steps ahead, and awareness training is often two steps behind due to a lack of implementation.
Protect Yourself and Your Organization:
When you receive a video or audio request, consider the tone of the message. Do the language and phrasing align with what you’d expect from your boss or family member? Before taking any action, pause and reflect. Reach out to the purported sender through a different channel, ideally in person, to verify the authenticity of the request. This simple precaution, sketched in code below, can help safeguard against deception facilitated by deepfake technology, ensuring you don’t fall victim to impersonation scams.
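To make the “different channel” rule concrete, here is a minimal Python sketch of an out-of-band verification policy. Everything in it (the Request structure, the channel names, the keyword list) is an illustrative assumption rather than any real product’s API; the point is simply that confirmation must arrive on a channel the scammer doesn’t control.

```python
# Minimal sketch of an out-of-band verification rule.
# All names here (Request, channels, keywords) are hypothetical examples.
from dataclasses import dataclass

HIGH_RISK_KEYWORDS = {"wire transfer", "gift card", "ransom", "password"}

@dataclass
class Request:
    sender: str        # who the message claims to be from
    channel: str       # channel it arrived on, e.g. "video_call", "email"
    text: str          # body of the request
    amount_usd: float  # 0 if no money is requested

def needs_out_of_band_check(req: Request) -> bool:
    """Flag any request that must be confirmed on a second channel
    before anyone acts on it."""
    risky_words = any(k in req.text.lower() for k in HIGH_RISK_KEYWORDS)
    return risky_words or req.amount_usd > 0

def confirmation_accepted(req: Request, confirmed_via: str) -> bool:
    """A deepfake can hijack one channel; it is far harder to hijack two.
    Only accept confirmation that arrived on a different channel."""
    return confirmed_via != req.channel

req = Request("CFO", "video_call", "Urgent and confidential wire transfer.", 25_600_000)
if needs_out_of_band_check(req):
    # Same channel as the request: rejected (the impostor controls it).
    print(confirmation_accepted(req, "video_call"))   # False
    # A known phone number dialed by the employee: accepted.
    print(confirmation_accepted(req, "known_phone"))  # True
```

The design choice worth noting: the employee, not the requester, picks the confirmation channel, using contact details already on file.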
1. Stay Informed: Keep abreast of the latest developments in AI technology and its potential applications in scams. Regularly educate yourself about common AI-related scams and tactics employed by cybercriminals.
2. Verify Sources: Be skeptical of unsolicited messages, especially those requesting sensitive information or financial transactions. Verify the identity of the sender through multiple channels before taking any action.
3. Use Trusted Platforms: Conduct transactions and communicate only through reputable and secure platforms. Avoid engaging with unknown or unverified sources, particularly in online marketplaces or social media platforms.
4. Enable Security Features: Utilize security features such as multi-factor authentication whenever possible to add an extra layer of protection to your accounts and sensitive data. Requiring a second factor inside a secure portal before sensitive actions, such as financial transactions or the release of confidential information, serves as a crucial defense against fraudulent requests facilitated by deepfake technology (see the sketch after this list).
5. Update Software: Keep your devices and software applications up to date with the latest security patches and updates. Regularly check for software updates to mitigate vulnerabilities exploited by AI-related scams.
6. Scrutinize Requests: Examine requests for personal or financial information closely, especially if they seem unusual or come from unexpected sources. Cybercriminals may use AI-generated content to create convincing phishing emails or messages.
7. Educate Others: Share knowledge and awareness about AI-related scams with friends, family, and colleagues. Encourage them to adopt safe online practices and be vigilant against potential threats.
8. Verify Identities: Before sharing sensitive information or completing transactions, verify the identity of the recipient using trusted contact methods. Beware of AI-generated deepfake videos or audio impersonating trusted individuals.
9. Be Wary of Unrealistic Offers: Exercise caution when encountering offers or deals that seem too good to be true. AI-powered scams may promise unrealistic returns or benefits to lure victims into fraudulent schemes.
10. Report Suspicious Activity: If you encounter suspicious AI-related activity or believe you have been targeted by a scam, report it to relevant authorities or platforms. Prompt reporting can help prevent further exploitation and protect others from falling victim to similar scams.
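As a concrete illustration of tip 4, here is a short Python sketch of the time-based one-time password (TOTP) algorithm from RFC 6238, the mechanism behind most authenticator apps. It uses only the standard library; the secret is a made-up demo value, and a real deployment should rely on a vetted library rather than hand-rolled code.

```python
# TOTP (RFC 6238) in plain Python, for illustration only.
# Real systems should use a vetted library; the secret below is a demo value.
import base64, hashlib, hmac, struct, time

def totp(secret_b32, at=None, digits=6, step=30):
    """Derive the one-time code for the 30-second window containing `at`."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10**digits
    return str(code).zfill(digits)

def verify(secret_b32, submitted, window=1):
    """Accept the current code plus `window` neighbors to tolerate clock drift."""
    now = time.time()
    return any(hmac.compare_digest(totp(secret_b32, now + i * 30), submitted)
               for i in range(-window, window + 1))

SECRET = "JBSWY3DPEHPK3PXP"  # demo secret only; never hard-code real secrets
code = totp(SECRET)
print("code:", code, "verified:", verify(SECRET, code))
```

Because the code is derived from a shared secret and the current time, a scammer who can fake a face or a voice still cannot produce it. That is exactly the property that makes multi-factor authentication a meaningful deepfake countermeasure.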
None of the above, by itself, will solve this problem. And I cannot stress this enough: organizations and their staff must engage in consistent, ongoing security awareness training, now more than ever.
And that does not mean simply deploying phishing simulation training. While phishing simulation training is necessary for “check the box” compliance, it addresses only one aspect of fraud prevention and social engineering. Phish sim doesn’t come close to solving the problem of artificially intelligent psychological manipulation.
Robert Siciliano CSP, CSI, CITRMS is a security expert and private investigator with 30+ years’ experience, a #1 best-selling Amazon.com author of 5 books, and the architect of the CSI Protection certification, a Cyber Social Identity and Personal Protection security awareness training program. He is a frequent speaker and media commentator, CEO of Safr.Me, and Head Trainer at ProtectNowLLC.com.