How and Why “Fun” AI-Generated Spam on Social Media Will Manipulate the 2024 Election

The primary intention behind artificial intelligence (AI)-generated spam on social media appears to be financial gain through deceptive means. Facebook’s algorithms are suggesting that users visit, view, and like pages consisting of 100% AI-generated photos of people, places, and things that simply are not real.


The content includes too-good-to-be-true pictures of everyday people and their seemingly “extraordinary” projects. This might be a crudités platter made to look like the face of Jesus, someone crocheting an amazing child’s sweater, or something as simple as a 103-year-old woman’s birthday celebration. All fake, all designed to engage us. And that engagement is 100% trickery.

AI Enables High Volume of Engaging Content

AI tools like text and image generators allow spammers to produce large volumes of visually appealing and engaging content cheaply and quickly. This AI-generated content draws attention and interactions (likes, comments, shares) from users, signaling to social media algorithms to promote it further.

Driving Traffic for Monetary Gain

The engaging AI posts often contain links or lead to external websites filled with ads, allowing spammers to generate ad revenue from the traffic. Some spammers use AI images to grab attention, then comment with spam links on those posts. The ultimate goal is to drive traffic to these ad-laden websites or promote dubious products and services for profit. The same tactics can be aimed at the election process, steering users to fake websites stocked with photos, videos, and content designed to manipulate hearts and minds about whom to vote for and why.

Circumventing Detection

AI allows spammers to generate unique content at scale, making it harder for platforms to detect patterns and filter out spam. As AI language models improve, the generated content becomes more human-like, further evading detection.
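To see why this matters, consider a minimal sketch of the kind of near-duplicate filter platforms have long used against copy-paste spam. This is an illustrative toy, not any platform’s actual system: it breaks each post into word shingles and flags pairs with high Jaccard similarity. A classic copy-paste spam wave gets caught; an AI rewrite of the same pitch sails under the threshold.

```python
# Toy near-duplicate spam filter: word-shingle Jaccard similarity.
# Illustrative only -- real platforms use far more sophisticated signals.

def shingles(text: str, n: int = 3) -> set[tuple[str, ...]]:
    """Break text into overlapping word n-grams ("shingles")."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(max(len(words) - n + 1, 1))}

def jaccard(a: set, b: set) -> float:
    """Similarity = size of intersection over size of union."""
    return len(a & b) / len(a | b) if a | b else 0.0

copypasta   = "Amazing sweater my grandma crocheted, like and share!"
copypasta2  = "Amazing sweater my grandma crocheted, like and share friends!"
ai_reworded = "Look at this incredible cardigan my nana knitted, please share!"

base = shingles(copypasta)
print(jaccard(base, shingles(copypasta2)))   # high -- classic copy-paste spam is caught
print(jaccard(base, shingles(ai_reworded)))  # near zero -- the AI rewrite slips through
```

Because an AI model can reword every single post, the overlap signal that made classic spam cheap to catch simply disappears.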

Spreading Misinformation

While profit is the primary motive behind social media spam, AI-generated spam can also be leveraged to spread misinformation and false narratives. Automated AI bots can amplify misinformation campaigns by flooding platforms with synthetic content.

In essence, AI provides spammers with powerful tools to create deceptive, viral content that circumvents detection while enabling them to monetize through dubious means like ad farms, product promotion, or even misinformation in election campaigns.

And spreading misinformation is exactly how AI-generated spam “socializes” the process of election manipulation. Over the decades, we have come to believe most, if not all, of what we see and read, and so we go deeper and deeper into the rabbit hole of fakery.

Joe Biden Deepfake in New Hampshire

In May 2024, the Federal Communications Commission (FCC) announced a $6 million fine against a political consultant for creating and distributing a deepfake audio clip that falsely portrayed President Joe Biden making controversial statements.

The man used AI voice-cloning technology to generate a synthetic version of Biden’s voice, making it appear the President had said things he never actually said. The deepfake audio was distributed in robocalls to New Hampshire voters just days before the state’s January 2024 primary and quickly spread across social media.

The FCC determined the man’s actions constituted a deliberate disinformation campaign aimed at undermining the election process. The $6 million penalty was the agency’s first major enforcement action against an AI-generated deepfake intended to sway voters.

This case highlights the growing threat of deepfake technology being weaponized to mislead the public and interfere in U.S. elections. It has prompted calls for stricter regulations around the creation and dissemination of synthetic media.

Is There Any Way to Stop It?

There are several measures that can be taken to prevent AI from being used to spread misinformation during elections:

AI System Design

· Implement robust fact-checking and verification processes in AI systems to ensure they do not generate or amplify false or misleading information.

· Train AI models on high-quality, fact-based data from reliable sources to reduce the risk of learning and propagating misinformation.

· Build in safeguards and filters to flag potential misinformation and disinformation attempts (a minimal sketch follows this list).
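As one hypothetical shape such a safeguard could take, the sketch below fuzzy-matches a model’s draft output against a curated list of known false claims before anything is published. The claim list, threshold, and wording here are invented for illustration; a production system would draw on maintained fact-check databases and far more robust matching.

```python
# Sketch of a post-generation misinformation filter (illustrative only).
# The claim list, threshold, and example text are hypothetical placeholders.
from difflib import SequenceMatcher

KNOWN_FALSE_CLAIMS = [
    "you can vote by text message",
    "the election has been moved to wednesday",
]

def flag_misinformation(generated: str, threshold: float = 0.6) -> list[str]:
    """Return any known false claims the generated text closely resembles."""
    text = generated.lower()
    hits = []
    for claim in KNOWN_FALSE_CLAIMS:
        ratio = SequenceMatcher(None, text, claim).ratio()
        if ratio >= threshold:
            hits.append(claim)
    return hits

draft = "Reminder: you can vote by text message this year!"
hits = flag_misinformation(draft)
if hits:
    print("Blocked for human review:", hits)
```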

Regulation and Oversight

· Enact laws and regulations governing the use of AI in elections and political campaigns to prohibit manipulative tactics.

· Establish independent oversight bodies to audit AI systems for fairness, accuracy, and resistance to misinformation.

Public Awareness

· Increase public education about AI capabilities and limitations to raise awareness of the potential misuse of artificial intelligence and deepfakes.

· Promote media literacy to help people identify misinformation and verify information sources.

Collaboration

· Foster collaboration between AI developers, election officials, fact-checkers, and civil society to share best practices.

· Support research into AI-powered misinformation detection and prevention methods.

Ultimately, a multi-stakeholder approach involving responsible AI development, strong governance, public engagement and cross-sector partnerships will be crucial to mitigating the risks of AI-enabled misinformation during elections.

Robert Siciliano CSP, CSI, CITRMS is a security expert and private investigator with 30+ years of experience, a #1 best-selling Amazon author of five books, and the architect of the CSI Protection certification, a Cyber Social Identity and Personal Protection security awareness training program. He is a frequent speaker and media commentator, and the CEO of Safr.Me and Head Trainer at ProtectNowLLC.com.

Artificial Intelligence and Organized Crime Sitting In a Tree…

K.I.S.S.I.N.G. First came love, then came marriage, then came the baby in the baby carriage! Sucking his thumb, wetting his pants, doing the hula-hula dance! And the BABY is a Boy!

The Yahoo Boys.

The Yahoo Boys are a notorious group of cyber criminals operating out of West Africa, primarily Nigeria. While most scammers try to stay under the radar, the Yahoo Boys are brazen – they openly advertise their fraudulent activities across major social media platforms like Facebook, WhatsApp, Telegram, TikTok, and YouTube.

An analysis by WIRED uncovered a vast network of Yahoo Boy groups and accounts actively sharing scamming techniques, scripts, and resources: nearly 200,000 members across 16 Facebook groups alone, dozens of channels on WhatsApp, Telegram, TikTok, and YouTube, and over 80 scam scripts hosted on Scribd. And this is likely just scratching the surface.

The Yahoo Boys aren’t a single organized crime syndicate, but rather a decentralized collective of individual scammers and clusters operating across West Africa. Their name harks back to the notorious Nigerian prince email scams, originally targeting users of Yahoo services. But their modern scamming operations are vast – from romance fraud to business email compromise and sextortion.

The scams themselves are getting more psychologically manipulative and technologically advanced. Classic romance scams now incorporate live deepfake video calls, AI-generated explicit images, even physical gifts like food deliveries to build trust with victims. One particularly disturbing trend is the rise in sextortion schemes, with cases linked to dozens of suicides by traumatized victims.

Artificial intelligence (AI) is being exploited by cybercriminals such as the Yahoo Boys to automate and enhance various aspects of social engineering scams.

Here are some ways AI is being used in social engineering attacks:

1. Natural Language Generation: AI models can generate highly convincing and personalized phishing emails, text messages, or social media posts that appear to come from legitimate sources. These AI-generated messages can be tailored to specific individuals or organizations, making them more believable and increasing the likelihood of success.

2. Voice Cloning: AI can be used to clone or synthesize human voices, allowing scammers to impersonate trusted individuals or authorities over the phone. This technique, known as voice phishing or “vishing,” can trick victims into revealing sensitive information or transferring funds.

3. Deepfakes: AI-powered deepfake technology can create highly realistic video or audio content by manipulating existing media. Cybercriminals can use deepfakes to impersonate individuals in video calls or create fake videos that appear to be from legitimate sources, adding credibility to their social engineering attempts.

4. Sentiment Analysis: AI can analyze the language, tone, and sentiment of a victim’s responses during a social engineering attack, allowing the attacker to adapt their approach and increase the chances of success.

5. Target Profiling: AI can analyze vast amounts of data from various sources, such as social media profiles, public records, and online activities, to create detailed profiles of potential victims. These profiles can be used to craft highly personalized and convincing social engineering attacks.

6. Automated Attacks: AI can automate various aspects of social engineering campaigns, such as identifying potential victims, generating and sending phishing emails or messages, and even engaging in real-time conversations with targets.

While AI can be a powerful tool for cybercriminals, it is important to note that these technologies can also be used by security researchers and organizations to detect and mitigate social engineering attacks. However, the ongoing advancement of AI capabilities poses a significant challenge in the fight against social engineering and requires vigilance and continuous adaptation of security measures.
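On the defensive side, even simple automation can help triage suspicious messages. Below is a minimal, hypothetical heuristic scorer, not any vendor’s product, that rates an incoming message on a few classic social engineering tells: urgency language, requests for credentials or money, and the presence of raw links. Real detection systems layer machine learning models on top of signals like these.

```python
# Toy social engineering "tell" scorer -- illustrative heuristics only.
import re

URGENCY = re.compile(r"\b(urgent|immediately|act now|within 24 hours|final notice)\b", re.I)
SENSITIVE = re.compile(r"\b(password|gift card|wire transfer|ssn|verification code)\b", re.I)
LINK = re.compile(r"https?://[^\s]+", re.I)

def risk_score(message: str) -> int:
    """Crude 0-3 score: one point per category of red flag present."""
    score = 0
    if URGENCY.search(message):
        score += 1
    if SENSITIVE.search(message):
        score += 1
    # Any raw link in an unsolicited message deserves a closer look.
    if LINK.search(message):
        score += 1
    return score

msg = "URGENT: your account closes within 24 hours. Verify your password at http://paypa1-login.example.com"
print(risk_score(msg))  # 3 -- treat as hostile until verified out of band
```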

Insidious Meets Prolific

What makes the Yahoo Boys particularly insidious is their bold presence on mainstream social platforms. They use these as virtual “office spaces,” sharing step-by-step scripts, explicit images and videos of potential victims, fake profiles, even tutorials on deploying new AI technologies like deepfakes and voice cloning for their scams. It’s a massive con operation happening in plain sight.

Despite social media’s stated policies against fraud and illegal activities, the companies have struggled to keep up with the Yahoo Boys’ prolific output. Although the major platforms removed many of the specific groups and accounts identified by WIRED, new ones continue popping up daily, exploiting gaps in moderation and content policies.

Cybersecurity experts are sounding the alarm that social platforms are providing safe harbor for these transnational cybercriminal gangs to recruit, share resources, and execute increasingly sophisticated frauds with global reach and real-world consequences. While the “Yahoo Boys” moniker implies a relatively harmless group of young tricksters, the reality is a vast and dangerous network of techno-savvy con artists causing significant financial and psychological harm on an industrial scale.

Law enforcement and the tech giants are struggling to get a handle on this viral scamming epidemic. As new AI capabilities get folded into the Yahoo Boys’ arsenal of malicious tools and tactics, the need for a coordinated global crackdown is becoming more urgent. No longer just a nuisance of sketchy email schemes, this criminal community represents an escalating threat operating in the open on our most popular social media platforms.

I personally am getting ready to crawl under a rock, and maybe move into a cave deep in the woods of Montana to escape the onslaught of artificial intelligence scams. But maybe you are tougher than I am. If you are, I suggest adhering to these tips:

Here are 11 tips to protect yourself from AI-powered social engineering scams:

1. Be wary of unsolicited communication, even if it appears to come from a trusted source. Verify the authenticity of the message or request through official channels. You know, pick up the phone. Send them a text message. Meet them in person.

2. Enable multi-factor authentication for your accounts and devices to add an extra layer of security beyond just passwords. This has nothing to do with artificial intelligence scams; you should do it anyway because it makes you a tougher target. (For the curious, a sketch of how those one-time codes work follows this list.)

3. Keep your software and operating systems up to date with the latest security patches to mitigate vulnerabilities that could be exploited. Same, just do it.

4. Be cautious of urgent or high-pressure requests, as these are common tactics used in social engineering attacks. This goes for all social engineering scams.

5. Scrutinize the language and tone of messages for inconsistencies or anomalies that may indicate AI-generated content. If you feel your blood pressure going up, it’s fraud. It’s always fraud.

6. Verify the authenticity of voice calls or video conferences, especially if they involve requests for sensitive information or financial transactions. Again, pick up the phone, be persistent, meet them in person, and don’t verify alone: get others involved.

7. Be skeptical of overly personalized or tailored messages, as AI can analyze your online presence to craft convincing lures. Every communication from a scammer is designed to get you to trust them. Do everything in your power to be skeptical.

8. Educate yourself and stay informed about the latest AI-powered social engineering techniques and scams. Yeah, just read my newsletter. I’ll keep you up to speed.

9. Implement robust security measures, such as email filtering, web content filtering, and endpoint protection, to detect and block potential threats. Your IT people should have systems in place. But even those systems can be compromised by human hacking.

10. Report any suspected social engineering attempts to the relevant authorities and organizations to help identify and mitigate emerging threats. Those relevant authorities start with your internal people.

11. Cybersecurity awareness training educates employees about threats, best practices, and their role in protecting company data and systems. It reduces human error, promotes a security-conscious culture, mitigates risks, and enhances an organization’s overall cyber resilience.
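To demystify tip 2: the six-digit codes an authenticator app shows are just an HMAC of the current time and a shared secret, per RFC 6238. Here is a minimal sketch using only Python’s standard library; the secret below is a throwaway demo value, not anything real.

```python
# Minimal TOTP (RFC 6238) code generator using only the standard library.
# The secret below is a throwaway demo value, not anything real.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """Derive the current one-time code from a base32 shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // step          # current 30-second time window
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # same code an authenticator app would show for this secret
```

Because both sides derive the code independently every 30 seconds, a phished password alone isn’t enough; an attacker would also need the current code.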

By staying vigilant, verifying information, and implementing appropriate security measures, you can significantly reduce your risk of falling victim to AI-powered social engineering scams.

Robert Siciliano CSP, CSI, CITRMS is a security expert and private investigator with 30+ years of experience, a #1 best-selling Amazon author of five books, and the architect of the CSI Protection certification, a Cyber Social Identity and Personal Protection security awareness training program. He is a frequent speaker and media commentator, and the CEO of Safr.Me and Head Trainer at ProtectNowLLC.com.

Why EVERYONE is Resistant to Engaging in Security Practices and How to Fix It

It’s everyone. (It’s you too. Just read.) Security goes against our core beliefs. Security is not natural, it’s not normal; it means that we don’t trust others. However, we trust by default; not trusting others is actually a learned behavior. Security means that you are aware there are others out there who may choose you as their target. That’s not normal. It’s not natural. No one wants to think they are a target.

What’s normal is that we live happily ever after, we live together as one species in harmony. We trust each other, we are good to each other, we treat others as we want to be treated. We don’t hit, hurt, harm or take from one another. We are civilized creatures.

However, there is a small percentage of predators, uncivilized beings we call sociopaths, psychopaths, and hard-core narcissists. They are the criminal hackers, the serial killers, the rapists. They are a minority, and we choose to think they don’t exist. Or at least we deny they would choose us. We resist security practices because they go against what it means to be a civilized being.

Therefore, in addition to the above, consumers (you) may be resistant to cybersecurity awareness training for several reasons:

1. Perceived inconvenience. Some may view cybersecurity training as an additional task or inconvenience, especially if they believe it interrupts their regular activities. Which is all nonsense. If you thought your bank was being targeted, would you do something about it? Of course. Beyond the perceived inconvenience, we are tired, lazy and selfish. That’s actually normal too.

2. Lack of perceived relevance. Some individuals may not see the immediate relevance of cybersecurity to their daily lives, leading them to ignore or resist training efforts. This is frustrating for your IT directors, and it is also frustrating for your government, which sees you and me as part of the problem regarding our critical infrastructure being vulnerable. Cybersecurity is relevant if you want to keep the lights on, have clean water, and heat your home.

3. Overwhelm. The complexity of cybersecurity topics can overwhelm consumers, making them feel incapable of understanding or implementing the necessary precautions. I blame pretty much every cybersecurity awareness training company out there. It’s not all about phishing simulation training. None of these companies have a clue when it comes to teaching individuals about risk. It’s not just “do this, don’t do that”; they have forgotten what it means to be human.

4. Denial. Some people may deny the importance of cybersecurity or believe that they won’t be targeted by cyber threats, leading them to dismiss training efforts. Denial is more natural and more normal than recognizing risk. Denial is comfortable, it’s soothing, and it allows us to avoid the anxiety of “it really can happen to me.”

5. Fear of technology. Individuals who are not confident in their technological abilities may feel intimidated by cybersecurity training, leading them to avoid it altogether. This, of course, makes total sense. How many times have you gone around in a vicious circle, a constant loop of not being able to log into an account because two-factor authentication wasn’t working or something else was out of whack? Technology can be frustrating. If security is not easy, people aren’t going to do it.

6. Lack of awareness. Some consumers may simply not be aware of the risks posed by cyber threats, leading them to underestimate the importance of cybersecurity training. This is a real problem. This inattention to anything security-related is common, and part of it stems from disbelief that these things can happen to us, denial that we can be targeted, and a relatively “pacifist” attitude.

Addressing these barriers requires organizations to tailor their cybersecurity awareness training programs to be engaging, relevant, and accessible to all consumers. This can involve using clear language, providing real-life examples, and offering support for individuals who may struggle with technology or cybersecurity concepts. It also means getting “real.” And cybersecurity awareness training companies aren’t going to do that, nor are their two-dimensional employees; most of them don’t have the ability to get down and dirty and speak “holistically” about life and security in the same sentence.

Encouraging computer users to engage in cybersecurity awareness training involves several strategies:

1. Relevance. Highlight the relevance of cybersecurity to their personal and professional lives. Emphasize how it can protect their data, finances, and privacy.

2. Interactive Training. Offer engaging and interactive training modules that include simulations, quizzes, and real-life scenarios to make the learning experience more enjoyable and practical (a toy sketch follows this list).

3. Incentives. Provide incentives such as certifications, badges, or rewards for completing cybersecurity training. Recognition for their efforts can motivate users to participate.

4. Customization. Tailor training content to the specific needs and interests of different user groups. For example, employees in finance may require different training than those in marketing.

5. Regular Updates. Keep the training content up-to-date with the latest cybersecurity threats and best practices. This demonstrates the importance of ongoing learning in an ever-evolving digital landscape.

6. Leadership Support. Gain support from organizational leaders and managers to promote the importance of cybersecurity training. When leadership emphasizes its importance, employees are more likely to prioritize it.

7. Accessibility. Make training accessible by offering multiple formats such as online courses, in-person workshops, and mobile-friendly materials. This accommodates different learning preferences and schedules.

8. Feedback and Support. Provide avenues for users to ask questions, seek clarification, and provide feedback on the training materials. Addressing their concerns and offering support can increase engagement.
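As a toy illustration of the “interactive training” idea in item 2, here is a bare-bones quiz loop. The questions are invented examples; a real program would track results over time and tailor content to each role.

```python
# Bare-bones security awareness quiz -- invented questions, illustrative only.
QUESTIONS = [
    ("An email from 'IT' asks for your password to 'fix your mailbox.' Comply? (y/n)", "n"),
    ("Your CEO texts asking you to buy gift cards right now. Verify by phone first? (y/n)", "y"),
]

def run_quiz() -> None:
    correct = 0
    for prompt, answer in QUESTIONS:
        reply = input(prompt + " ").strip().lower()
        if reply == answer:
            correct += 1
            print("Correct.")
        else:
            print(f"Not quite -- the safer answer is '{answer}'.")
    print(f"Score: {correct}/{len(QUESTIONS)}")

if __name__ == "__main__":
    run_quiz()
```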

By implementing these strategies, organizations can create a culture of cybersecurity awareness where users are motivated and empowered to protect themselves and their data online.

Robert Siciliano CSP, CSI, CITRMS is a security expert and private investigator with 30+ years of experience, a #1 best-selling Amazon author of five books, and the architect of the CSI Protection certification, a Cyber Social Identity and Personal Protection security awareness training program. He is a frequent speaker and media commentator, and the CEO of Safr.Me and Head Trainer at ProtectNowLLC.com.