Cybercriminals are Targeting US Businesses with Malicious USB Drives

The FBI has released a warning to US businesses about a cybercriminal group from Eastern Europe that is trying to hack into the networks of US companies by mailing them USB drives loaded with malicious code.


This cybercriminal group, known as FIN7, is based in Eastern Europe, and US officials believe it is responsible for billions of dollars in business and consumer losses in the US and abroad. The Justice Department has blamed FIN7 for stealing millions of credit card numbers in 47 states, and the FBI has been on the group’s tail for years.

This highly organized and sophisticated group attempts to infiltrate corporate networks by employing a seemingly old-fashioned, yet remarkably effective, tactic: mailing physical USB drives containing malicious code directly to businesses.

One of the most dangerous threats is a “BadUSB” attack. Plugging in a random USB drive, whether found on the ground or received as a freebie at a conference, poses significant cybersecurity risks. This seemingly innocuous act can lead to severe consequences for your computer and personal or corporate data.

These aren’t just regular storage devices; their firmware has been reprogrammed to act as other devices, most commonly a keyboard. When plugged in, the BadUSB instantly mimics typing commands, often at superhuman speed, which can then download malware, install ransomware, steal data, or even grant remote control to attackers. It bypasses typical antivirus scans because it’s not a “file” being scanned; it’s a device behaving maliciously.
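
To make this concrete for defenders, here is a minimal sketch of one way to watch for that behavior on a Linux machine: it flags any newly attached device that announces itself as a keyboard, which is exactly what a BadUSB drive does. It assumes the third-party pyudev package (pip install pyudev) and is an illustration, not a hardened endpoint control.

    #!/usr/bin/env python3
    """Sketch: alert when a newly attached USB device registers as a keyboard."""
    import pyudev

    context = pyudev.Context()
    monitor = pyudev.Monitor.from_netlink(context)
    monitor.filter_by(subsystem='input')  # keyboards register under 'input'
    monitor.start()

    print("Watching for newly attached keyboards... (Ctrl+C to stop)")
    for device in iter(monitor.poll, None):
        # A thumb drive that suddenly enumerates as a keyboard is the
        # BadUSB signature: storage-shaped hardware that types commands.
        if device.action == 'add' and device.properties.get('ID_INPUT_KEYBOARD') == '1':
            vendor = device.properties.get('ID_VENDOR_ID', 'unknown')
            model = device.properties.get('ID_MODEL', 'unknown')
            print(f"ALERT: new keyboard attached (vendor={vendor}, model={model}). "
                  "If you just plugged in a 'storage' drive, unplug it now.")

Commercial endpoint-protection tools do essentially the same thing at scale, often adding the ability to block the new device until an administrator approves it.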

These attacks have been going on for years, primarily targeting companies in the defense, transportation, finance, and insurance sectors. The mailed USB drives are often disguised as legitimate deliveries, arriving via services like the U.S. Postal Service and UPS. Some packages pretend to be from the Department of Health and Human Services (HHS), while others mimic Amazon deliveries, complete with fake “thank you” letters and counterfeit gift cards.

When an unsuspecting employee plugs one of these malicious USB drives into a computer, the device immediately registers itself as a Human Interface Device (HID) keyboard, rather than a storage device. This clever trick allows it to bypass many traditional security measures that block removable storage. Once recognized as a keyboard, the USB drive automatically injects a series of preconfigured keystrokes. These commands then download and install additional malware onto the compromised system, granting the cybercriminals remote access.

FIN7’s ultimate goal is to gain a foothold within the victim’s network, escalate privileges, establish backdoor access, and then deploy ransomware to achieve its objectives. The success of this method hinges on human curiosity and the deceptive nature of the packages, making it particularly dangerous in environments where employees are not rigorously trained on physical media security.

The FBI emphasizes that even a non-administrative account compromise can lead to significant breaches, as the attackers can then conduct reconnaissance and move laterally within the network to gain access to more critical systems. This resurgence of physical media attacks highlights the evolving tactics of cybercriminals and the need for businesses to educate their employees on the dangers of plugging in any unsolicited external devices.

Steps to Protect Yourself and Your Company Data

Thankfully, there are a number of steps you can take to protect yourself and your company’s data. Here are some tips:

  • Don’t put any “free” or unknown USB drive into your computer, no matter what. If you find a USB drive, or a stranger gives you one, hand it to your IT department or other security personnel. Don’t even put it near your computer – even if you think you know who the drive belongs to.
  • Take full advantage of any security features you have access to, including strong passwords and encryption, on your own USB drives. Also make sure you back up any data on those drives in case they are lost.
  • Keep your business and personal USB drives in separate places. You shouldn’t use your personal USB drive in your work computer, and vice versa.
  • Don’t use AutoRun on your computer. This feature causes some types of media, such as DVDs, CDs, and USB drives, to open automatically when they are inserted into a drive. With AutoRun disabled, an infected USB drive won’t launch anything on its own when inserted, which helps keep its code off your device (a minimal sketch of disabling it on Windows follows this list).
  • Use security software and make sure it is updated. Use antivirus software, a firewall, and anti-spyware programs to make your computer as safe as possible. Also, make sure you install any updates or patches for your computer as they come through, ideally automatically.
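
As an illustration of the AutoRun tip above, here is a minimal sketch that disables AutoRun for all drive types on Windows by setting the documented NoDriveTypeAutoRun policy value with Python’s standard-library winreg module. It must run as Administrator, and in a managed environment you would normally push the same setting through Group Policy instead.

    """Sketch: disable AutoRun for all drive types on Windows (run as Administrator)."""
    import winreg

    KEY_PATH = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer"

    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                            winreg.KEY_SET_VALUE) as key:
        # NoDriveTypeAutoRun is a bitmask of drive types to exclude from
        # AutoRun; 0xFF disables it on every drive type, including USB drives.
        winreg.SetValueEx(key, "NoDriveTypeAutoRun", 0, winreg.REG_DWORD, 0xFF)

    print("AutoRun disabled for all drive types (takes effect after sign-out).")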

Robert Siciliano CSP, CSI, CITRMS is a security expert and private investigator with 30+ years of experience, #1 Best Selling Amazon author of 5 books, and the architect of the CSI Protection certification, a Cyber Social Identity and Personal Protection security awareness training program. He is a frequent speaker and media commentator, and CEO of Safr.Me and Head Trainer at ProtectNowLLC.com.

Think Your AI Pal is Harmless? Think Again. (Your Data is at Risk!)

AI companion apps, including AI girlfriend apps, present a range of security and privacy dangers that users should be aware of. These risks stem primarily from the intimate and personal nature of the interactions, the vast amount of sensitive data collected, and the profit-driven models of many of these applications.

Robert and his cohost, retired CIA spy Peter Warmka, discuss artificial intelligence girlfriends on the latest episode of their podcast, The Security Guy and CIA Spy:

Apple Podcasts

Spotify Podcasts

Here’s a breakdown of the key concerns:

Privacy Dangers:

  • Extensive Data Collection: AI companions are designed to learn about you to provide a more personalized experience. This means they collect a massive amount of personal data, including:
      ◦ Conversational Content: Every word you type or speak to the AI is recorded. This can include highly sensitive information about your thoughts, feelings, relationships, health, financial situation, work, and more.
      ◦ User Profile Information: Your IP address, location, phone number, log-on data, device information, browser cookies, and network activity are often captured.
      ◦ Inferred Data: The AI can infer additional details about you based on your conversations, such as your emotional state, interests, preferences, and even vulnerabilities.
  • Data Storage and Retention: This vast amount of sensitive data is stored on company servers, often indefinitely. Even if you delete chats, the data may still be retained for training the AI models.
  • Sharing with Third Parties: Many AI companion apps, being for-profit enterprises, monetize user relationships. This often involves sharing user data with third parties for targeted advertising or with data brokers. A review of popular AI companion apps showed that a significant majority use data for tracking and may link user data with third-party data from other apps and websites.
  • Lack of Transparency: Privacy policies can be lengthy, complex, and difficult for users to understand, making it hard to give truly informed consent about how their data will be used. Some apps are not transparent about how their AI systems are designed or moderated.
  • Data Sovereignty and Compliance Risks: If an AI app stores data in different jurisdictions or has vague privacy terms, your data could be routed through servers in regions with less stringent regulations, increasing exposure to risks.
  • Re-identification of Anonymized Data: Even if data is purportedly anonymized, there’s always a risk that with enough contextual information, seemingly anonymous data can be de-anonymized.
  • Voice Data Misuse: If voice interaction is enabled, collected voice recordings could be misused or even used to create voice deepfakes.

Security Dangers:

  • Data Breaches: Any system that stores large amounts of sensitive data is a target for cybercriminals. If an AI companion app’s servers are compromised, all the personal and intimate data you’ve shared could be exposed, leading to:
      ◦ Identity Theft: Attackers could use leaked personal information for identity theft.
      ◦ Financial Loss: Sensitive financial details, if shared, could lead to financial fraud.
      ◦ Reputational Damage: Highly personal and embarrassing information could be leaked, causing significant reputational harm.
      ◦ Emotional Distress: The violation of privacy and potential exposure of intimate conversations can cause immense emotional distress.
  • Weak Security Practices: Free or low-cost AI apps, in particular, may lack enterprise-grade security and rigorous security testing, creating vulnerabilities for cybercriminals. This includes:
      ◦ Insufficient Encryption: Data in transit and at rest may not be adequately encrypted, making it easier for adversaries to intercept sensitive information (a minimal sketch of encrypting data at rest follows this list).
      ◦ Software Vulnerabilities: Flaws in the app’s code or underlying infrastructure can be exploited by hackers to gain unauthorized access.
      ◦ Insecure Data Storage: Inadequate security protocols for data storage (e.g., unencrypted backups) can leave data exposed.
  • Prompt Injection and Manipulation: Attackers can use cleverly crafted prompts to manipulate the AI into revealing unintended information or performing malicious actions. While AI developers implement safeguards, these are constantly evolving.
  • Malware and Ransomware Spread: A compromised chatbot could be used to spread malware or ransomware to users’ devices.
  • Impersonation and Repurposing: A chatbot could be hacked and repurposed by malicious actors, leading users to reveal private data to an attacker while believing they are interacting with the legitimate service.
  • Training Data Poisoning: Malicious data could be introduced into the AI’s training set, altering its behavior or responses to be harmful or biased.
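
To make the “Insufficient Encryption” point above concrete, here is a minimal sketch of what encrypting chat data at rest can look like, using the third-party cryptography package (pip install cryptography). The chat_log variable and the in-script key are illustrative assumptions; a real service would keep keys in a key-management system, and nothing here reflects any particular vendor’s implementation.

    """Sketch: authenticated encryption of sensitive text before it is stored."""
    from cryptography.fernet import Fernet

    # Illustrative only: a real service would fetch this key from a
    # key-management service, never store it next to the data it protects.
    key = Fernet.generate_key()
    fernet = Fernet(key)

    # A hypothetical snippet of the kind of sensitive text these apps store.
    chat_log = b"user: I've been feeling really lonely lately..."

    token = fernet.encrypt(chat_log)           # ciphertext safe to write to disk
    assert fernet.decrypt(token) == chat_log   # round-trip check
    print("stored on disk:", token[:32], b"...")

An attacker who steals the backup files then gets only ciphertext, not your conversations, which is precisely the protection an unencrypted app fails to provide.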

Other Significant Risks (Beyond direct security/privacy breaches):

  • Emotional Dependency and Social Withdrawal: The constant availability, patience, and non-judgmental nature of AI companions can lead to users forming deep emotional attachments, potentially reducing time spent on genuine human interactions and contributing to feelings of loneliness and social withdrawal.
  • Unhealthy Relationship Attitudes: Interactions with AI companions lack real-world boundaries and consequences, which can confuse users about mutual respect, consent, and healthy relationship dynamics.
  • Exposure to Harmful Content: Despite filters, some AI companions have been reported to engage in or generate sexually suggestive or inappropriate content, and can even provide inaccurate or dangerous advice on sensitive topics like self-harm, drug use, or mental health. This risk is particularly pronounced for younger, vulnerable users.
  • Misinformation and Hallucinations: AI can sometimes “hallucinate” or provide inaccurate information, which can be dangerous if users rely on it for serious life decisions (e.g., medical, financial, or relationship advice).
  • Algorithmic Bias: AI systems can unintentionally reflect biases present in their training data, leading to stereotypical or unsettling replies.

What users can do to mitigate risks:

  • Be Mindful of Shared Information: Avoid disclosing highly sensitive or personal information that you wouldn’t want publicly exposed.
  • Read Privacy Policies: While often complex, try to understand how your data will be collected, stored, and used.
  • Adjust Privacy Settings: Opt out of data collection for model training or data sharing if the app offers these options.
  • Use Strong Security Practices: Create strong, unique passwords (see the password sketch after this list), enable two-factor authentication if available, and keep your device’s operating system updated.
  • Consider Local Processing: If available, choose apps that process AI on your device rather than sending all data to the cloud.
  • Be Skeptical of Advice: Do not rely on AI companions for critical advice on health, finance, or relationships. Always cross-check information with verified sources or human professionals.
  • Maintain Real-World Connections: Remember that AI companions are not a substitute for genuine human relationships.
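
As a small illustration of the “strong, unique passwords” advice above, this sketch generates one with Python’s standard-library secrets module; the 20-character length and the character set are assumptions you can adjust. Store the result in a password manager rather than reusing it across accounts.

    """Sketch: generate a cryptographically secure random password."""
    import secrets
    import string

    def make_password(length: int = 20) -> str:
        # secrets.choice draws from the OS's secure random source,
        # unlike the predictable random module.
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return ''.join(secrets.choice(alphabet) for _ in range(length))

    print(make_password())  # e.g. 'k7#Qz}fM...' -- unique per account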

How Loneliness Attracts Scammers

The significance of loneliness cannot be overstated. Loneliness is a widespread global issue, affecting a significant portion of the population. While exact numbers vary depending on the study, methodology, and definition of loneliness, it is estimated that as much as 25% of all humanity experiences loneliness on a regular basis. That means there is a mega market for this type of product, and therefore for this type of vulnerability. Here’s a general overview of what recent data indicates:

Global Statistics:

  • Approximately 33% of adults worldwide report experiencing feelings of loneliness.
  • Nearly one in four adults globally (around 24%) reported feeling “very lonely” or “fairly lonely” in a recent Meta-Gallup survey covering over 140 countries. This translates to more than a billion individuals.

United States Statistics:

  • In the U.S., about 20% of adults reported feeling lonely “a lot of the day yesterday” as of late 2024.
  • Other surveys suggest that around one in three Americans (33%) experience loneliness on a regular basis.
  • 30% of adults reported experiencing feelings of loneliness at least once a week in early 2024, with 10% experiencing it every day.

Loneliness by Age Group (a common trend observed globally):

  • Younger Adults (18-34/45 years old): This demographic often reports the highest rates of loneliness.
  • Generation Z (18-24/29): Studies frequently show Gen Z as the loneliest generation, with anywhere from 53% to 79% reporting feelings of loneliness.
  • Millennials: Also report high levels of loneliness, with some studies indicating around 72%.
  • 30% of Americans aged 18-34 report feeling lonely every day or several times a week.
  • Middle-Aged Adults: Loneliness tends to decrease through middle adulthood.
  • Older Adults (65 and older): Contrary to popular belief, older adults often report lower levels of loneliness compared to younger age groups, with rates typically around 17%. This is often attributed to having more established social bonds. However, loneliness can see a slight increase again in the “oldest old” age group (e.g., over 80), particularly due to factors like loss of loved ones, health issues, and mobility limitations.

Other Factors Influencing Loneliness:

  • Marital Status: Single adults are nearly twice as likely to report feeling lonely compared to married adults.
  • Income: Lower-income individuals often experience higher rates of loneliness.
  • Race/Ethnicity: Some studies indicate higher loneliness rates among certain racial and ethnic minority groups.
  • Health: Individuals with poorer physical and mental health, or those with disabilities, are more likely to experience loneliness.
  • Technology: While technology can connect people, many also feel it contributes to loneliness due to superficial interactions and constant social comparison.

It’s important to remember that loneliness is a subjective experience, and these statistics represent self-reported feelings across diverse populations. The COVID-19 pandemic significantly impacted loneliness levels, with initial increases, though some recent reports suggest a decline from pandemic peaks. The U.S. Surgeon General has even declared loneliness a public health epidemic.

Lonely individuals, seeking connection, may overshare deeply personal information with AI companions. This sensitive data, often stored on insecure platforms, creates significant privacy risks, making users vulnerable to data breaches, manipulation, and targeted exploitation by companies or malicious actors.

Robert Siciliano CSP, CSI, CITRMS is a security expert and private investigator with 30+ years of experience, #1 Best Selling Amazon author of 5 books, and the architect of the CSI Protection certification, a Cyber Social Identity and Personal Protection security awareness training program. He is a frequent speaker and media commentator, and CEO of Safr.Me and Head Trainer at ProtectNowLLC.com.