Why Does Bob From HR Need Your Email Password? Because Bob’s Not Bob.
Generative AI and the Deepfake Problem
Oct. 21, 2024
Your company’s employees are one important line of defense against cyber attacks, but without proper security awareness training, they can be the weakest one too – especially as generative AI and sophisticated deepfakes make scams harder to detect.
Old Tactics, New Technologies
Phishing email scams have been around for decades, and in the years since, scammers have refined what works into something far more effective.
Many people first fell victim to email phishing in the early 2000s, when it was a relatively new phenomenon – and it continues today. Tempted by a monetary reward or frightened by a legitimate-sounding claim from a seemingly trustworthy source, recipients fell right into these text traps, no matter how obviously fake the messages look in hindsight.
The infamous “Nigerian Prince” emails came from someone claiming to be a foreign dignitary, usually from Nigeria, who needed help transferring a large sum of money out of the country. The scammer would promise a large reward in exchange for the victim’s assistance – but first, the victim had to provide money to cover “legal fees” or other related costs. Of course, no reward was ever received.
Other common scams included fake banking emails, lottery or prize winnings scams, account verification scams, fake virus alerts, IRS tax refund scams, job offer scams, and fake charity scams that typically followed major natural disasters.
Most of these phishing tactics are all too familiar today – fueled by new technologies and delivery mechanisms, scams have evolved into social engineering schemes that carry a higher cost for victims and the companies they work for.
The human prince is now an AI-generated deepfake – and he’s sounding a lot more convincing these days.
Scammers know that these days most people will no longer respond to a Nigerian Prince email. But when victims get smarter, scammers get craftier.
Generative AI can now take what was once a poorly worded, human-generated email rife with grammatical errors and obviously from a foreign source, and turn it into a regionalized, professional, legitimate-sounding communication with few, if any, errors or inconsistencies.
Furthermore, deepfakes are now a widely used tactic for placing convincing phone calls or even video calls while posing as a real person known to the victim. These social engineering scams are designed to fool people into thinking they’re dealing with someone trustworthy and legitimate, making them all the more successful – and ultimately profitable – for the scammer.
Bad actors often perform reconnaissance before deploying a phishing scam, researching company employees and breaking into databases to gain access to email addresses and phone numbers.
Consider an example from a recent client engagement in which we conducted a red team assessment. For the penetration testing component, we used compromised employee credentials to execute caller ID spoofing.
We called a help desk employee from a spoofed employee office number, pretended to be that employee, and convinced the help desk we had been locked out of our account and needed a new password. That’s all it takes for many attackers, and with deepfakes providing near-accurate voice or image likenesses, these scams become all the more effective.
At any point, your employees could receive a video call or phone call from Bob from HR asking them to share their account password to resolve an urgent issue. But Bob isn’t Bob – he’s a deepfake created by cybercriminals leveraging sophisticated, publicly available tools to gain access to your company’s sensitive information.
As these social engineering technologies become more accessible to scammers, the barrier to entry shrinks, making sophisticated attacks easier to deploy and far more effective.
Microsoft’s VASA-1 Research – Friend or Foe?
In our recent report, Expert Voices: Combat Cyber-Anxiety With More Powerful Security, we discuss Microsoft’s VASA-1 research, which the company describes as “an AI model” that produces “lifelike audio-driven talking faces generated in real time”.
Microsoft says the VASA-1 research is intended to help tech players stay ahead of generative AI evolutions, but concerns remain about the potential for misuse, since VASA-1 makes it easier to create deepfakes.
VASA-1 remains under Microsoft’s responsible guard, which is reassuring, and while it is currently only in the research stage, it could see public use in the future. Microsoft has acknowledged the potential for misuse but emphasizes the benefits its research could achieve, such as enhancing educational equity, improving accessibility for individuals with vision or hearing impairments or other communication challenges, and providing therapeutic support or companionship to people in need.
For all the good it can do, we know that if something can be exploited, it most likely will be – making the case for security awareness training all the more imperative.
As long as these convincing AI-generated deepfakes are out there, the responsibility for due diligence falls on companies, employees, and everyday citizens to be smart and aware.
How You Can Avoid the Scam
While we all probably like to think we’re smart enough to spot a scam, it should be clear by now that scammers are putting a lot of money, time, and effort into making sure you can’t.
Simply being paranoid about getting scammed and breached isn’t a realistic or effective option, so we recommend being proactive instead with these preventive methods for protecting your organization:
- Implement Multi-Factor Authentication (MFA): With MFA across all accounts, your employees get an additional layer of security – if a bad actor attempts to access an account, the legitimate employee is notified on another device and can deny the unauthorized access.
- Zero-Trust Policies: A zero-trust approach means every request for sensitive information is vetted and verified, regardless of who it comes from – or appears to come from.
- Encourage Vigilance: If employees feel empowered to question suspicious requests, they’re more likely to do so. Often, if a communication appears to come from within the organization and employees don’t feel they can raise a hand, they won’t. Creating a culture of vigilance translates to preventive action.
- Regular Training: Conduct training at least annually, and give employees up-to-date information about the latest phishing and social engineering techniques, including how to spot deepfakes and AI-generated content.
Educating employees can mean the difference between falling for a social engineering scam and hanging up the phone. After our consultant successfully convinced the help desk worker to reset the password, we walked our client through a few easy ways to verify a phone call, even when the caller ID is correct.
Call the number back, request a video call, or issue an app-based MFA challenge that the caller must approve while still on the line. Depending on the level of access a bad actor already has, these methods aren’t 100% foolproof, but they are extra precautions that can easily stop an attacker from taking advantage of help desk employees and infiltrating private networks and information.
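The verification steps above can be sketched as a simple decision flow. This is a minimal, illustrative sketch of such a help-desk policy – the function names, checks, and two-check threshold are our assumptions, not any specific product’s API:

```python
# Hypothetical sketch of a help-desk identity-verification policy:
# require at least two independent out-of-band checks to pass before
# honoring a password-reset request. All names here are illustrative.

def verify_caller(callback_confirmed: bool,
                  video_confirmed: bool,
                  mfa_push_approved: bool) -> bool:
    """Return True only if at least two independent checks succeed.

    callback_confirmed -- help desk hung up and called the number on file
    video_confirmed    -- a live video call matched the employee on record
    mfa_push_approved  -- the employee approved an app-based MFA push
                          while still on the line
    """
    checks_passed = sum([callback_confirmed, video_confirmed, mfa_push_approved])
    return checks_passed >= 2

def handle_reset_request(callback_ok: bool, video_ok: bool, mfa_ok: bool) -> str:
    if verify_caller(callback_ok, video_ok, mfa_ok):
        return "proceed with password reset"
    return "escalate to security team"

# A spoofed caller ID alone passes none of these checks, so the
# request from our red team exercise would have been escalated:
print(handle_reset_request(False, False, False))
```

The key design point is that no single signal – least of all the caller ID – is trusted on its own; each check happens over a channel the attacker would have to compromise separately.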
“DirectDefense is a thought partner, but they’re also in the trenches with your team – a partner and an extension of your cybersecurity posture. They’re so integrated with how you do things that it makes for a better outcome in making sure there is continuity in how you address risks and battle incoming threats.”
– VP and CIO
Marine Recreation & Technology Company
Invest in an MSSP
Investing in an MSSP is a great way to strengthen your overall security program and help prevent social engineering scams in the first place.
Where your resources to staff a 24/7 SOC may fall short, an MSSP can help you identify, monitor, investigate, analyze, and act on security vulnerabilities through customized, consultative services. An MSSP partnership won’t upend your existing security program – we enhance it by integrating with third-party investments and existing solutions.
As generative AI and deepfakes bring new sophistication to social engineering attacks, every organization is vulnerable. One of the most effective preventive steps is investing in an MSSP, freeing up your staff to focus on implementing and operationalizing strategic organizational initiatives.
Talk to us about a managed security solution to strengthen your security posture – and help you combat generative AI and deepfakes.