Ferris Bueller famously said, “Life moves pretty fast. If you don’t stop and look around once in a while, you could miss it.” CISOs, CIOs and other IT leaders grappling with the explosion of Social Engineering – the attack surface now favored by cyber criminals seeking to breach the data of all kinds of organizations – know this is true. Even as many InfoSec leaders scramble to catch up to the Social Engineering threat, new technology is making an already difficult defense even more problematic: artificial intelligence, or “AI.”
Whether to extort ransom via ransomware attacks, steal intellectual property or simply gather personally identifiable information (PII) to sell to others, Social Engineering has become the method of choice for hackers seeking unauthorized entry to protected data stores. Read this Privacy Bee White Paper on the risks of social engineering. The strategy is highly effective, which explains why the bad guys have been leaning heavily into it. Boiled down, social engineering relies on tapping into the PII of employees within a target organization and using those details to generate solicitations – spoofed emails, phony SMS texts, etc. – that look so legitimate they trick the recipient into divulging sensitive information that can compromise security.
KnowBe4, a leader in security awareness training for corporate organizations, shares research revealing a 135% increase in novel social engineering attacks in January and February 2023, driven by generative AI producing solicitations far more difficult to detect than those written without it. Before AI entered the picture, attack messages were easier to spot: poor grammar, misspellings and stilted phrasing were obvious tells when spear phishing messages were drafted by hackers in non-English speaking countries. With the power of AI platforms like ChatGPT and others, malicious campaigns are far more believable, as AI produces language that is nearly indistinguishable from legitimate communications.
It is not just the quality of the attack messages that makes AI a serious threat; it is also the speed at which AI can generate spear phishing and whaling attacks. What used to take hours of research and custom message crafting can now be automated at massive scale, meaning highly tailored attacks can target every employee, not just a select few executives.
The anecdotal evidence of the rise of AI-fueled novel social engineering attacks is convincing too. The reported 135% increase noted above correlates with the adoption rate of ChatGPT in the first two months of 2023. Over the same period, there was a corresponding decrease in the number of malicious emails sent with a link or attachment. It appears that as the quality of spoofed messages improves, there is less need to compel a target to click a phony link or open an infected attachment.
Clicking links and opening attachments are risks organizations and their employees are already aware of. According to a survey of corporate sector employees by cybersecurity firm Darktrace, the top three characteristics suggesting an email may be a spear phishing attempt are:
- Being invited to click a link or open an attachment (68%)
- Receiving a message from an unknown or unexpected sender (61%)
- Poor grammar, spelling, and sentence structure (61%)
Unfortunately, ChatGPT and other AI platforms are being used by threat actors to produce phishing attacks that avoid all three of these identifiers.
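To see why that matters for defenders, consider a minimal, hypothetical sketch (in Python) of the kind of rule-based screen those three survey findings imply. The regex, the threshold and the `known_senders` set are illustrative assumptions for this example, not any vendor's actual filter.

```python
import re

# Hypothetical rule-based screen keyed to the three survey indicators
# above. The regex, threshold and known_senders set are illustrative
# assumptions, not a production filter.
LINK_OR_ATTACHMENT = re.compile(r"https?://|click here|see attached", re.I)

def spear_phishing_indicators(sender: str, body: str,
                              known_senders: set,
                              misspelled_ratio: float) -> list:
    """Return which of the three classic indicators fire for a message."""
    hits = []
    if LINK_OR_ATTACHMENT.search(body):
        hits.append("invited to click a link or open an attachment")  # 68%
    if sender.lower() not in known_senders:
        hits.append("unknown or unexpected sender")                   # 61%
    if misspelled_ratio > 0.05:  # crude stand-in for a grammar check
        hits.append("poor grammar, spelling and sentence structure")  # 61%
    return hits

# An AI-polished message: fluent text, no link, sent from a spoofed
# address that mimics a known contact -- none of the rules fire.
print(spear_phishing_indicators(
    sender="ceo@example.com",
    body="Per our call, please process the vendor payment today.",
    known_senders={"ceo@example.com"},
    misspelled_ratio=0.0,
))  # -> []
```

A fluent, AI-polished message sent from a spoofed but familiar-looking address, with no link or attachment, passes a screen like this untouched, which is precisely why message-level heuristics alone are losing ground.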
AI is also being applied to other attack vectors. Beyond the generative AI (like ChatGPT) used to craft effective phishing messages, AI can be used to produce so-called “deep fakes”: videos in which a corporate CEO’s likeness is manipulated by AI to deliver instructions to employees, directing the release of information or even authorizing payments to accounts maintained by the attackers.
With InfoSec leaders already behind the eight ball on Social Engineering, the bad guys’ embrace of AI leaves organizations even further behind when it comes to protecting against data breaches and other cyber attacks. The good news, according to data privacy expert Blanton Jimerson of Privacy Bee, is that there are solutions for protecting data privacy as innovative as the emerging AI threat.
To move at the speed of cyber criminals and their new AI secret weapon, Jimerson recommends engaging the preventative measures offered by the Privacy Bee data privacy platform. Cyber security solutions – whether software or behavior/training based – are not enough by themselves to keep the extensive volumes of external data (the PII of every single employee and vendor serving an organization) from being exploited by hackers, especially those using AI to automate social engineering.
“The real challenge,” says Jimerson, “is how to remove all the external data on an entire organization’s workforce from the numerous sources where it is available on the internet. There are hundreds (and growing) of known Data Brokers and People Search Sites where PII is available for sale to anyone with a few dollars and bad intent.” (Learn more about this threat in the Privacy Bee white paper, “Exposing the Threat to Data Privacy Posed by Data Brokers and People Search Sites.”) Then there are public sources like corporate websites, social media profiles, search engines and other public records sites. Hackers rely heavily on these pools of PII to craft their spear phishing attacks, and AI makes the task even easier. “The only solution is to identify where the preponderance of the external data lives and then remove it from the internet,” says Jimerson.
That is precisely what Privacy Bee does! With tools for scanning the entire web and building risk profiles for an organization (including all its employees and vendors), Privacy Bee shows client organizations where the unsecured data rests. These scans and profile-building tools are free to use! From there, an organization can engage its internal resources to manage the process of requesting and verifying/enforcing removal from hundreds of sites, or it can engage Privacy Bee services to perform this labor-intensive legwork on its behalf.
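For illustration only, here is a rough sketch of what that request-and-verify removal workflow might look like as data. The broker names, field names and helper function are assumptions made for this example, not Privacy Bee’s actual tooling.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative sketch of the request-and-verify removal workflow.
# Broker names and fields are assumptions for this example, not
# Privacy Bee's actual tooling.
@dataclass
class RemovalRequest:
    employee: str
    broker: str             # e.g., a People Search Site hosting the PII
    submitted: date
    verified: bool = False  # flipped to True once the listing is gone

def plan_removals(employees, exposure_by_employee):
    """One opt-out request per employee per broker where PII was found."""
    today = date.today()
    return [RemovalRequest(e, b, today)
            for e, brokers in exposure_by_employee.items()
            for b in brokers
            if e in employees]

exposure = {"a.smith": ["peoplesearch.example", "broker.example"],
            "b.jones": ["broker.example"]}
queue = plan_removals({"a.smith", "b.jones"}, exposure)
print(f"{len(queue)} removal requests to submit, then re-verify")  # 3
```

The `verified` flag reflects the point above: removal has to be verified and enforced across hundreds of sites, not merely requested once.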
With things moving as quickly as Ferris Bueller warned, organizations should engage in aggressive external data privacy management to neutralize the emerging AI threat.