AI-generated malware dubbed BlackMamba was able to bypass cybersecurity technologies such as industry-leading endpoint detection and response (EDR) in an experimental project led by researchers at HYAS.
While BlackMamba was only tested as a proof of concept and has not been seen in the wild, its existence means that AI will unequivocally change the threat landscape for both individuals and organizations.
“Since a platform like ChatGPT can simulate human-like responses, it can be used to trick people into divulging sensitive information or clicking on malicious links.” – Shomiron Das Gupta, Cybersecurity Entrepreneur and Threat Analyst
Cybersecurity providers already take advantage of AI to detect unusual data patterns within a network and uncover cyberattacks in progress.
In fact, organizations using this technology have a “74-day shorter breach life cycle,” according to an IBM study. This means that AI and automation help stop a breach before it causes further damage.
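For a sense of how this works in practice, here is a minimal sketch of AI-driven anomaly detection built on scikit-learn’s IsolationForest. The traffic features, baseline distribution, and anomaly rate are illustrative assumptions, not details from any vendor’s product:

```python
# Minimal sketch: flagging unusual network connections with an Isolation Forest.
# Feature columns and values are hypothetical; real tools use far richer telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic baseline of ~200 ordinary connections:
# columns are [bytes_sent, bytes_received, duration_seconds]
baseline = np.column_stack([
    rng.normal(1_200, 150, 200),
    rng.normal(3_300, 300, 200),
    rng.normal(2.0, 0.3, 200),
])

# Train on known-good traffic; contamination is the expected anomaly rate.
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# Score new connections: predict() returns -1 for outliers, 1 for inliers.
new_connections = np.array([
    [1_150, 3_250, 2.1],   # looks like ordinary traffic
    [95_000, 400, 48.0],   # huge, long-lived upload -> possible exfiltration
])
for row, label in zip(new_connections, model.predict(new_connections)):
    print(row, "ANOMALY" if label == -1 else "ok")
```

An isolation forest flags points that are easy to separate from the rest of the data, which is why a single oversized upload stands out against a baseline of ordinary connections.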
With the popularization of AI tools such as ChatGPT, which can generate code, malicious actors have gained new capabilities for crafting attacks. The following are prominent examples of attacks powered by AI.
AI-Generated Videos: Malware Spread Through YouTube
AI is also helping cybercriminals deliver malware through trusted social media platforms such as YouTube. Malicious actors are creating AI-generated videos that appear to be tutorials for popular software programs like Photoshop and Premiere Pro.
The description sections of these videos offer viewers free versions of these otherwise expensive tools, tempting them to click links that spread stealer malware.
Stealer malware infects a system and exfiltrates data from it: login usernames, passwords, and other sensitive information are harvested from the target computer and sent back to the cybercriminals.
To prevent such attacks on your organization’s network, implement employee cybersecurity training. When your workforce understands the dangers of clicking malicious links or downloading pirated software, they will be prepared to avoid them.
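Technical controls can reinforce that training. The following is a hypothetical sketch of the kind of download-link check an email gateway or browser extension might run; the allow-list and keyword heuristics are assumptions for illustration, not a complete defense:

```python
# Minimal sketch of a download-link vetting check. The allow-list and
# heuristics below are illustrative assumptions, not a complete defense.
from urllib.parse import urlparse

# Hypothetical allow-list: official vendor domains for software your org uses.
TRUSTED_DOWNLOAD_DOMAINS = {"adobe.com", "microsoft.com"}

# Wording that commonly appears in links pushing pirated or trojanized software.
SUSPICIOUS_KEYWORDS = ("free", "crack", "keygen", "activated")

def vet_download_link(url: str) -> list[str]:
    """Return a list of red flags found in a download URL (empty = none found)."""
    flags = []
    parsed = urlparse(url)
    host = parsed.hostname or ""

    if parsed.scheme != "https":
        flags.append("not served over HTTPS")

    # Accept only a trusted vendor domain or one of its subdomains.
    if not any(host == d or host.endswith("." + d) for d in TRUSTED_DOWNLOAD_DOMAINS):
        flags.append(f"untrusted download domain: {host}")

    if any(word in url.lower() for word in SUSPICIOUS_KEYWORDS):
        flags.append("'free/cracked software' wording in URL")

    return flags

print(vet_download_link("https://downloads.adobe.com/photoshop/installer.exe"))
# -> [] (no red flags)
print(vet_download_link("http://free-photoshop-cracked.example/download"))
# -> three red flags: plain HTTP, untrusted domain, pirated-software wording
```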
AI-Powered Phishing Attacks: More Enticing Lures
Another use bad actors have found for AI is creating highly targeted phishing emails that lure recipients into clicking malicious links or downloading malware.
Usually, cybercriminals use information people post about themselves on social media or data acquired from a breach to craft emails that seem to come from a trusted source.
The emergence and increasingly widespread use of AI language tools like ChatGPT lets bad actors feed someone’s personal or company data into the AI and ask it to craft an email tailored to that target.
Because the AI writes convincing, human-sounding emails far faster than any person could, phishing attacks improve not only in accuracy but also in speed.
Prompt your employees to check any email coming from an external source against phishing red flags and to report suspicious messages to your IT or cybersecurity team. Additionally, enforcing multi-factor authentication (MFA) across your company helps keep accounts secure even if credentials are exposed.
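To make the MFA recommendation concrete, here is a minimal sketch of time-based one-time password (TOTP) verification using the open-source pyotp library. The user name, issuer, and secret shown are placeholder assumptions; production systems provision a unique secret per user and keep it in a protected datastore:

```python
# Minimal sketch of TOTP-based MFA verification (RFC 6238) using pyotp.
# The secret and account details are placeholders for illustration only.
import pyotp

# Enrollment: generate a secret and share it with the user's authenticator app,
# typically via a QR code that encodes this provisioning URI.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Provisioning URI:",
      totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCorp"))

# Login: even with a stolen password, an attacker still needs the current code.
def second_factor_ok(user_secret: str, submitted_code: str) -> bool:
    # valid_window=1 tolerates one 30-second step of clock drift.
    return pyotp.TOTP(user_secret).verify(submitted_code, valid_window=1)

print(second_factor_ok(secret, totp.now()))  # True: code from the authenticator
print(second_factor_ok(secret, "000000"))    # almost certainly False
```

This is why MFA blunts AI-assisted phishing: the one-time code expires within seconds, so a password harvested by a convincing email is not enough on its own to take over the account.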