The cyber security firm said the email attacks targeted thousands of its customers in January and February 2023, an increase that matches the adoption rate of ChatGPT.
The novel social engineering attacks use "sophisticated linguistic techniques," which Darktrace said include greater text volume, longer sentences, and more complex punctuation in emails.
It makes sense when you think about it: Nigerian prince spam is suddenly better spelt and better constructed.
At the same time, Darktrace found a decrease in the number of malicious emails carrying an attachment or link. Taken together, the two trends suggest that malicious actors are using generative AI, including ChatGPT, to rapidly construct targeted attacks that rely on convincing prose rather than an obvious payload.
Survey results indicated that 82 per cent of employees are worried about hackers using generative AI to create scam emails indistinguishable from genuine communication, and a third of employees admitted to having fallen for a scam email or text in the past. Darktrace also asked respondents for the top three characteristics that suggest an email is a phish:
- 68 per cent said being invited to click a link or open an attachment
- 61 per cent said an unknown sender or unexpected content
- 61 per cent said poor spelling and grammar
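The worry behind the findings can be made concrete with a toy sketch. The code below is purely illustrative (the field names and misspelling list are invented for the example, not anything Darktrace uses): it scores an email against the three reader-reported red flags above, and shows that an AI-polished scam with no link and clean spelling trips only one of them.

```python
# Illustrative sketch only: a toy scorer for the three red flags survey
# respondents cited. Field names and the misspelling list are hypothetical.
COMMON_MISSPELLINGS = {"recieve", "acount", "verifcation", "urgant"}

def phishing_signals(email: dict) -> list[str]:
    """Return which of the three reader-reported red flags an email trips."""
    signals = []
    if email.get("has_link") or email.get("has_attachment"):
        signals.append("link or attachment")   # cited by 68 per cent
    if email.get("sender_known") is False:
        signals.append("unknown sender")       # cited by 61 per cent
    words = set(email.get("body", "").lower().split())
    if words & COMMON_MISSPELLINGS:
        signals.append("poor spelling")        # cited by 61 per cent
    return signals

# A classic scam trips all three flags; an AI-polished one trips just one.
classic = {"has_link": True, "sender_known": False,
           "body": "please verifcation your acount"}
polished = {"has_link": False, "sender_known": False,
            "body": "Please confirm your account details at your convenience."}
```

The point of the sketch is that generative AI erases the third signal entirely and, per Darktrace's data on falling attachment and link use, weakens the first, leaving readers with little beyond sender identity to go on.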
Over the last six months, 70 per cent of employees reported an increase in the frequency of scam emails, and 79 per cent said that their organisation's spam filters wrongly block legitimate emails from reaching their inbox. A further 87 per cent said they were worried about the amount of their personal information available online, which could be used in phishing and other email scams.