Cybersecurity firm SlashNext is sounding the alarm over rising cyberattacks amid the influx of artificial intelligence (AI) models. The company's CEO, Patrick Harr, attributes the growing concern to evolving attack techniques enabled by generative AI.

Harr laments that generative AI has both transformed and disrupted aspects of everyday life, and acknowledges that cybercriminals are leveraging the technology to devise new attack mechanisms.

He considers the surge in phishing scams built on this technology an unintended consequence of its success. A SlashNext report claims that phishing emails have risen by 1,265% since ChatGPT's launch.

SlashNext CEO Claims that ChatGPT Triggers Rise of Phishing Emails

In addition to creating malware-focused AI tools such as Dark Bart, WormGPT, and FraudGPT, which are now prevalent on the dark web, cybercriminals are also finding new ways to jailbreak the safeguards of OpenAI's leading AI chatbot.

Harr claimed that ChatGPT's release in late 2022 significantly increased the number of phishing attacks, and linked the attacks' high volume in part to the chatbot's rapid adoption and widespread use.

Phishing attacks are cyberattacks delivered via texts, emails, or social media messages that appear to originate from trustworthy sources. The attacks can also direct victims to malicious sites that dupe them into signing transactions with their crypto wallets, which then drain their funds.

SlashNext's report indicates that in the last quarter of last year, daily phishing attacks numbered 31,000, representing a 967% increase in credential phishing. The report also showed that 68% of the phishing attacks were text-based business email compromise (BEC) attempts, while 39% of mobile-based attacks occurred through SMS phishing.

Impact of Generative AI Tools

Harr said that despite some debate over generative AI's real impact on cybercriminal activity, SlashNext's research shows that threat actors are using tools such as ChatGPT to write sophisticated, targeted business email compromises and other phishing messages.

At their core, Harr explained, these are link-based attacks aimed at compelling victims to hand over their usernames and passwords. He also noted that phishing attacks can lead to the installation of more persistent ransomware.

The Colonial Pipeline credential attack, for example, involved attackers gaining access to users' passwords and usernames. With cybercriminals now targeting victims using generative AI, Harr stressed the need for cybersecurity experts to fight AI with AI.

Harr said firms can embed the technology directly into their security programs so that it scans their messaging channels and continuously roots out threats. That, he argued, is precisely why generative AI should be used for detection, not just blocking.

It can also be used to predict how the next attack will occur. While Harr is hopeful about AI's ability to catch malicious AI, he admits that simply instructing ChatGPT to watch for threats is not enough.

Harr believes the equivalent of a private large language model (LLM) application, tailored to watch for malicious threats, is essential.
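To make the idea of a security program sweeping messaging channels concrete, here is a minimal, hypothetical sketch in Python. It is not SlashNext's product or an LLM; it stands in for the detection step with hand-written heuristics (suspicious phrases and embedded links), where a real system would call a trained model or a private LLM instead.

```python
import re

# Hypothetical illustration of channel scanning for phishing indicators.
# A production system would replace phishing_score() with a call to a
# trained classifier or a private LLM, as described in the article.

SUSPICIOUS_PHRASES = [
    "verify your account",
    "sign this transaction",
    "urgent action required",
    "your wallet has been suspended",
]

URL_PATTERN = re.compile(r"https?://\S+")

def phishing_score(message: str) -> int:
    """Crude risk score: +1 per suspicious phrase, +1 per embedded link."""
    text = message.lower()
    score = sum(phrase in text for phrase in SUSPICIOUS_PHRASES)
    score += len(URL_PATTERN.findall(text))
    return score

def scan_channel(messages: list[str], threshold: int = 2) -> list[str]:
    """One sweep of a messaging channel: return messages that look like phishing."""
    return [m for m in messages if phishing_score(m) >= threshold]
```

In practice such a sweep would run continuously against each channel, quarantining flagged messages rather than merely listing them.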

Factors Contributing to Phishing Scams

Although AI developers such as Anthropic, Midjourney, and OpenAI have built safeguards against using their platforms for malicious purposes, such as spreading disinformation and running phishing attacks, determined users keep finding ways around them.

Researchers have also found that prompting in less commonly tested languages, such as Gaelic and Zulu, can jailbreak ChatGPT, getting the chatbot to explain, for instance, how to get away with stealing from a store.

In September, OpenAI issued an open call for its Red Teaming Network, inviting offensive cybersecurity experts to help uncover security gaps in its AI models.

Harr concluded that firms should rethink their security postures: they must use AI-based tools to detect and respond to these threats, and to block and stop attacks before damage is done.

By Michael Scott

Michael Scott is a skilled and seasoned news writer with a talent for crafting compelling stories. He is known for his attention to detail, clarity of expression, and ability to engage his readers with his writing.