What is WormGPT? The new AI tool behind cyberattacks
In recent news, a dangerous AI tool named WormGPT has been gaining popularity on cybercrime forums within the dark web. Marketed as a “sophisticated AI model,” WormGPT is specifically designed to generate human-like text for hacking campaigns, enabling cybercriminals to execute attacks on an unprecedented scale.
According to cybersecurity expert Daniel Kelley, who published his findings through the security firm SlashNext, WormGPT was trained on a diverse range of data sources, with a particular emphasis on malware-related data. This training enables the tool to generate text that can be put to a variety of malicious uses.
The implications of WormGPT’s emergence are concerning for everyday internet users and businesses alike. One of the key issues is the speed and volume at which a language model like this can churn out scams.
The rapid text-generation capability of AI models, combined with the malicious purpose behind WormGPT, poses a significant threat. Attacks such as phishing campaigns can now be mounted easily, even by people with minimal cybercriminal skill.
Adding to the danger is the promotion of “jailbreaks” for ChatGPT, OpenAI’s popular AI language model: specially crafted prompts and inputs that manipulate the model into generating harmful content or revealing sensitive information. The consequences of such manipulation can be severe, including data breaches, the dissemination of inappropriate content, and the creation of harmful code.
Kelley pointed out that generative AI tools like WormGPT can produce emails with impeccable grammar, making them appear legitimate and less likely to be flagged as suspicious. This democratizes sophisticated Business Email Compromise (BEC) attacks, putting powerful hacking tools in the hands of a much broader spectrum of cybercriminals, including those with limited technical expertise.
While companies such as OpenAI (with ChatGPT) and Google (with Bard) are actively working to combat the misuse of large language models (LLMs), there are concerns about how effective these countermeasures really are.
A recent report by Check Point highlighted that Bard’s anti-abuse restrictions in the cybersecurity domain are significantly weaker than ChatGPT’s, making it easier to generate malicious content with Bard.
The introduction of WormGPT to the dark web follows a disconcerting trend. Researchers from Mithril Security recently revealed that they had successfully modified an existing open-source AI model to spread disinformation, dubbing the result PoisonGPT. The potential consequences of such AI technology are still largely unknown.
As AI has already demonstrated the ability to generate and spread disinformation, manipulate public opinion, and even influence political campaigns, the emergence of bootleg AI models like WormGPT only exacerbates the risks faced by unsuspecting users.
In conclusion, the rise of WormGPT on the dark web signifies a troubling development in the world of cybercrime. The ease with which this AI tool can generate realistic and malicious content poses a significant threat to cybersecurity.
As cyber threat actors find new ways to exploit AI technology, it becomes crucial for AI developers and cybersecurity experts to remain vigilant and take proactive measures to safeguard against potential abuses of AI language models.
Additionally, internet users and organizations must stay informed about these developments and implement robust security measures to protect themselves from the ever-evolving landscape of cyber threats.