WormGPT: Unethical AI Use for Cyberattacks


Introduction

Artificial intelligence systems such as large language models (LLMs) are undoubtedly helpful tools that can be utilised to reduce labour.

In an ideal world, users of these tools would operate only ethically. However, when something with such huge capability and scalability is created, the potential for exploitation must be evaluated. New technology initially lacks regulation, heightening concerns over misuse as industry and government try to keep pace without stifling innovation.

Malware Creation

Until recent years, the creation of effective malware was a task that only those with expert coding knowledge could take on. With the arrival of LLM-based chatbots such as ChatGPT, it is now possible for those with zero coding knowledge to word prompts cleverly and build malware tailored to their objectives.

The majority of chatbots use content filters and blockers to prevent misuse: a direct request for malicious code is typically refused.

Exploitation

Those who are determined can make use of loopholes to bypass these content filters. For example, using the OpenAI API directly, rather than going through ChatGPT, allows for unfiltered prompts and answers, as sketched below.
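To illustrate, here is a minimal, benign sketch of such a direct API call using OpenAI's Python client as it stood at the time (the model name, prompt, and placeholder key are illustrative assumptions, not taken from this post):

    # Requires the openai Python package (pre-1.0 interface shown)
    import openai

    # Authenticate with an account API key (placeholder)
    openai.api_key = "sk-..."

    # Send a prompt straight to the model endpoint,
    # bypassing the ChatGPT web interface and its wrapper
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Explain how phishing emails typically work."}],
    )

    print(response.choices[0].message.content)

Requests over the API are not wrapped in ChatGPT's safety layer, although they are still tied to an account.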

However, after repeated violations of its content policy, OpenAI will terminate the user’s account.

This is where a team on hackforums.net found a gap in the market and created WormGPT.

WormGPT

WormGPT has been in development since March of this year. It was released to the public at the end of June, priced at $550 per year or $100 per month.

It is advertised as the best GPT alternative for black hat hackers, and allegedly features:

  • Zero legal or ethical restrictions.
  • Privacy-focussed and entirely anonymous.
  • Payment by cryptocurrency.
  • Built on the GPT-J large language model and trained in areas of interest to black hat hackers (see the sketch after this list).
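GPT-J is EleutherAI's open-source, 6-billion-parameter language model; because its weights are freely downloadable, it can be run locally with no hosted provider's moderation layer. As a hedged sketch of what running the public base model looks like (this uses the Hugging Face transformers library and shows only the public checkpoint, not WormGPT's alleged additional training; loading it needs roughly 24 GB of memory):

    # Requires: pip install transformers torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "EleutherAI/gpt-j-6b"  # public GPT-J checkpoint on Hugging Face

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    # Generate a short, benign completion from the base model
    inputs = tokenizer("Large language models can", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=30)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The significance is that open-weight models come with no account, content filter, or terms-of-service enforcement, which makes them an attractive base for tools of this kind.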

WormGPT Usage Examples

This bot can be used to create malicious code, and the developers intended for it to be used in this manner.

The bot can also answer questions that allow the user to troubleshoot malicious code.

The creation of WormGPT foreshadows an environment where, as machine learning develops, basic hacking becomes widespread and easily accessible.

In one example, the user tries to identify a way of elevating their permission levels to gain access to systems that contain valuable information.

This bot lowers the barrier to entry for low-skilled cybercriminals and reveals the potential of AI-powered cybercrime. As machine learning continues to progress, so will these programs.

Summary

It is important to realise that misuse and exploitation are problems inherent to all new technologies. Organisations must therefore be proactive, rather than reactive, in their approach to cybersecurity. Relying on government intervention or regulation is not enough, as cyber security beyond the bare essentials needs to be tailored to your organisation. There are clearly groups actively aiming to exploit technological innovation for personal gain, with disregard for the law and a skewed moral compass.

It is ultimately the responsibility of the organisation to protect the data of its customers and employees. This is not something that can be dealt with after an attack, as by then the damage is already done. Vulnerabilities must be found and dealt with in a timely, proactive manner.

 

To understand how Melius Cyber can protect your organisation, please book a no-obligation consultation here.
