
‘Dark Web ChatGPT’ – Is your data safe?

With AI-powered scams on the rise, how can you keep your digital life safe from dark web ChatGPT?
Last Updated on July 31, 2023

The world’s favourite AI chatbot has found an incredible variety of use cases since its public release 8 months ago. It was only a matter of time before hackers turned this power to their own ends. With reports of a ‘Dark Web ChatGPT’ on the loose, how safe are we really?

What is behind the ‘Dark Web ChatGPT’ name?

WormGPT, the “blackhat alternative” to ChatGPT, is fast gaining popularity on the clandestine forums of the dark web. In hacking terminology, whitehat hacking refers to the legal, above-board practice of probing software for faults before bad actors can exploit them, typically funded by the company that created the software. Blackhat hacking is intentionally, and by definition, harmful to that company. Try “What is ChatGPT – and what is it used for?” or “How to use ChatGPT on mobile” for further reading on ChatGPT.


What is ‘Dark Web ChatGPT’ used for?

The malicious activities of cyber criminals seemingly know no bounds. What we do know is that the capabilities of an LLM (Large Language Model) like the OpenAI chatbot lend themselves to identity fraud, phishing attacks, social engineering scams, and writing malware code. In a recent blog post, Daniel Kelley of cybersecurity firm SlashNext explains: “The progression of artificial intelligence (AI) technologies, such as OpenAI’s ChatGPT, has introduced a new vector for business email compromise (BEC) attacks”. By automating “highly convincing fake emails, personalised to the recipient”, a bad actor greatly increases their chances of a successful scam.

As Group-IB’s Head of Threat Intelligence Dmitry Shestakov points out, “Given that ChatGPT’s standard configuration retains all conversations, this could inadvertently offer a trove of sensitive intelligence to threat actors if they obtain account credentials.”

Furthermore, “ChatGPT’s ability to draft highly authentic texts on the basis of a user prompt makes it an extremely useful tool for phishing purposes” warns European law enforcement agency Europol.

Are hacked ChatGPT accounts being sold on the dark web?

Cybersecurity firm Group-IB has confirmed that, since June 2022, over 100,000 login credentials of ChatGPT users have been leaked onto dark web marketplaces. The Asia-Pacific region, led by India and Pakistan, tops the list of most compromised accounts, followed by Brazil, Vietnam, Egypt, the United States, France, Indonesia, Morocco, and Bangladesh. Users in all regions should take steps to further protect their devices.

The name of the malicious software that enabled this data theft is Raccoon Infostealer. The good news is that your ChatGPT credentials will only have been compromised if you downloaded this software via email. The bad news is that info stealers (Vidar and Redline also contributing to this data breach) have become significantly more prevalent and easy to create in the past year.
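If you are worried a password of yours may already be circulating in a breach dump like those described above, services such as Have I Been Pwned let you check without ever sending the password itself, using a k-anonymity scheme: you hash the password locally with SHA-1 and transmit only the first five characters of the digest. As an illustration (not something from the article itself), here is a minimal Python sketch of the local, offline half of that check; the `pwned_prefix` function name is our own, not part of any official API.

```python
import hashlib

def pwned_prefix(password: str) -> tuple[str, str]:
    """Hash a password with SHA-1 and split the hex digest for a
    k-anonymity breach lookup. Only the 5-character prefix would ever
    leave your machine; a breach database returns all known suffixes
    sharing that prefix, and you check for your own suffix locally.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = pwned_prefix("password")
print(prefix)  # 5BAA6 -- the only part sent to the lookup service
```

The point of the scheme is that the lookup service never learns which password (or even which full hash) you were checking, only that it started with those five characters.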

How can you protect yourself from AI scams?

Password safety is more important than ever. Best practice for users of any AI service (Microsoft Bing, Google Bard, OpenAI’s ChatGPT, etc.) is to use a different, strong password for each website and service. To make this manageable, many web users allow their browser to remember passwords because, realistically, no one can memorise dozens of unique passwords without help.

Unfortunately, this creates a weak point: between retrieval from the server and submission of the login form, your account credentials exist within your browser’s cookies and saved-password store. This weak point can even be exploited in cryptocurrency scams, where the underlying blockchain technology may be sound but the way you access it is not. Consider two-factor authentication on any website or service you really care about. Shestakov of Group-IB concludes that the data scraping was realistically the result of “commodity malware on people’s devices and not an OpenAI breach”.

You should also not submit any sensitive information to an AI language model. Researchers have found evidence of AI companies using user-submitted prompt data to train their LLMs. As a result, companies including Google warn their employees never to submit sensitive data in a chatbot prompt.

Steve is an AI Content Writer for PC Guide, writing about all things artificial intelligence. He currently leads the AI reviews on the website.