The world’s favourite AI chatbot has found an incredible variety of use cases since its public release eight months ago. It was only a matter of time before hackers turned this power to their own ends. With reports of a Dark Web ChatGPT on the loose, how safe are we really?
Where does the ‘Dark Web ChatGPT’ name come from?
WormGPT, the “blackhat alternative” to ChatGPT, is fast gaining popularity on the clandestine forums of the dark web. In hacking terminology, whitehat hacking is the legal, above-board practice of probing software for vulnerabilities before bad actors can exploit them, typically funded by the company that made the software. Blackhat hacking is, by definition, intentionally harmful to that company. Try “What is ChatGPT – and what is it used for?” or “How to use ChatGPT on mobile” for further reading on ChatGPT.
What is ‘Dark Web ChatGPT’ used for?
The malicious activities of cybercriminals seemingly know no bounds. What we do know is that the capabilities of an LLM (large language model) like the OpenAI chatbot lend themselves to identity fraud, phishing attacks, social engineering scams, and writing malware code. In a recent blog post, Daniel Kelley of cybersecurity firm SlashNext explains: “The progression of artificial intelligence (AI) technologies, such as OpenAI’s ChatGPT, has introduced a new vector for business email compromise (BEC) attacks”. By automating “highly convincing fake emails, personalised to the recipient”, a bad actor greatly increases their chances of a successful scam.
As Group-IB’s Head of Threat Intelligence Dmitry Shestakov points out, “Given that ChatGPT’s standard configuration retains all conversations, this could inadvertently offer a trove of sensitive intelligence to threat actors if they obtain account credentials.”
Furthermore, “ChatGPT’s ability to draft highly authentic texts on the basis of a user prompt makes it an extremely useful tool for phishing purposes,” warns European law enforcement agency Europol.
Are hacked ChatGPT accounts being sold on the dark web?
Cybersecurity firm Group-IB has confirmed that, since June 2022, over 100,000 login credentials of ChatGPT users have been leaked onto dark web marketplaces. The Asia-Pacific region (India and Pakistan) tops the list of most compromised accounts, followed by Brazil, Vietnam, Egypt, the United States, France, Indonesia, Morocco, and Bangladesh. Users in all regions should take steps to further protect their devices.
The malicious software that enabled this data theft is called Raccoon Infostealer. The good news is that your ChatGPT credentials will only have been compromised if your device was infected with this malware, typically delivered via email. The bad news is that infostealers (Vidar and RedLine also contributed to this breach) have become significantly more prevalent and easier to create in the past year.
How can you protect yourself from AI scams?
Password safety is more important than ever. Best practice for users of any AI service (Microsoft Bing, Google Bard, OpenAI’s ChatGPT, etc.) is to use a different, strong password for each website and service. To make this manageable, many web users allow their browser to remember passwords because, realistically, no one can memorise them all without help.
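To show how little effort a strong, unique password takes to generate, here is a minimal sketch using Python’s built-in `secrets` module, which is designed for security-sensitive randomness (unlike the `random` module). The `generate_password` helper and the 16-character default are illustrative choices, not a standard:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Build a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# A different password for each service:
for service in ("chatbot", "email", "banking"):
    print(service, generate_password())
```

In practice, a password manager does this for you and stores the results securely, which is why the advice above pairs unique passwords with a tool to remember them.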
Unfortunately, this does create a weak point, between retrieval from the server and submission of the data, where your account credentials sit in your browser’s cookies and saved-password store. This weak point can even be exploited in cryptocurrency scams, where the underlying blockchain technology may be sound but the way you access it is not. Consider enabling two-factor authentication on any website or service you really care about. Shestakov of Group-IB concludes that the data theft was realistically the result of “commodity malware on people’s devices and not an OpenAI breach”.
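For readers curious what two-factor authentication actually does under the hood, here is a minimal sketch of the TOTP algorithm (RFC 6238, built on the HOTP scheme of RFC 4226) that authenticator apps implement, written with only Python’s standard library. The function names are illustrative, and a real setup would use a vetted library rather than hand-rolled crypto code:

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret_b32: str, interval: int = 30) -> str:
    """Time-based one-time password (RFC 6238): HOTP over a 30-second counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    return hotp(key, int(time.time()) // interval)
```

Because the six-digit code changes every 30 seconds and is derived from a secret that never leaves your device, a stolen password alone is not enough to log in.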
You also should not submit sensitive information to an AI language model. Researchers have found evidence of AI companies using data from user prompts to train their LLMs. As a result, companies including Google warn their employees never to submit sensitive data in a chatbot prompt.
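One pragmatic safeguard before pasting text into a chatbot is to scrub obviously sensitive substrings first. The sketch below is a hypothetical, minimal redaction pass using regular expressions; the patterns shown are assumptions for illustration, and a real deployment would need far broader coverage (names, API keys, addresses, and so on):

```python
import re

# Illustrative patterns only; real redaction needs much broader coverage.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely-sensitive substrings with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com, card 4111 1111 1111 1111"))
```

The safest policy is still the simplest one: if data would be damaging in a leak, keep it out of the prompt entirely.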