Can ChatGPT crack passwords or encryption?

AI chatbots such as OpenAI’s ChatGPT, Google Bard, and Microsoft Bing Chat all have one thing in common – natural language processing (NLP). This type of artificial intelligence allows for a new way to process anything that comes out of your computer keyboard. Letters, numbers, and special characters alike form the passwords we rely on to protect our privacy, assets, and digital life. Those same characters are the ones that large language models (LLMs) are trained on. So, is there some way ChatGPT can crack passwords or encryption? Let’s look at some ways cybercriminals can use the AI chatbot for password cracking.

Can ChatGPT be used to hack? – AI hacking explained

AI hacking is a relatively new concept. It brings a new approach to traditional hacking methods such as phishing emails, malware, and personal information or identity theft. This approach goes by the name of social engineering.


Social engineering is not in itself a new concept by any means, but traditionally it referred to humans manipulating other humans through psychology. What's new is replacing the first instance of 'human' with 'robot'.

An AI tool can pose as a human – writing convincing emails that invite the reader to click a link or voluntarily reply with private or personal information – and do so much, much faster than a human hacker.

Another popular hacking method sped up greatly by artificial intelligence is ‘brute force’ password hacking.

It can take two seconds or two septillion years to brute-force a password. This depends on the processing speed and bandwidth of the hacker's hardware but, holding that constant, Passwarden provides an estimate of how long it would take to crack your password by modern standards:
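The arithmetic behind those estimates is simple: the keyspace is the alphabet size raised to the password length, divided by the guess rate. A minimal sketch in Python – the guess rate of 10 billion per second is an illustrative assumption, not a benchmark of any particular hardware:

```python
def brute_force_seconds(length: int, alphabet_size: int,
                        guesses_per_second: float = 1e10) -> float:
    """Worst-case time to exhaust every password of a given length."""
    keyspace = alphabet_size ** length
    return keyspace / guesses_per_second

# 8 lowercase letters: a small keyspace, exhausted in well under a minute.
print(brute_force_seconds(8, 26))
# 16 characters drawn from ~94 printable ASCII symbols: astronomical,
# even after converting to years.
print(brute_force_seconds(16, 94) / (3600 * 24 * 365))
```

This is why both length and character variety matter: each extra character multiplies the keyspace by the alphabet size.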


You can speed this up by guessing the most likely passwords first – and that’s where PassGAN comes in.

PassGAN is a GAN-based AI. A GAN (Generative Adversarial Network) can learn the character distribution of real-world password leaks, eliminating the need to work through every possible combination in arbitrary order. Can ChatGPT crack passwords? It may be this other, less well-known AI you need to worry about.

Can ChatGPT guess passwords?

ChatGPT can't guess passwords. To be clear, the GPT-4 model is certainly powerful enough, but OpenAI's security measures ensure that the AI chatbot can't be used for malicious purposes like phishing scams and brute force attacks.

While you may not be able to hack via simple prompts, the GPT model can also be accessed through the ChatGPT API. Best practice dictates that you don't attempt to script GPT-4 into a threat actor, as you will be banned by OpenAI.

Can ChatGPT crack encryption?

There are plenty of examples of internet users putting ChatGPT to this test. The answer, it seems, is sometimes. That said, an initial search through those examples is… not promising.

Reddit user SiaNage1 demonstrated that ChatGPT could not solve a simple shift cypher, while the YouTube channel “Riddles, Codes, and Cyphers” explains the AI chatbot's difficulty with a Caesar cypher.
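For context, a Caesar cypher simply shifts each letter by a fixed amount, so there are only 25 possible keys – trivially breakable by hand or in a few lines of code, which makes the chatbot's struggles notable. A minimal sketch:

```python
def caesar(text: str, shift: int) -> str:
    """Shift each letter by a fixed amount; non-letters pass through."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

ciphertext = caesar("attack at dawn", 3)
print(ciphertext)              # "dwwdfn dw gdzq"
print(caesar(ciphertext, -3))  # shifting back recovers the plaintext
```

Because the keyspace is so small, an attacker (human or otherwise) can simply try all 25 shifts and read off the one that produces English.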

Again, the answer is merely sometimes. Despite the immense knowledge and power of ChatGPT, it is intentionally restricted to prevent its role in phishing attacks, infostealer software, and other cyber attacks.

This doesn’t mean you can slack on those special characters, though. The dark web marketplace is rife with alternative AI tools built for cyber crime. Keep strong passwords, and never write them down digitally! Unauthorised access to an account with a common password doesn’t take an AI encryption tool – it barely takes a guess.