
Can ChatGPT crack passwords or encryption?

Can ChatGPT be used for cybercrime or white hat hacking?

Reviewed By: Kevin Pocock

Last Updated on December 5, 2023

AI chatbots such as OpenAI’s ChatGPT, Google Bard, and Microsoft Bing Chat all have one thing in common – natural language processing (NLP). This type of artificial intelligence can process anything typed on a keyboard. Letters, numbers, and special characters form the passwords we rely on to protect our privacy, assets, and digital lives, and those same characters are what large language models (LLMs) are trained on. So, is there some way ChatGPT can crack passwords or encryption? Let’s look at some ways cybercriminals could use the AI chatbot for password cracking.

Can ChatGPT be used to hack? – AI hacking explained

AI hacking is a relatively new concept. It enables a new strategy for traditional hacking methods such as phishing emails, malware, and personal information or identity theft. This strategy goes by the name of social engineering.


Social engineering is not in itself a new concept by any means, but traditionally it referred to humans manipulating other humans through psychology. What’s new is replacing the first instance of ‘human’ with ‘robot’.

An AI tool can pose as a human – writing convincing emails that invite the reader to click a link or voluntarily reply with private or personal information – and do so much, much faster than a human hacker.

Another popular hacking method sped up greatly by artificial intelligence is ‘brute force’ password hacking.

It can take 2 seconds or 2 septillion years to brute force a password, depending on the processing speed and bandwidth of the hacker’s hardware. Holding that constant, Passwarden provides an estimate of how long it would take to crack your password by modern standards:

[TABLE]
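The arithmetic behind estimates like these is simple: the number of possible passwords is the character set size raised to the password length, divided by how many guesses per second the attacker can make. Here is a minimal sketch; the guess rate is an assumption (roughly a single modern GPU against a fast hash), and real figures vary by orders of magnitude depending on hardware and hashing algorithm.

```python
# Rough worst-case brute-force time: keyspace size / guesses per second.
# The default guess rate is an assumed figure for illustration only.

def brute_force_seconds(length: int, charset_size: int,
                        guesses_per_second: float = 1e10) -> float:
    """Seconds to exhaust every password of a given length and alphabet."""
    return charset_size ** length / guesses_per_second

# 8 lowercase letters (26 symbols) vs. 12 mixed characters
# (26 lower + 26 upper + 10 digits + 32 specials = 94 symbols)
short_weak = brute_force_seconds(8, 26)
long_strong = brute_force_seconds(12, 94)

print(f"8 lowercase letters: {short_weak:.0f} seconds")
print(f"12 mixed characters: {long_strong / (3600 * 24 * 365):.0f} years")
```

Notice how adding length and character variety grows the keyspace exponentially, which is why the table above swings from seconds to geological timescales.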

You can speed this up by guessing the most likely passwords first – and that’s where PassGAN comes in.

PassGAN is a generative adversarial network (GAN), a type of AI that can learn the character distribution of real-world password leaks, eliminating the need to try every possible combination in unfiltered order. Can ChatGPT crack passwords? It may be this other, less well-known AI you need to worry about.
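This is not PassGAN itself – a GAN learns far richer patterns – but the core ordering idea can be sketched with simple character frequencies: rank candidate guesses by how likely their characters are in leaked passwords, and try the likeliest first. The leaked list below is a made-up sample for demonstration.

```python
# Toy illustration of likelihood-ordered guessing (not a real GAN):
# score each candidate by the product of its characters' frequencies
# in a (fabricated) password leak, then guess high scorers first.
from collections import Counter

leaked = ["password", "123456", "qwerty", "letmein", "dragon"]
freq = Counter("".join(leaked))
total = sum(freq.values())

def likelihood(candidate: str) -> float:
    """Higher scores mean the guess looks more like leaked passwords."""
    score = 1.0
    for ch in candidate:
        score *= freq.get(ch, 0.5) / total  # unseen chars get a small weight
    return score

guesses = ["zzzzzz", "passwd", "aaaaaa"]
for g in sorted(guesses, key=likelihood, reverse=True):
    print(g, likelihood(g))
```

Even this crude model pushes random strings like “zzzzzz” to the back of the queue, which is the whole point: an attacker exhausts the plausible passwords long before the implausible ones.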

Can ChatGPT guess passwords?

ChatGPT can’t guess passwords. To be clear, the GPT-4 model is certainly powerful enough, but OpenAI security measures ensure that the AI chatbot can’t be used for malicious purposes like phishing scams and brute force attacks.

While you may not be able to hack via simple prompts, the GPT model can also be accessed through the ChatGPT API. Best practice dictates that you don’t attempt to script GPT-4 into a threat actor, as OpenAI will ban your account.

Can ChatGPT crack encryption?

There are plenty of examples of internet users putting ChatGPT to this test. The answer, it seems, is sometimes. That said, an initial search through those examples is… not promising.

Reddit user SiaNage1 demonstrated that ChatGPT could not solve a simple shift cypher, while YouTube channel “Riddles, Codes, and Cyphers” explains the AI chatbot’s difficulty with a Caesar cypher.
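It’s worth noting what a low bar this is: a Caesar cypher has only 25 possible shifts, so a few lines of ordinary code crack it instantly, no AI required. A minimal sketch:

```python
# A Caesar cypher shifts each letter a fixed number of places.
# With only 25 possible shifts, exhaustive search breaks it instantly.

def caesar_shift(text: str, shift: int) -> str:
    """Shift alphabetic characters by `shift` places, preserving case."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)  # leave spaces and punctuation untouched
    return "".join(out)

ciphertext = caesar_shift("attack at dawn", 3)  # "dwwdfn dw gdzq"
for shift in range(1, 26):
    print(shift, caesar_shift(ciphertext, -shift))  # shift 3 reveals it
```

That a classical cypher falls to a 15-line loop, while a chatbot stumbles over it, says more about how LLMs work – predicting likely text, not executing algorithms – than about the cypher’s strength.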

Again, the answer is merely sometimes. Despite the immense knowledge and power of ChatGPT, it is intentionally restricted to prevent its role in phishing attacks, infostealer software, and other cyber attacks.

This doesn’t mean you can slack on those special characters, though. The dark web marketplace is rife with alternative AI tools built for cybercrime. Keep strong passwords, and never write them down digitally! Unauthorised access to accounts with common passwords doesn’t take an AI encryption tool – it barely takes a guess.

Steve is the AI Content Writer for PC Guide, writing about all things artificial intelligence. He currently leads the AI reviews on the website.