AI chatbots such as OpenAI’s ChatGPT, Google Bard, and Microsoft Bing Chat all have one thing in common – natural language processing (NLP). This type of artificial intelligence offers a new way to process anything typed at a computer keyboard. Letters, numbers, and special characters alike form the passwords we rely on to protect our privacy, assets, and digital life – and those same characters are the ones that large language models (LLMs) are trained on. So, is there some way ChatGPT can crack passwords or encryption? Let’s look at some of the ways cybercriminals can use the AI chatbot for password cracking.
Can ChatGPT be used to hack? – AI hacking explained
AI hacking is a relatively new concept. It puts a new spin on traditional hacking methods such as phishing emails, malware, and personal information or identity theft, and that new angle goes by the name of social engineering.
Social engineering is not in itself a new concept by any means – traditionally, it refers to humans manipulating other humans through psychology. What is new is replacing the first instance of ‘human’ with ‘robot’.
An AI tool can pose as a human – writing convincing emails that invite the reader to click a link or voluntarily reply with private or personal information – and do so much, much faster than a human hacker.
Another popular hacking method sped up greatly by artificial intelligence is ‘brute force’ password hacking.
It can take 2 seconds or 2 septillion years to brute force a password. This, of course, depends on the processing speed and bandwidth of the hacker’s hardware, but holding that constant at modern standards, Passwarden provides an estimate of how long it would take to crack your password (a rough sketch of the calculation behind estimates like these follows the table):
[TABLE]
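The worst-case brute force time is simply the number of possible passwords divided by how many guesses the attacker’s hardware can make per second. Here is a minimal Python sketch of that calculation – the 72-character alphabet and the 10 billion guesses per second are illustrative assumptions, not Passwarden’s own figures:

```python
def brute_force_estimate(length: int, charset_size: int, guesses_per_second: float) -> float:
    """Worst-case seconds to exhaust every password of a given length and character set."""
    total_combinations = charset_size ** length
    return total_combinations / guesses_per_second

# An 8-character password drawn from roughly 72 characters (upper, lower, digits, common symbols),
# attacked at an assumed 10 billion guesses per second.
seconds = brute_force_estimate(length=8, charset_size=72, guesses_per_second=1e10)
print(f"Roughly {seconds / 86400:.1f} days in the worst case")
```

Add a character or two to the length, or a few symbols to the alphabet, and the exponent does the rest – which is why these estimates swing from seconds to septillions of years.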
You can speed this up by guessing the most likely passwords first – and that’s where PassGAN comes in.
PassGAN is a GAN-style AI. A GAN (Generative Adversarial Network) can learn the character distribution of real-world password leaks, eliminating the need to work through every possible combination in arbitrary order. Can ChatGPT crack passwords? It may be this other, less well-known AI you need to worry about.
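PassGAN itself is a full neural network, but the core idea – try the statistically likely passwords before the unlikely ones – can be illustrated with something far simpler. The sketch below merely ranks candidates by how often they appear in a toy leaked-password corpus; it is not a GAN, and the corpus is made up:

```python
from collections import Counter

def ranked_guesses(leaked_passwords: list[str]) -> list[str]:
    """Return candidate guesses ordered by how often they appear in a leaked corpus."""
    counts = Counter(leaked_passwords)
    return [pw for pw, _ in counts.most_common()]

# Toy corpus standing in for a real-world password leak.
corpus = ["123456", "password", "123456", "qwerty", "password", "123456"]
print(ranked_guesses(corpus))  # ['123456', 'password', 'qwerty']
```

The real PassGAN goes further: it generates brand-new candidates that match the character patterns it has learned, rather than only replaying passwords it has already seen.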
Can ChatGPT guess passwords?
ChatGPT can’t guess passwords. To be clear, the GPT-4 model is certainly powerful enough, but OpenAI’s security measures ensure that the AI chatbot can’t be used for malicious purposes like phishing scams and brute force attacks.
While you may not be able to hack via simple prompts, the GPT model can also be accessed through the ChatGPT API. Best practice dictates that you don’t attempt to script GPT-4 into a threat actor, as OpenAI will ban you for it.
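For reference, programmatic access to the GPT model looks something like this – a benign sketch assuming the official openai Python SDK (v1 or later) and an API key set in your environment:

```python
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "What makes a password strong?"}],
)
print(response.choices[0].message.content)
```

The same usage policies apply here as in the chat interface, so prompts aimed at password cracking or phishing will be refused and can get the account banned.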
Can ChatGPT crack encryption?
There are plenty of examples of internet users putting ChatGPT to this test. The answer, it seems, is sometimes. That said, an initial search through those examples is… not promising.
Reddit user SiaNage1 demonstrated that ChatGPT could not solve a simple shift cypher, while YouTube channel “Riddles, Codes, and Cyphers” explains the AI chatbot’s difficulty with a Caesar cypher.
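For context, a Caesar (shift) cypher is about as simple as encryption gets: every letter is moved a fixed number of places along the alphabet. A quick Python sketch:

```python
def caesar_shift(text: str, shift: int) -> str:
    """Shift each letter by `shift` positions, wrapping around the alphabet."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)  # leave spaces and punctuation untouched
    return "".join(out)

ciphertext = caesar_shift("attack at dawn", 3)   # 'dwwdfn dw gdzq'
print(caesar_shift(ciphertext, -3))              # shifts back to 'attack at dawn'
```

A cypher this simple can be broken by hand in minutes, which is what makes the chatbot’s struggles with it notable.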
Again, the answer is only sometimes. Despite the immense knowledge and power of ChatGPT, it is intentionally restricted to prevent it from playing a role in phishing attacks, infostealer software, and other cyber attacks.
This doesn’t mean you can slack on those special characters, though. The dark web marketplace is rife with alternative AI tools built for cyber crime. Keep strong passwords, and never write them down digitally! Unauthorised access to accounts with common passwords doesn’t take an AI encryption tool – it barely takes a guess.
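If you would rather not invent strong passwords yourself, a couple of lines of Python using the standard library’s secrets module will do it – a minimal sketch, with the 16-character default being an arbitrary choice:

```python
import secrets
import string

def strong_password(length: int = 16) -> str:
    """Build a random password from letters, digits, and punctuation using a secure RNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(strong_password())  # different on every run
```

The character pool here is the full string.punctuation set; trim it if a particular site rejects certain symbols.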