
Is ChatGPT safe? Security and privacy concerns

We outline the risks of OpenAI's highly popular AI chatbot

Reviewed By: Steve Hook

Last Updated on April 5, 2024

ChatGPT has seen some of the most explosive growth of any web service ever created. AI chatbots such as OpenAI’s ChatGPT, Google Gemini, and Microsoft Copilot are now the most accessible forms of artificial intelligence on Earth. Still, few truly understand the natural language processing technology that makes them tick, or the systemic single-point-of-failure risk this may already pose to our national cybersecurity. With such mainstream adoption, including integration into the services we use day to day, it’s only sensible to ask the question: is ChatGPT safe?

Quick Answer

ChatGPT is safe to a certain degree. As with any platform that gathers personal data, there are risks. On top of this, ChatGPT raises concerns around misinformation and misuse.

Is ChatGPT safe?

Used casually, artificial intelligence and generative AI tools seem safe enough. If you’re just generating creative content, asking the bot to translate text, or simply playing around with it, you run little risk. Chats with any AI language model are intended to be helpful, harmless experiences, with plenty of security measures in place. Even so, these platforms (Copilot from Microsoft, Gemini from Google, and of course ChatGPT from OpenAI) still have their vulnerabilities.

ChatGPT safety concerns

When addressing ChatGPT’s safety online, we mainly need to recognize issues of misuse, misinformation, and data security.


A major concern about the AI bot is its potential to give inaccurate information. In a world where misinformation, false information, and fake news can spread quickly online, this could be extremely harmful. ChatGPT is a large language model that constructs its responses from the information it was trained on, some of which is sourced from the internet. The bot predicts which words are most likely to follow one another and outputs the resulting string. As a result, producing incorrect information is all but inevitable. Without fact-checking, this can be a dangerous feature that could lead to prejudice, bias, and misinformation.
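To see why "likely next words" and "true statements" are not the same thing, consider a deliberately tiny sketch of next-word prediction. This is purely illustrative: real LLMs use neural networks over tokens trained on billions of examples, not word-pair counts over a toy corpus, but the core idea of always emitting the statistically likeliest continuation (true or not) is the same.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for training data (real LLMs train on billions of words)
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word most often follows each word (a bigram model --
# a drastically simplified stand-in for an LLM's next-token prediction)
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Return the most frequent next word seen in training, or None."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

# Generate a short "response" by repeatedly picking the likeliest next word.
# Nothing here checks whether the output is factually true.
word, output = "the", ["the"]
for _ in range(4):
    word = next_word(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))  # prints "the cat sat on the"
```

The model happily emits whatever continuation was most common in its training data, which is exactly why an LLM trained on internet text can reproduce the internet’s errors with full confidence.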

Similar concerns have led other tech giants, such as Meta, to keep their AI bots out of public use. Notably, some commentators have suggested that OpenAI was irresponsible to release the model to the public given these major limitations.


ChatGPT is a powerful tool that, in the wrong hands, could have detrimental effects. Thanks to the speed at which it generates code, it has become quite the sought-after tool for programmers, and even hackers. Cybercriminals can simply ask ChatGPT to generate detailed instructions on how to hack a computer, which, paired with advanced programming skills, makes for a harmful combination.

ChatGPT also has the potential to be used by attackers to trick and target you and your computer. For example, fraudsters could use ChatGPT to quickly create spam and phishing emails. Because of the vast amount of data the model is trained on, it is now easier than ever to craft scarily convincing emails, even in the style of the company the scammer is posing as.

OpenAI has also made a variation of its model freely available to modify via its GitHub account. While this is great for those looking to learn more about NLP models and AI, it also means that people with malicious intent can use the model for their own gain. We cannot ignore the possibility that someone could use OpenAI’s technology to create a fake customer service chatbot, one with the potential to trick people out of their money. Not great news.

Data security risks

As of the April 1st, 2024 update, you can use ChatGPT without an account. However, to access ChatGPT Plus you’ll need to sign in. Signing up requires personal information, including your name, email address, phone number, and bank details (if on a paid subscription). This means OpenAI holds this data, which in turn exposes you to the risk of a data breach.

In addition to collecting your personal data, OpenAI also stores your chat history with ChatGPT. This doesn’t usually pose a threat, as you are the only person able to access your previous conversations. However, during ChatGPT’s roughly nine-hour outage in March 2023, OpenAI stated that, due to a bug in the system, unauthorized users were able to see the personal information of other accounts. This included the opening lines of other users’ conversations, account details, and some payment information.


ChatGPT safety discussed at the UK AI Safety Summit

These vulnerabilities were the topic of much debate at the inaugural AI Safety Summit, held at Bletchley Park, UK. The summit, organized by UK Prime Minister Rishi Sunak, brought world leaders and tech executives together to discuss how to mitigate the risks of AI, risks which, frankly, we have yet to fully understand. To help us understand the matter on a deeper level, we recently spoke with our AI correspondent Dr Matthew Shardlow about the event.

As with any technology, AI/AGI has the potential for great good and great harm. AI can be misused by those who misunderstand or misapply it. Better education about the limitations of AI can help to combat its misuse.

Dr Matthew Shardlow – PC Guide interview

The summit began two days after US President Joe Biden issued an AI Executive Order, bringing similar safeguards into force across the Atlantic. AI was the most-searched term of 2023; it is clearly here to stay, and world governments are taking steps to ensure that the capabilities of any AI model (even an innocuous AI chatbot) do not spiral out of hand.

Even ChatGPT developer OpenAI, led by CEO Sam Altman, admits in its article on the chatbot that it has the potential to produce biased and harmful content. Such a concern is not unique to any one chatbot, or to artificial intelligence for that matter.

ChatGPT IBM research

Chenta Lee, an AI researcher at IBM, was part of a team tasked with inducing “hypnosis” in large language models (including ChatGPT). Reporting via Security Intelligence, Lee claims they were able to:

“get LLMs to leak confidential financial information of other users, create vulnerable code, create malicious code, and offer weak security recommendations.”

The IBM-owned blog equates the English language to a “programming language” for NLP malware: “attackers no longer need to rely on Go, JavaScript, Python, etc., to create malicious code.”

Chenta Lee – AI researcher at IBM

This creates an unsafe information source for users of the LLM. Worse, the attack is neither difficult nor expensive to replicate. The technique IBM used is a far less technologically demanding alternative to its predecessor, data poisoning: the injection of malicious data into a training dataset, such that the system’s output reflects the malicious data without the system itself ever knowing. Architecturally, AI is a perfect target for this. Given the power, widespread use, and B2B integration of GPT-4 in today’s services, an attack of this kind is very tempting for hackers. It is also emblematic of the danger of integrating AI into every part of our daily lives, at the risk of unpredictable simultaneous failure.
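The mechanics of data poisoning can be shown with a deliberately simple sketch. The example below is hypothetical and illustrative only: a toy word-count spam filter, nothing like a production LLM, but it demonstrates the principle that a model trained on a dataset faithfully reflects whatever that dataset contains, including maliciously injected, mislabeled examples.

```python
from collections import Counter

# Toy "training set" for a word-count spam filter -- a stand-in for the
# huge datasets real models train on. (Hypothetical data for illustration.)
clean_data = [
    ("win a free prize now", "spam"),
    ("claim your free prize", "spam"),
    ("meeting moved to noon", "ham"),
    ("see you at the meeting", "ham"),
]

def train(dataset):
    """Count how often each word appears under each label."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in dataset:
        counts[label].update(text.split())
    return counts

def classify(counts, text):
    """Label text by which class its words appeared in more often."""
    words = text.split()
    spam_score = sum(counts["spam"][w] for w in words)
    ham_score = sum(counts["ham"][w] for w in words)
    return "spam" if spam_score > ham_score else "ham"

model = train(clean_data)
print(classify(model, "free prize inside"))  # prints "spam"

# Data poisoning: an attacker slips a few mislabeled examples into the
# dataset, and the retrained model now waves the same message through.
poison = [("free prize free prize", "ham")] * 3
model = train(clean_data + poison)
print(classify(model, "free prize inside"))  # prints "ham"
```

Note that the model’s code never changed; only its data did, which is precisely why poisoning is so hard to detect from the outside.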

Lee goes into further detail:

“The best-performing LLM that we hypnotized was GPT… So how did we hypnotize the LLMs? By tricking them into playing a game: the players must give the opposite answer to win the game.”

Chenta Lee – AI researcher at IBM
Chenta Lee’s ChatGPT hypnosis prompt.

This research, combined with popular ChatGPT jailbreaks such as ‘DAN’, shows that ChatGPT can indeed be hypnotized, which poses a threat to users of the app.

Does ChatGPT save your chats?

Yes, ChatGPT saves your chats for your benefit. You can log into your OpenAI account from a new device and recall your previous conversations with the bot from the list on the left. That doesn’t mean you can safely include anything you like in those chats: you’ll of course need to abide by the law and the company’s own terms of use when sharing data.

In terms of cyber safety, we’ve seen users’ chat history accessed by unauthorized parties before, in the March 2023 data breach. There is therefore no guarantee that your chat history is 100% inaccessible, so for your own safety, avoid including anything sensitive or personally identifiable.

Is it safe to give ChatGPT your phone number?

It’s important to note that using your phone number to register for ChatGPT isn’t the same as giving your phone number to ChatGPT: the service isn’t the same as the company (OpenAI). If you are concerned about the data or information OpenAI collects, be sure to read the company’s privacy policy.

Of course, giving any company your number carries a small element of cybersecurity risk. If there is a data or security breach, any data or confidential information a company holds may be a target. But, as we discuss elsewhere, you do need a phone number to access a ChatGPT account.

Final thoughts

There’s no doubt about it – ChatGPT is a pretty phenomenal AI technology. However, the AI bot could cause real-world harm. The fact that the model has the potential to spread misinformation and produce biased content is something that should not be ignored.

As we continue to build a digital world around us, this threat only rises. So what can you do to protect yourself? Firstly, fact-check any information ChatGPT outputs by doing your own research. And regardless of what ChatGPT’s response is, keep in the back of your mind that it is not necessarily true or correct.

Funmi joined PC Guide in November 2022, and was a driving force for the site's ChatGPT coverage. She has a wide knowledge of AI apps, gaming and consumer technology.