Last Updated on
ChatGPT has seen some of the most explosive growth of any web service ever created. AI chatbots such as OpenAI’s ChatGPT, Google Bard, and Bing Chat are now the most accessible forms of artificial intelligence on earth. Still, few truly understand the natural language processing technology that makes it tick, or the systemic single-point failure risk this may already be posing our national cybersecurity. With such mainstream adoption, including integration into the services we use in our day to day lives, it’s only sensible to ask the question “Is ChatGPT safe?”
Is ChatGPT safe?
Artificial intelligence and any generative AI tools in a casual sense seem to be safe. For example, if you are just looking to generate creative content, ask the bot to translate text, or simply just want to play around with it, you run little risk. Chats with any AI language model are intended to helpful and harmless experiences, with security measures abound. These platforms, such as Bing Chat from Microsoft, Bard from Google, and of course Chat GPT from OpenAI still have their vulnerabilities, however.
ChatGPT safety discussed at the UK AI Safety Summit
These vulnerabilities have recently been the topic of much debate at the inaugural AI Safety Summit, held at Bletchley Park, UK. The summit, organised by UK Prime Minister Rishi Sunak brought world leaders and tech executives together to discuss how to mitigate the risks of AI which we, frankly, have yet to fully understand. To help us understand the matter on a deeper level, we recently spoke with our AI correspondent Dr Matthew Shardlow about the event.
ChatGPT developer OpenAI, lead by CEO Sam Altman, admits that the chatbot does have the potential to produce biased and harmful content. Such a concern is not unique to any one chatbot, or artificial intelligence for that matter. Elon Musk, initially a board member of the AI research firm in 2015, agrees that AI is an existential risk for humanity, stating that “it’s not clear we can control it”.
In concurrence with Musk was ‘AI godfather’ Geoffrey Hinton, also in attendance at the two-day summit. However, this common ground may not be for the same reasons, with Hinton hinting that top tech executives feigning compliance with the instalment of AI regulation is merely a strategic play to reduce financial liability should said tech executives AI malfunction at the cost of human life.
The summit began two days after US President Joe Biden issued an AI Executive Order, brining similar safeguards across the atlantic. As the most search term of 2023, it’s clear that AI is here to stay, and as a result all world government are taking steps to ensure that the capability of any AI model (even an innocuous AI chatbot) does not spiral out of hand.
Essential AI Tools
Winston AI detector
Best Deals
Originality AI detector
Best Deals
Jasper AI
Best Deals
WordAI
Best Deals
Copy.ai
Best Deals
To some, what may be most concerning about this model is that it can deliver this content in a very convincing, plausible way. OpenAI does warn users about this before using the AI tool, however.
Another major concern about the AI bot is its potential to give inaccurate information. In a world where misinformation, false information, and fake news can spread quickly online, this could potentially be extremely harmful.
ChatGPT constructs its response using the information it was trained on, where some of it is sourced from the internet. The bot then creates a string of words that are likely to follow each other and then outputs its response. As a result, releasing incorrect information is pretty inevitable.
Similar concerns have led other tech giants, Metaverse and Google, to keep their AI bots out of public use. Interestingly, many opinions are flying around suggesting that OpenAI is irresponsible for releasing the model to the public considering these major limitations.
ChatGPT risks
Besides the risks that ChatGPT directly poses to you as a user, there are other major risks you should consider. ChatGPT has the potential to be used by attackers to trick and target you and your computer.
For example, fraudsters could use ChatGPT to quickly create spam and phishing emails. Due to the vast amount of data, the model is trained on, it has now become easier than ever to create scarily convincing emails even in the style of the company they’re trying to pose as.
OpenAI has also made a variation of their model, free to modify from their GitHub account. Despite this being a great idea for those looking to learn more about NLP models and AI, it also means that people with malicious intent can use the model for their own gains.
We cannot ignore the possibility that someone could use OpenAI’s technology to create a fake customer service chatbot. This could have the potential to trick people out of their money – not great news.
Is ChatGPT safe to give your phone number?
It’s important to note, using your phone number to register on ChatGPT isn’t giving your phone number to ChatGPT – the service isn’t the same as the company (OpenAI). If you are concerned about data or information OpenAI collects be sure to read the company’s privacy policy.
Of course, giving any company your number can have a small element of risk associated with it in terms of cyber security. If there is a data or security breach, any data or confidential information a company has access to may be a target. But, as we discuss elsewhere you do need a phone number to use ChatGPT.
Does ChatGPT save your chats?
Yes, ChatGPT saves your chats for your own benefit. You can log into your OpenAI account from a new device and recall your previous conversations with the bot from the list on the left.
This doesn’t mean that you can safely include anything in those chats. You’ll of course need to abide by laws and the companies own terms of use where sharing data. For your own safety, data should not be sensitive or personally identifiable, as warned by this pop-up when accessing the service.
Is ChatGPT safe to download?
Right now we’d say that if you’re seeing options to download ChatGPT outside of OpenAI’s website or official channels, that may not be safe. That’s because OpenAI doesn’t have an option for you to download ChatGPT.
The service doesn’t exist on a downloadable Android or iPhone app and is easy to access with a desktop or mobile device on OpenAI’s site. Popular services do attract the attention of those looking to scam or trick users, though – as mentioned above.
Can ChatGPT be “hypnotized”?
Chenta Lee is an AI researcher at IBM, a member of the group tasked with inducing “hypnosis” in LLMs or Large Language Models (including ChatGPT). Reporting via Security Intelligence, Lee claims they “were able to get LLMs to leak confidential financial information of other users, create vulnerable code, create malicious code, and offer weak security recommendations. The IBM-owned blog equates the English language to a “programming language” for NLP malware, “attackers no longer need to rely on Go, JavaScript, Python, etc., to create malicious code,” Lee explains.
This creates an unsafe information source for users of the LLM. The worst part is that it’s not even difficult or expensive to replicate. The technique IBM used is much less technologically demanding alternative to it’s spiritual predecessor, data poisoning. Data poisoning is the injection of malicious data into a dataset, such that the output of the system uses the malicious data unbeknownst to the system itself. AI is, architecturally, a perfect target for this. Given the power, widespread use, and B2B integration of GPT-4 in today’s services, an attack of this kind is very tempting for hackers. It’s also exemplary of the danger of integrating AI into every part of our daily lives, at risk of unpredictable simultaneous failure.
Lee goes into detail as to the nature of the ChatGPT exploit, showing that their analysis “is based on attempts to hypnotize GPT-3.5, GPT-4, BARD, mpt-7b, and mpt-30b. The best-performing LLM that we hypnotized was GPT, which we will analyze further down in the blog. So how did we hypnotize the LLMs? By tricking them into playing a game: the players must give the opposite answer to win the game.”
This research, combined with popular ChatGPT jailbreaks such as ‘DAN’, shows conclusively that ChatGPT can be hypnotized.
Final Thoughts
There’s no doubt about it – ChatGPT is a pretty phenomenal ai technology. However, the AI bot could cause real-world harm. The fact that the model has the potential to spread misinformation and produce biased content is something that should not be ignored.
As we continue to build a digital world around us, the threat from this rises. So what can you do to protect yourself? Well, firstly you should fact-check any information ChatGPT outputs by also doing your own research. Also, regardless of what ChatGPT’s response is, always have in the back of your mind that it is not necessarily true or correct.