Questions ChatGPT can’t answer – What did OpenAI forget?

Which questions can ChatGPT not answer, and why?

Reviewed By: Kevin Pocock

Last Updated on December 5, 2023

ChatGPT’s training data is some of the highest-quality and most comprehensive in existence, and the model behind it is enormous – wildly varying, unconfirmed reports put its parameter count anywhere from hundreds of billions into the trillions. We can think of parameters as something like the neural connections in our own brains. With such an extensive and accurate model of our world, then, we might assume that OpenAI’s world-famous chatbot can answer any and all queries. Not true, as it happens. Is the large language model (LLM) not large enough? Or is size not the issue here? Here are the questions ChatGPT can’t answer.

What is not allowed in ChatGPT?

OpenAI’s ChatGPT is in a never-ending, Sisyphean battle with harmful content. The AI chatbot platform comes with a set of usage guidelines and terms of use.

These terms exclude exactly what you’d expect: hate speech such as racism and homophobia, plagiarism, disinformation, incitement to mental or physical harm, and illegal activities such as scams and political crimes are all prohibited.

Below is a non-exhaustive list of what is not allowed in ChatGPT (a brief example of how developers can screen for such content programmatically follows the list):

  • Generation of hateful, harassing, or violent content
    • Content that expresses, incites, or promotes hate based on identity
    • Content that intends to harass, threaten, or bully an individual
    • Content that promotes or glorifies violence or celebrates the suffering or humiliation of others
  • Generation of malware
    • Content that attempts to generate code that is designed to disrupt, damage, or gain unauthorized access to a computer system.
  • Activity that has high risk of physical harm, including:
    • Weapons development
    • Military and warfare
    • Management or operation of critical infrastructure in energy, transportation, and water
    • Content that promotes, encourages, or depicts acts of self-harm, such as suicide, cutting, and eating disorders
  • Activity that has high risk of economic harm, including:
    • Multi-level marketing
    • Gambling
    • Payday lending
    • Automated determinations of eligibility for credit, employment, educational institutions, or public assistance services
  • Fraudulent or deceptive activity, including:
    • Scams
    • Coordinated inauthentic behavior
    • Plagiarism
    • Academic dishonesty
    • Astroturfing, such as fake grassroots support or fake review generation
    • Disinformation
    • Spam
    • Pseudo-pharmaceuticals
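
For developers, the same guardrails can be checked programmatically before a prompt ever reaches the chat model. Below is a minimal sketch – assuming the official openai Python package (v1.x) and an OPENAI_API_KEY in the environment – that uses OpenAI’s Moderation endpoint to flag content falling into these prohibited categories. The example prompt is purely illustrative.

```python
# Minimal sketch: screen a prompt against OpenAI's usage policies with the
# Moderation endpoint before sending it to a chat model.
# Assumes the official `openai` package (v1.x) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def is_allowed(text: str) -> bool:
    """Return False if the moderation model flags the text as violating policy."""
    result = client.moderations.create(input=text)
    return not result.results[0].flagged


if __name__ == "__main__":
    prompt = "Explain how photosynthesis works."  # illustrative prompt
    if is_allowed(prompt):
        print("Prompt passed moderation and can be sent to the chat model.")
    else:
        print("Prompt was flagged and would be refused or blocked.")
```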

Questions ChatGPT can’t answer

As a result of OpenAI’s usage policies, ChatGPT’s answers are generally safe for public use. At the same time, the AI language model is limited – not in capability, but in permission. As it turns out, size is not always the problem; in this case, it’s a matter of what the model is allowed to do.

For most of its history, ChatGPT could not answer questions about current events or real-time data. That changed when web browsing was added for paid subscribers: ChatGPT Plus and ChatGPT Enterprise users can now ask GPT-4 about the weather in Saint-Tropez or Greece – both of which, at the time of writing, are cooler than Manchester, England.
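
Under the hood, real-time answers like this depend on the model calling out to an external tool, rather than on anything stored in its weights. The sketch below illustrates that general pattern with the Chat Completions function-calling (tools) interface; the get_weather function and the data it returns are hypothetical stand-ins, not OpenAI’s actual browsing implementation.

```python
# Hedged sketch of answering a real-time question via function calling.
# The get_weather function is a hypothetical placeholder for a live data source.
import json
from openai import OpenAI

client = OpenAI()


def get_weather(city: str) -> str:
    # Placeholder: a real implementation would call a weather service here.
    return json.dumps({"city": city, "temperature_c": 27, "conditions": "sunny"})


tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Saint-Tropez right now?"}]
response = client.chat.completions.create(model="gpt-4", messages=messages, tools=tools)

# Assumes the model chose to call the tool; a robust client would check for that.
call = response.choices[0].message.tool_calls[0]
weather = get_weather(**json.loads(call.function.arguments))

# Hand the tool result back so the model can phrase a final answer.
messages.append(response.choices[0].message)
messages.append({"role": "tool", "tool_call_id": call.id, "content": weather})
final = client.chat.completions.create(model="gpt-4", messages=messages, tools=tools)
print(final.choices[0].message.content)
```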

This internet access is not without limitations. ChatGPT won’t answer questions about illegal drugs, despite knowing a great deal about them. It can’t advise law enforcement, because the answers it gives would be detached from the moral, social, and ethical implications of acting on them. Questions involving criminal activity or hate, questions that risk user privacy and confidentiality, and questions that infringe intellectual property rights are all disallowed on the ChatGPT website.

OpenAI’s ChatGPT is a powerful tool that can tell you all about statistics, define a prime number, offer generic advice, and explain robotics – or even itself – but not the meaning of life.

Using ChatGPT is no guarantee of reliable information; OpenAI CEO Sam Altman has himself been vocal about the AI chatbot not being a source of truth. However, nonsensical answers are not the only unintended consequence of how ChatGPT works.

AI systems like Google Bard and OpenAI’s ChatGPT (with browsing enabled) can pull information from live sources such as Wikipedia. Even so, the biggest difference between an AI tool and a static database is the contextual, conversational response to user prompts.
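
That difference is easy to see in code: a static database returns the same record every time, whereas a chat model folds retrieved text into an answer shaped by the user’s question. Below is a minimal sketch, assuming the openai Python package; the “retrieved” snippet is hard-coded here purely for illustration, standing in for text fetched from a source such as Wikipedia at query time.

```python
# Minimal sketch of a contextual, conversational answer built from retrieved text.
# The snippet below is a hard-coded stand-in for a real retrieval step.
from openai import OpenAI

client = OpenAI()

retrieved_snippet = (
    "Manchester is a city in Greater Manchester, England, "
    "known for its industrial heritage and two major football clubs."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {
            "role": "user",
            "content": f"Context: {retrieved_snippet}\n\n"
                       "Question: What is Manchester best known for?",
        },
    ],
)
print(response.choices[0].message.content)
```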

What questions can AI not answer?

Artificial intelligence, as offered by tech giants such as Google (Bard), Microsoft (Bing Chat), NVIDIA (NeMo), and OpenAI (ChatGPT), runs up against the same kinds of questions it cannot – or will not – answer.

Large language models (LLMs) are a type of AI intended to give us useful information in whatever context or format we require. This sort of questioning – the linguistic freedom to communicate as we would with another human – is both a pro and a con of natural language processing (NLP). Complex topics can be morally grey, and it is not always obvious what makes one answer better than another.

ChatGPT is an amazing tool for generating human-like text, but AI bots must come with restrictions for the simple reason that machines don’t inherently have morals. Software programs don’t have emotions; they have no code of ethics, they don’t understand social norms or politics and, in a sense, don’t even have opinions of their own.

Some say that AI assimilates the moral code of its creator. The reality is arguably worse: the only moral code a computer knows is the one it was programmed with, and that is merely an approximation of its creator’s – both biased and limited in scope. It is up to us humans to decide these things. Not that we always get them right, but these are the questions we do not let ChatGPT answer.

Steve is the AI Content Writer for PC Guide, writing about all things artificial intelligence. He currently leads the AI reviews on the website.