
OpenAI says its AI detection tool has close to 99.9% accuracy but company reluctant to roll it out

OpenAI hesitating
Last Updated on August 5, 2024


OpenAI has found itself at a crossroads. While the company has developed a highly accurate tool to detect AI-generated content, it has been hesitant to release it to the public. The reasons for this caution are varied and reveal a complex mix of technological, ethical, and business considerations.

The surge in popularity of ChatGPT has been accompanied by growing concerns about its misuse. Students are employing the AI language tool to write essays, while content creators are facing challenges in distinguishing original work from AI-generated pieces. To address these issues, OpenAI has been working on a watermarking and detection system. However, despite the urgency of the problem, the company has opted for a deliberate and cautious approach to its rollout.

OpenAI logo. Photography: Steve Hook, PC Guide

Primary concerns

One of the primary concerns is the potential for deception. OpenAI acknowledges that while its watermarking technique is effective against localized tampering, such as paraphrasing, it may be less robust against more sophisticated methods like wholesale rewriting. This raises the possibility of users finding ways to bypass the detection system, undermining its effectiveness. Additionally, there is a fear that the tool could inadvertently stigmatize non-native English speakers, as it might falsely flag their writing as AI-generated.
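OpenAI has not published the details of its watermarking scheme, but the general idea behind statistical text watermarks can be sketched. The toy Python example below is based on the "green-list" approach described in academic research, not OpenAI's actual method, and the key, threshold, and function names are hypothetical. It illustrates why light paraphrasing tends to leave a detectable signal while a full rewrite can erase it.

```python
import hashlib
import math

# Illustrative sketch only: a generic "green-list" text watermark detector
# in the style of published research, not OpenAI's undisclosed method.
# The key and GAMMA threshold below are hypothetical.

GAMMA = 0.5  # expected fraction of "green" tokens in unwatermarked text

def is_green(prev_token: str, token: str, key: str = "secret-key") -> bool:
    """Deterministically assign a token to the green list, seeded by the
    previous token and a private key."""
    digest = hashlib.sha256(f"{key}|{prev_token}|{token}".encode()).digest()
    return digest[0] / 255.0 < GAMMA

def detection_z_score(tokens: list[str]) -> float:
    """z-score of the observed green-token count against the GAMMA baseline.
    Large positive values suggest the text carries the watermark."""
    n = len(tokens) - 1
    if n < 1:
        return 0.0
    green = sum(is_green(tokens[i], tokens[i + 1]) for i in range(n))
    return (green - GAMMA * n) / math.sqrt(GAMMA * (1 - GAMMA) * n)

if __name__ == "__main__":
    sample = "the quick brown fox jumps over the lazy dog".split()
    print(f"z = {detection_z_score(sample):.2f}")  # near 0 for unwatermarked text
```

Because the score is accumulated token by token, swapping a handful of words barely moves it, whereas regenerating the whole passage with another model resets the statistics entirely, which is the kind of "globalized" tampering a detector struggles with.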

Beyond the technical challenges, OpenAI is also grappling with the broader implications of its detection tool. There is a recognition that releasing such a tool could have a significant impact on the AI community as a whole. For example, it could lead to a decline in usage, as some users may be put off by a watermarking system. It could also put OpenAI at a disadvantage if other AI developers do not adopt similar measures.

OpenAI’s decision to withhold its AI detection tool is a complex one. While the need for such a tool is evident, the potential consequences of its release are far-reaching. As OpenAI continues to refine the technology and weigh those impacts, the question of when, or even if, the detection tool will be released remains open for now.

Tom is a tech writer with a detailed view on ensuring the best buying advice, most useful information, and latest news makes its way into PC Guide's articles.