Could AI make Judges ‘obsolete’? US Chief Justice urges caution

Would you trust AI to be your lawyer?

US chief justice John Roberts warns of AI automation in court.



AI (Artificial Intelligence) is already having a noticeable effect on most industries, and lawyers are concerned that the courts of law could well be next, calling the ethics of automating justice into question. While big tech is excited about new breakthroughs in machine learning, judicial figures, including US Chief Justice John Roberts, do not share that confidence, maintaining that human judges are essential to the application of human judgment. And while these technologies have great potential in other areas, the EU AI Act singles out the administration of justice as an area where AI must be treated with particular caution. Surely, then, AI can't make our human judges obsolete?

The EU AI Act

Last year saw several impactful events that brought AI safety concerns to light and offered solutions, both legal and social, to tackle them. Between the AI Executive Order issued by US President Joe Biden and the AI Safety Summit held by UK Prime Minister Rishi Sunak just two days later, world governments are well aware of artificial intelligence and the safety concerns it raises.

In addition, last year saw the formation of the Frontier Model Forum, an industry body founded by leading AI firms Google, Microsoft, Anthropic, and OpenAI. Together, these tech giants keep a watchful eye on the progression of the AI industry, providing the highly technical guidance required for safe and sustainable development in areas where government officials are ill-equipped to advise.

In December 2023, the European Parliament and the Council of the EU reached a political agreement on the EU AI Act, outlining safety measures and transparency requirements relating to artificial intelligence for businesses operating in the EU. The Act builds on five years of proposals, including the European Strategy on AI (first published in 2018) and the Ethics Guidelines for Trustworthy AI (2019), as well as the Assessment List for Trustworthy AI, published in 2020 by the High-Level Expert Group on Artificial Intelligence (HLEG).


US Chief Justice John Roberts warns of ‘significant’ AI impact on law

The EU AI Act proposes a rating system to classify business activities from least to most potentially harmful. At the lowest end of the scale is minimal risk: AI systems that pose little to no risk to citizens' rights or safety. In some cases, these systems can even increase safety, as with spam filters that detect malicious emails carrying viruses, or language filters that block hate speech in online video games.

Some business activities are harder to classify, such as self-driving cars. Emotionally, we want to believe that humans are better drivers than a machine (which was ultimately designed by a human anyway). You and I would like to believe that even in the face of objective studies showing that the rate of both fatal and non-fatal crashes is lower for self-driving cars than for their human-driven counterparts.

Self-driving cars are sure to result in a number of fatalities, in the same way that human-driven cars already do. However, we are societally at peace with the risk of death when weighed against the convenience of fast personal transport. It’s not a comfortable truth, but it does mean that self-driving cars don’t need to be death-free in order to be considered a positive change in terms of safety — they only need to be safer than what we already use.

How the EU AI Act protects against the automation of justice in a court of law

The upper end of the scale, designated "unacceptable risk," decrees that "anything considered a clear threat to EU citizens will be banned." That would of course include AI-powered weaponry. One step below sits the second-highest designation, high risk, which is where the EU AI Act meets the legal system itself.

High-risk AI includes systems involved in critical public infrastructure, education, safety products, employment, law enforcement, and the administration of justice. These sectors will be subject to continual human oversight and the strictest vetting process before AI can be used, if at all.

Thankfully for Chief Justice John Roberts, it looks as though the legal system will maintain its human touch for some time to come.

Steve is the AI Content Writer for PC Guide, writing about all things artificial intelligence. He currently leads the AI reviews on the website.