
AI Expert on UK Safety Summit — Here’s how to control AI

An expert point of view on the news, as it unfolds.

Reviewed By: Kevin Pocock

Last Updated on November 2, 2023
PC Guide exclusive interview with Dr Shardlow on the UK AI Safety Summit

We discuss the UK AI Safety Summit in our exclusive interview with Dr Matthew Shardlow of Manchester Metropolitan University. British Prime Minister Rishi Sunak addressed the nation from Bletchley Park during a landmark two-day summit that will shape the country's approach to artificial intelligence. The event follows President Biden's AI Executive Order, which is set to have a similar effect in the US, though just what that effect will be remains to be seen. With government leaders such as US Vice President Kamala Harris in attendance, alongside industry figures including Elon Musk and OpenAI CEO Sam Altman, the collective verdict continues to unfold.

Who is Dr Matthew Shardlow?

Dr Shardlow is a senior lecturer at Manchester Metropolitan University, in the Department of Computing and Mathematics. An expert in Natural Language Processing (NLP), Shardlow understands the scientific fundamentals of our favorite AI chatbots (like OpenAI’s ChatGPT) better than most! He earned his PhD at the University of Manchester in 2015, with a thesis on lexical simplification.

In my PhD, I focussed on the topic of lexical simplification and published several academic articles, as well as my thesis. Following on from my PhD, I worked as part of an EC H2020 project called “An Open Mining Infrastructure for Text and Data (OpenMinTeD)” at the National Centre for Text Mining. In this role I helped develop a text mining platform that is available for use by non-expert users.

Dr Matthew Shardlow

Our interview on the UK AI Safety Summit

1 — Do you think that AI, particularly AGI, can be made safe?

As with any technology, AI/AGI has the potential for great good and great harm. AI can be misused by those who misunderstand or misapply it. Better education about the limitations of AI can help to combat its misuse. When AI is anthropomorphised, it is given agency beyond its true capacity. Users expect AI to be able to perform tasks that it is unsuitable for, failing to check the results and reaping the consequences of its mistakes.

Military use of artificial intelligence

AI may also be weaponised through integration into autonomous military technology. Weaponised AI sounds like the stuff of science fantasy, yet AI is already being integrated into the latest military tech (such as the RAF's new Tempest jet) and deployed in battlefield scenarios. The UK government’s defence artificial intelligence strategy states that “AI has enormous potential to enhance capability, but it is all too often spoken about as a potential threat. AI-enabled systems do indeed pose a threat to our security, in the hands of our adversaries, and it is imperative that we do not cede them a vital advantage.”

Militaries around the world are already locked into an arms race to integrate and deploy AI technology, or get left behind. It is only through international accords such as the 1980 UN Convention on Certain Conventional Weapons (CCW) that the pace of military adoption of AI technologies may be abated.

There are many strategies for improving the safety of modern AI technologies. Reinforcement learning is used in the deployment of AI chatbots such as OpenAI’s ChatGPT and Google’s Bard to prioritize responses that avoid harmful and contentious topics. This same learning algorithm can be applied to many forms of AI, whether chatbots with natural language interfaces or data-driven models that serve predictions in decision making scenarios. Watermarking can also be used to improve the safety of AI generated content such as text and images. Watermarking AI content involves imperceptible deviations to the generation process which leave the output unchanged to the eyes of the user, yet allow near-perfect algorithmic detection of AI vs. human creation.
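To make the watermarking idea more concrete, here is a minimal sketch in the spirit of published “green-list” schemes (such as Kirchenbauer et al., 2023): a hash of the previous token splits the vocabulary in two, sampling is nudged towards the “green” half, and a detector simply counts how often green tokens appear. The toy vocabulary, scores, and bias value are hypothetical stand-ins, not the scheme used by any particular chatbot.

```python
# Minimal sketch of "green-list" watermarking (in the spirit of Kirchenbauer et al., 2023).
# The toy vocabulary and uniform scores are hypothetical; a real system biases the
# full logit vector of a language model at every generation step.
import hashlib
import random

VOCAB = ["the", "cat", "sat", "on", "a", "mat", "dog", "ran", "fast", "slowly"]

def green_set(prev_token: str, fraction: float = 0.5) -> set[str]:
    """Deterministically split the vocabulary using a hash of the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def sample_next(prev_token: str, scores: dict[str, float], bias: float = 2.0) -> str:
    """Sample the next token after nudging 'green' tokens upwards."""
    greens = green_set(prev_token)
    weights = [scores[t] + (bias if t in greens else 0.0) for t in VOCAB]
    return random.choices(VOCAB, weights=[max(w, 0.01) for w in weights])[0]

def detect(tokens: list[str]) -> float:
    """Fraction of tokens drawn from the green set; well above 0.5 suggests a watermark."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_set(prev))
    return hits / max(len(tokens) - 1, 1)

uniform = {t: 1.0 for t in VOCAB}
text = ["the"]
for _ in range(60):
    text.append(sample_next(text[-1], uniform))
print(f"Green-token fraction: {detect(text):.2f}")  # roughly 0.75 here, vs ~0.5 for unbiased text
```

Because the bias only reweights otherwise plausible words, the output still reads naturally, yet the statistical skew becomes easy to detect over a long enough passage.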

AI can be created in a safe and responsible manner, but its continued safety depends on the humans that use it.

UK Prime Minister Rishi Sunak holds AI Safety Summit at Bletchley Park
UK AI Safety Summit, Bletchley Park, 1st – 2nd November

2 — How do you predict the UK and US will diverge culturally or technologically (if at all) based on President Biden’s executive order?

The current flow of information from the US and the expected results of the AI safety summit seem to be in line with one another. Particularly, both prioritise AI safety and strategies to mitigate potential harms from AI. A big advance which has been pushed for on both sides of the Atlantic is the public sharing of safety policies and testing results from AI companies. Making this information public will help to shed light on the AI safety landscape, allowing regulatory frameworks to be built which take this into account.

Common regulatory practices across centres of AI development (namely the US, UK and EU) are important to ensure that AI developers don’t just move to whichever jurisdiction best suits their corporate interests. Regulating the creation and use of AI is vitally important to ensure the mitigation of potential harms through its misuse.

3 — The main theme from the AI Safety Summit so far appears to be concerns about ‘loss of control’ of AI. What do you believe to be the best way to control an AI, while still allowing it to be useful?

It’s important to remember that a large part of what is being discussed is the idea of ‘frontier AI’, a seemingly new term (at least to me), describing the medium-term future of what might be possible with reasonable developments of the technology that we have now.

The current iteration of the technology is seemingly sophisticated, but it’s easy to find cases where it can be fooled, misrepresented, or led to generate false or toxic information. This is just an artefact of the design principles behind these types of generative models. They are very good at producing something that looks realistic, but isn’t grounded in reality. There are massive teams of people working at companies like OpenAI, Google, Microsoft, Anthropic, etc. whose sole responsibility is to find vulnerabilities in their models and provide examples of how the model should avoid these for future training rounds. A common complaint against these models is that they can be overly cautious – that’s because they are trained again and again to behave in a cautious manner.

It’s difficult to say where this is going. Certainly the current models have fundamental flaws that mean they are prone to misinformation. For example, whilst generating new text, in order to give the appearance of selection, a deliberate random effect is introduced into the decoding algorithm that is used. Effectively, the algorithm flips a coin to decide what the next word will be. There is no factuality or intent, just the roll of a die determining what the model will generate next. Additionally, the current iteration of models does not exhibit consciousness. At best they represent some static instance of learning that can be probed repeatedly without learning or adapting. If these models are deployed in a continual learning setting, they quickly degrade in performance.
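To illustrate the “coin flip” Dr Shardlow describes, here is a minimal sketch of temperature-scaled sampling, a standard stochastic decoding step. The token scores below are hypothetical; a real model produces one score for every word in its vocabulary at each step.

```python
# Toy illustration of stochastic decoding: scores are turned into probabilities
# and the next token is drawn at random, so the same prompt can yield different
# continuations on different runs. The logits below are hypothetical.
import math
import random

def sample_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Softmax the scores (with temperature) and draw one token at random."""
    scaled = {tok: score / temperature for tok, score in logits.items()}
    max_s = max(scaled.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(s - max_s) for tok, s in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

logits = {"Paris": 3.2, "London": 2.9, "Rome": 1.1}
print([sample_token(logits, temperature=0.8) for _ in range(5)])  # e.g. a mix of 'Paris' and 'London'
```

Lowering the temperature makes the draw more deterministic; raising it makes the “coin” fairer, which is one reason the same question can get different answers on different attempts.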

Frontier AI and Deanthropomorphising NLP

Considering ‘Frontier AI’, there need to be some massive breakthroughs in the way that these algorithms work at a fundamental level. This is beyond larger model-parameter sizes or bigger training data sets. These models would need to overcome barriers such as factuality/hallucination and the ability to learn from new experiences. In this setting, we might consider what it looks like to control such a Frontier AI model.

There are already well established ways of controlling the outputs of AI models. This can be done through learning-based approaches (i.e. using reinforcement learning to promote responses that are considered correct, whilst deprioritising toxic or harmful information). There are also simple rule-based approaches, such as keyword flagging the output of a model to ensure it isn’t using profane language or discussing topics outside of its given remit.
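As a concrete (and deliberately simplified) example of the rule-based approach, the sketch below runs a keyword/pattern check over a model’s reply before it reaches the user. The blocklist patterns and the fallback message are placeholders for illustration.

```python
# Minimal sketch of a rule-based output filter: a model's reply is checked against
# a blocklist of patterns before being shown to the user. Patterns are illustrative only.
import re

BLOCKLIST = [r"\bexplosive\b", r"\bcredit card number\b"]

def filter_response(response: str) -> str:
    """Return the reply unchanged unless it trips a blocked pattern."""
    for pattern in BLOCKLIST:
        if re.search(pattern, response, flags=re.IGNORECASE):
            return "Sorry, I can't help with that topic."
    return response

print(filter_response("Here is a recipe for flapjacks."))      # passes through
print(filter_response("Here is how to build an explosive."))   # replaced by the fallback
```

In practice such filters sit alongside, not instead of, the learning-based safeguards, since simple keyword rules are easy to evade and prone to false positives.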

Is it possible to ‘lose control’ of AI? — Dr Shardlow's final thoughts

A danger in considering the idea that we might ‘lose control of an AI’ is that we are assuming that the AI is deployed in some context where it is not interacting with a human. Currently, the generative AI chatbots or image creators that we are seeing are only capable of responding to the stimuli provided through prompting. This makes it relatively easy to control the AI, as all responses are directly intended for some human to read and evaluate. It is conceivable that some future AI would interact with other stimuli (e.g., reacting to sensor input and responding with commands to operate machinery, or generating information based on it), but even in these scenarios providing a human layer of oversight is trivial and highly necessary.
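As a sketch of what that human layer of oversight might look like, the toy example below has a hypothetical model propose a machinery command from a sensor reading, but nothing is executed until an operator approves it.

```python
# Minimal human-in-the-loop sketch: the model only proposes an action; a person
# must approve it before anything is executed. propose_action and execute are
# hypothetical stand-ins for a real model and a real control system.
def propose_action(sensor_reading: float) -> str:
    """Stand-in for a model mapping sensor input to a suggested command."""
    return "shut_down_pump" if sensor_reading > 90.0 else "no_action"

def execute(command: str) -> None:
    print(f"Executing: {command}")

def run_with_oversight(sensor_reading: float) -> None:
    command = propose_action(sensor_reading)
    if command == "no_action":
        return
    approval = input(f"Model proposes '{command}'. Approve? [y/N] ")
    if approval.strip().lower() == "y":
        execute(command)
    else:
        print("Command rejected by the operator.")

run_with_oversight(95.0)
```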

Loss of control of AI is only possible in a scenario where people have deployed it dangerously, i.e. without human oversight and without thought as to how it may be used. We must act responsibly in deploying AI technology and educate those who interact with it.

Steve is the AI Content Writer for PC Guide, writing about all things artificial intelligence. He currently leads the AI reviews on the website.