Frontier Model Forum – Big Tech promises AI safety

With public safety concerns at the forefront of AI research, what is the Frontier Model Forum, and how will it benefit us?




Tech giants Google, Microsoft, and OpenAI, along with start-up Anthropic, are forming the Frontier Model Forum. An industry body created to provide oversight on all matters at the forefront of AI, the Forum promises to ensure the safe and responsible development of frontier AI models. With the future of AI, and ultimately public safety, in the hands of big tech leaders, it's worth learning just what regulations and best practices they would suggest.

What is the Frontier Model Forum?

AI technology is a hot topic at the White House. On Friday 21st July 2023, the founding members of the forum gathered with Amazon, Inflection, and Meta to discuss AI safety with US President Joe Biden.

The matters over which they have jurisdiction are defined as “large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models, and can perform a wide variety of tasks”.


This is not far removed from the six-month pause on training models more powerful than OpenAI's GPT-4 proposed by the Future of Life Institute. Dr Landon Klein, the institute's Director of US Policy, has a reassuring resume: formerly the Principal Consultant to California's Committee on Privacy & Consumer Protection, he brings a PhD in Neuroscience to his views on public safety. Try "What is ChatGPT – and what is it used for?" or "How to use ChatGPT on mobile" for further reading on ChatGPT.

The European Union is also introducing an "AI Act", which "represents the most serious legislative attempt to regulate the technology." This follows reasonable scepticism about the commitment of AI companies to adhere to their own pledges. After all, the history of big tech and self-regulation paints a concerning picture – and that is exactly where the Frontier Model Forum comes in.

Why do we need the Frontier Model Forum?

In a public policy statement, Google explains that "Governments and industry agree that, while AI offers tremendous promise to benefit the world, appropriate guardrails are required to mitigate risks. Important contributions to these efforts have already been made by the US and UK governments, the European Union, the OECD, the G7 (via the Hiroshima AI process), and others."

The joint initiative will focus on three key areas:

  • Identifying best practices

“Promote knowledge sharing and best practices among industry, governments, civil society, and academia, with a focus on safety standards and safety practices to mitigate a wide range of potential risks.”

  • Advancing AI safety research

“Support the AI safety ecosystem by identifying the most important open research questions on AI safety. The Forum will coordinate research to progress these efforts in areas such as adversarial robustness, mechanistic interpretability, scalable oversight, independent research access, emergent behaviors and anomaly detection. There will be a strong focus initially on developing and sharing a public library of technical evaluations and benchmarks for frontier AI models.”

  • Facilitating information sharing among companies and governments

“Establish trusted, secure mechanisms for sharing information among companies, governments and relevant stakeholders regarding AI safety and risks. The Forum will follow best practices in responsible disclosure from areas such as cybersecurity.”

A new industry body that evaluates safety concerns before a product ships is essential to the responsible innovation of artificial intelligence. At the direction of the Biden-Harris Administration, these companies have pledged to submit to both internal and external security testing of AI systems, including oversight from independent experts.

The bias inherent in proprietary safety research should be obvious, so the input of an advisory board with the technical expertise to understand what each member is doing is a very positive thing for public safety. By participating in these joint initiatives with academics, policymakers, and each other, the Forum aims to steer AI towards a constructive future of cancer detection, climate change mitigation, and prevention of cyber threats.

What did Google, Microsoft, OpenAI, and Anthropic discuss at the White House?

Each of the representatives of the founding board gave their thoughts.

“We’re excited to work together with other leading companies, sharing technical expertise to promote responsible AI innovation,” asserts Kent Walker, President of Global Affairs for Google. “Engagement by companies, governments, and civil society will be essential to fulfil the promise of AI to benefit everyone.”

“Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control. This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity,” adds Brad Smith, Vice Chair & President of Microsoft.

Anna Makanju, Vice President of Global Affairs at OpenAI, the company that sparked it all, had more to say. “Advanced AI technologies have the potential to profoundly benefit society, and the ability to achieve this potential requires oversight and governance. It is vital that AI companies – especially those working on the most powerful models – align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible. This is urgent work and this forum is well-positioned to act quickly to advance the state of AI safety.”

What is Anthropic?

Anthropic describe themselves as a public benefit company. Their goal, and the reason they rank alongside the tech giants we all recognise on the founding board, is public safety for AI. Nothing more, nothing less. In further detail, their website reads “Anthropic is an AI safety and research company based in San Francisco. Our interdisciplinary team has experience across ML, physics, policy, and product. Together, we generate research and create reliable, beneficial AI systems.”

“Anthropic believes that AI has the potential to fundamentally change how the world works,” posits CEO Dario Amodei at the White House meeting. “We are excited to collaborate with industry, civil society, government, and academia to promote safe and responsible development of the technology. The Frontier Model Forum will play a vital role in coordinating best practices and sharing research on frontier AI safety.”

Steve is the AI Content Writer for PC Guide, writing about all things artificial intelligence. He currently leads the AI reviews on the website.