Frontier Model Forum – Big Tech promises AI safety
Tech giants Google, Microsoft, and OpenAI, along with start-up Anthropic, are forming the Frontier Model Forum. An industry body created to provide oversight on all matters at the forefront of AI, the Forum promises to ensure the safe and responsible development of frontier AI models. With the future of AI, and ultimately public safety, in the hands of big tech leaders, it's worth examining exactly what regulations and best practices they propose.
What is the Frontier Model Forum?
AI technology is a hot topic at the White House. On Friday 21st July 2023, the founding members of the forum gathered with Amazon, Inflection, and Meta to discuss AI safety with US President Joe Biden.
The matters over which they have jurisdiction are defined as “large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models, and can perform a wide variety of tasks”.
This is not far removed from the six-month pause on training models more powerful than OpenAI's GPT-4, advocated by Landon Klein. Dr Klein, Director of US Policy at the Future of Life Institute, has a reassuring résumé: formerly Principal Consultant to California's Committee on Privacy & Consumer Protection, he brings a PhD in Neuroscience to his views on public safety.
The European Union is also introducing an “AI act” which “represents the most serious legislative attempt to regulate the technology.” This follows reasonable scepticism about the commitment of AI companies to adhere to their own pledges. After all, the history of big tech and self-regulation paints a concerning picture – and that is exactly where the Frontier Model Forum comes in.
Why do we need the Frontier Model Forum?
In a public policy statement, Google explains that “Governments and industry agree that, while AI offers tremendous promise to benefit the world, appropriate guardrails are required to mitigate risks. Important contributions to these efforts have already been made by the US and UK governments, the European Union, the OECD, the G7 (via the Hiroshima AI process), and others.”
The joint initiative will focus on three key areas:
- Identifying best practices
“Promote knowledge sharing and best practices among industry, governments, civil society, and academia, with a focus on safety standards and safety practices to mitigate a wide range of potential risks.”
- Advancing AI safety research
“Support the AI safety ecosystem by identifying the most important open research questions on AI safety. The Forum will coordinate research to progress these efforts in areas such as adversarial robustness, mechanistic interpretability, scalable oversight, independent research access, emergent behaviors and anomaly detection. There will be a strong focus initially on developing and sharing a public library of technical evaluations and benchmarks for frontier AI models.”
- Facilitating information sharing among companies and governments
“Establish trusted, secure mechanisms for sharing information among companies, governments and relevant stakeholders regarding AI safety and risks. The Forum will follow best practices in responsible disclosure from areas such as cybersecurity.”
A new industry body that evaluates safety concerns before a product ships is essential to the responsible innovation of artificial intelligence. At the direction of the Biden-Harris Administration, these companies have pledged to submit their AI systems to both internal and external security testing, including oversight from independent experts.
The bias of proprietary safety research should be obvious, so input from an advisory board with the technical expertise to understand one another's work is a clear win for public safety. By participating in these joint initiatives with academics, policymakers, and each other, the Forum aims to steer AI towards a constructive future of cancer detection, climate change mitigation, and the prevention of cyber threats.
What did Google, Microsoft, OpenAI, and Anthropic discuss at the White House?
Each of the representatives of the founding board gave their thoughts.
“We’re excited to work together with other leading companies, sharing technical expertise to promote responsible AI innovation” asserts Kent Walker, President of Global Affairs for Google. “Engagement by companies, governments, and civil society will be essential to fulfil the promise of AI to benefit everyone.”
“Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control. This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity” adds Brad Smith, Vice Chair & President of Microsoft.
Anna Makanju, Vice President of Global Affairs at OpenAI, the company that sparked it all, had more to say. “Advanced AI technologies have the potential to profoundly benefit society, and the ability to achieve this potential requires oversight and governance. It is vital that AI companies – especially those working on the most powerful models – align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible. This is urgent work and this forum is well-positioned to act quickly to advance the state of AI safety.”
What is Anthropic?
Anthropic describe themselves as a public benefit company. Their goal, and the reason they rank alongside the tech giants we all recognise on the founding board, is public safety for AI. Nothing more, nothing less. In further detail, their website reads “Anthropic is an AI safety and research company based in San Francisco. Our interdisciplinary team has experience across ML, physics, policy, and product. Together, we generate research and create reliable, beneficial AI systems.”
“Anthropic believes that AI has the potential to fundamentally change how the world works,” said CEO Dario Amodei at the White House meeting. “We are excited to collaborate with industry, civil society, government, and academia to promote safe and responsible development of the technology. The Frontier Model Forum will play a vital role in coordinating best practices and sharing research on frontier AI safety.”