
NVIDIA unveils GH200 ‘Grace Hopper’ AI superchip

The GH200 ‘Grace Hopper’ AI superchip is set to power the AI servers of tomorrow. What is it, how powerful is it, and what’s next?

Reviewed By: Kevin Pocock

Last Updated on September 19, 2023

Generative AI takes a tonne of compute power, and no one is benefitting from that more right now than NVIDIA. The GPU manufacturer has just unveiled its latest high-speed chip, the GH200. Designed with generative AI workloads in mind, the NVIDIA GH200 ‘Grace Hopper’ AI superchip will power the artificial intelligence servers of tomorrow – so what’s so special about it? NVIDIA CEO Jensen Huang explains.

What is the NVIDIA GH200 superchip?

Processor demand has skyrocketed with the advent of accessible AI, and NVIDIA is at the technological front line. While thousands of entrepreneurial early adopters are building software businesses on LLMs (Large Language Models) – the technology behind Bing Chat, Bard, and of course OpenAI’s ChatGPT – someone has to design and manufacture the hardware it all runs on.


“To meet surging demand for generative AI, data centers require accelerated computing platforms with specialized needs,” explains founder and CEO of NVIDIA, Jensen Huang. “The new GH200 Grace Hopper Superchip platform delivers this with exceptional memory technology and bandwidth to improve throughput, the ability to connect GPUs to aggregate performance without compromise, and a server design that can be easily deployed across the entire data center.”

Advancements of the NVIDIA GH200 Grace Hopper platform

To quote the press release, “The new platform uses the Grace Hopper Superchip, which can be connected with additional Superchips by NVIDIA NVLink™, allowing them to work together to deploy the giant models used for generative AI. This high-speed, coherent technology gives the GPU full access to the CPU memory, providing a combined 1.2TB of fast memory when in dual configuration.” Additional advances in AI infrastructure include “HBM3e memory, which is 50% faster than current HBM3, delivers a total of 10TB/sec of combined bandwidth, allowing the new platform to run models 3.5x larger than the previous version, while improving performance with 3x faster memory bandwidth.”

The Grace Hopper superchip is the world’s first processor to use HBM3e (High Bandwidth Memory 3e), capable of 5TB/s speeds – for those unfamiliar with data speeds, this is hilariously fast. This is ‘beating a Bugatti Veyron in a drag race by over 100mph’ fast.
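To put that 5TB/s figure in perspective, here is a quick back-of-envelope sketch. The model size used below is a hypothetical example (a 175-billion-parameter model stored in FP16), not an NVIDIA figure:

```python
# Back-of-envelope: what does 5 TB/s of memory bandwidth mean in practice?
# Illustrative numbers only - the model size is a hypothetical example.

HBM3E_BANDWIDTH_TBS = 5.0  # TB/s per superchip, per the article
MODEL_SIZE_GB = 350        # e.g. 175B parameters at 2 bytes each (FP16)

# Time for one full sweep over the model's weights at peak bandwidth
seconds_per_pass = (MODEL_SIZE_GB / 1000) / HBM3E_BANDWIDTH_TBS
print(f"One full read of a {MODEL_SIZE_GB} GB model: {seconds_per_pass * 1000:.0f} ms")
# prints: One full read of a 350 GB model: 70 ms
```

In other words, at peak bandwidth the chip could sweep the entire weight set of a very large model in well under a tenth of a second – the kind of throughput that matters when every generated token requires touching most of those weights.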

With 3.5x the memory capacity and 3x the bandwidth of comparable GPUs, the platform puts NVIDIA ahead in high-performance computing. TDP is, of course, a significant concern given the power consumption at the frontier of AI. Thankfully, the Arm-based NVIDIA Grace CPU provides high performance with impressive efficiency – eight petaflops of AI performance in a single server.

In addition to this raw power, the modularity of the system allows many AI superchips to work together for compute-intensive tasks.

Built on existing infrastructure – the NVIDIA NVLink Switch System and NVIDIA MGX – the firm is extremely well positioned to hold the majority of the market share in AI processing hardware.

Who was Grace Hopper?

Grace Hopper, from whom the processor takes its name, was an extremely influential American computer programmer. Few computer scientists rise to her level of renown, so what is Grace Hopper best known for?

Among several other contributions, Hopper pioneered machine-independent programming languages with FLOW-MATIC – a direct precursor to COBOL. The latter is still used in some industries today.

In addition to her pioneering programming, Hopper joined the United States Navy during World War II – after initially being denied entry – and helped develop one of the world’s first general-purpose computers, the Harvard Mark I. She later rose to the rank of rear admiral in the 1980s. An extremely impressive career, to say the least.

Steve is an AI Content Writer for PC Guide, writing about all things artificial intelligence. He currently leads the AI reviews on the website.