
Nvidia CEO Jensen Huang claimed “Moore’s Law is dead” years ago, and now he’s proving it with new AI chips

Huang's Law is also a thing, but that's all about GPUs
Last Updated on January 10, 2025

It’s no secret that Nvidia has profited massively from the AI boom. CEO Jensen Huang’s net worth gains this year are another reminder of that. If you’re a PC gamer, you may know the company mainly for its graphics cards – speaking of which, the RTX 50 series has just launched – but it’s Nvidia’s AI chips that have helped the company become a powerhouse on the global stage.

And now, a few years after declaring that “Moore’s Law is dead” in a Q&A session, Huang says the company’s AI chips are improving faster than the law predicts. For reference, Intel defines Moore’s Law as “the observation that the number of transistors on an integrated circuit will double every two years with minimal rise in cost” – it was originally proposed by Gordon Moore, Intel’s co-founder, and has largely held up over the decades.

Nvidia’s AI chips are improving faster than Moore’s Law

In a new interview with TechCrunch earlier this week, the Nvidia CEO had a few things to say about the state of Nvidia’s AI chips:

“Our systems are progressing way faster than Moore’s Law”

“We can build the architecture, the chip, the system, the libraries, and the algorithms all at the same time. If you do that, then you can move faster than Moore’s Law, because you can innovate across the entire stack”

Jensen Huang, Nvidia CEO and co-founder

This is not to discredit how important Moore’s Law has been for computing. Huang himself is quoted as saying that “Moore’s Law was so important in the history of computing because it drove down computing costs,” but the advent of AI, and the rapid growth it has enabled, has pushed technology forward faster than the law ever predicted. He also predicts that “the same thing is going to happen with inference where we drive up the performance, and as a result, the cost of inference is going to be less” – inference refers to a trained AI model making predictions by analyzing new data.

Right now, Nvidia’s new Blackwell architecture is the driving force behind both its AI chips and its GeForce graphics cards. Huang recently confirmed that “production is progressing smoothly” for Blackwell AI chips, which are on track to deliver a 30x performance boost for inference workloads. This has now been shown off in the form of the new GB200 NVL72 data center chip, built on the Blackwell architecture, which Huang paraded on stage at CES 2025.

Perhaps Nvidia’s biggest success to date is the H100 chip (based on Hopper, the architecture that preceded Blackwell), which tech companies rushed to stock up on for training AI models. The latest generation is evidently a massive improvement over the last, at a time when inference is a top priority. According to TechCrunch, Huang claimed that Nvidia’s AI chips today are a thousand times better than what the company made 10 years ago – well ahead of anything Moore’s Law would predict.
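For a sense of scale, here’s a rough back-of-the-envelope comparison (our own illustration, not a calculation Nvidia has published): doubling every two years, as Moore’s Law describes, compounds to only about a 32x gain over a decade – a far cry from the 1,000x figure Huang cites.

```python
# Back-of-the-envelope check: Moore's Law doubling vs. Huang's claimed gain.
# Assumes the Intel definition quoted above: a doubling every two years.

YEARS = 10
DOUBLING_PERIOD_YEARS = 2

# Moore's Law compounds as 2^(years / doubling period): 2^5 = 32x over a decade.
moores_law_gain = 2 ** (YEARS / DOUBLING_PERIOD_YEARS)

# Huang's figure for Nvidia's AI chips today vs. 10 years ago.
claimed_gain = 1_000

print(f"Moore's Law over {YEARS} years: ~{moores_law_gain:.0f}x")
print(f"Huang's claimed improvement:   ~{claimed_gain}x")
print(f"That's ~{claimed_gain / moores_law_gain:.0f}x beyond the Moore's Law pace")
```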
