Meta to build open-source AGI with world’s biggest AI data centre

Meta has placed what may be the largest order yet for NVIDIA H100 GPUs, the most in-demand AI processors on Earth. A record-breaking 350,000 GPUs are earmarked for Meta's AI data centers, at a total cost running into the billions of dollars. The order could take up to a year to fulfill, but when 2025 rolls around, Meta will be in a front-running position in the AI race. CEO Mark Zuckerberg's goal of open-source AGI is now within reach.
Is Meta building the world’s biggest AI data center?
Meta has 350,000 NVIDIA H100 GPUs inbound, likely enough to form the largest AI data center in the world. To put this into perspective, each unit can cost around $35,000 to $40,000 if bought individually on the secondary market. At wholesale pricing, Meta's total outlay is estimated at roughly $10.5 billion, around $3 billion less than the same order would cost at secondary consumer-market prices.
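A quick back-of-the-envelope check shows where these figures come from. The per-unit prices below are the estimates reported above, not numbers confirmed by Meta or NVIDIA; the implied wholesale price of about $30,000 per GPU is inferred from the $10.5 billion total:

```python
# Rough check of the reported figures (all prices are estimates
# from the article, not confirmed by Meta or NVIDIA).
GPU_COUNT = 350_000
WHOLESALE_PRICE = 30_000             # implied per-unit wholesale price (USD)
SECONDARY_RANGE = (35_000, 40_000)   # reported secondary-market range (USD)

wholesale_total = GPU_COUNT * WHOLESALE_PRICE
secondary_low = GPU_COUNT * SECONDARY_RANGE[0]
secondary_high = GPU_COUNT * SECONDARY_RANGE[1]

print(f"Wholesale total:  ${wholesale_total / 1e9:.2f}B")  # $10.50B
print(f"Secondary market: ${secondary_low / 1e9:.2f}B to "
      f"${secondary_high / 1e9:.2f}B")
print(f"Estimated saving: ${(secondary_low - wholesale_total) / 1e9:.2f}B "
      f"to ${(secondary_high - wholesale_total) / 1e9:.2f}B")
```

The "roughly $3 billion" saving quoted above sits toward the upper end of that range.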
For the low price of someone’s annual salary, you get an 80GB PCIe 5.0 GPU capable of 26 teraFLOPS of FP64 compute. Pull the most impressive number from the spec sheet and you’ll see a wildly different figure of 3,026 teraFLOPS. That figure is quoted “with sparsity” and refers specifically to the FP8 tensor-core performance of the standard PCIe model. The higher-bandwidth H100 SXM variant raises it to an almost unheard-of 3,958 teraFLOPS.
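To get a sense of scale, multiply that best-case figure across the whole order. This assumes every GPU is the SXM variant running at its FP8 with-sparsity peak, which is a marketing-style theoretical ceiling; real training throughput is far lower once interconnect and utilization are accounted for:

```python
# Theoretical peak throughput of the full fleet, assuming 350,000
# H100 SXM units at their FP8 with-sparsity peak (best case only).
GPU_COUNT = 350_000
PEAK_TFLOPS_FP8_SPARSE = 3_958   # H100 SXM, FP8 tensor core, with sparsity

fleet_teraflops = GPU_COUNT * PEAK_TFLOPS_FP8_SPARSE
fleet_exaflops = fleet_teraflops / 1e6   # 1 exaFLOPS = 1e6 teraFLOPS

print(f"Fleet peak: {fleet_exaflops:,.0f} exaFLOPS (FP8, sparse)")
```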
Some services will let you rent an H100 or H200, at an optimistic cost of $2 per hour and up. Given the extreme corporate demand for the processor, this is the only way most smaller businesses will be able to get their hands on one. Enterprise data centers like those of Meta AI will benefit from the NVIDIA AI Enterprise initiative; the social media giant is an NVIDIA HGX H100 partner with NVIDIA-certified systems.
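Those rental rates also explain why buying only makes sense at sustained utilization. Using the optimistic $2/hour figure above and the low end of the secondary-market price range (illustrative numbers; real cloud pricing varies widely by provider), the break-even point works out to roughly two years of continuous use:

```python
# Rough rent-vs-buy comparison for a single H100, using the
# illustrative figures from the article.
HOURLY_RATE = 2.00        # optimistic rental price (USD/hour)
PURCHASE_PRICE = 35_000   # low end of the secondary-market range (USD)

hours_to_break_even = PURCHASE_PRICE / HOURLY_RATE
years_to_break_even = hours_to_break_even / (24 * 365)

print(f"Break-even after {hours_to_break_even:,.0f} GPU-hours")
print(f"That is about {years_to_break_even:.1f} years of continuous use")
```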
Meta is creating open-source AGI ‘for all’
Posting on Threads, CEO Mark Zuckerberg stated that “it’s become clearer that the next generation of services requires building full general intelligence”. Here, the Facebook founder is referring to artificial general intelligence (AGI): a level of machine intelligence equal to human intelligence. Of course, computers already process data far faster than humans and can retrieve a far greater variety of information than any person could possibly remember.
This processing speed and information retrieval are not what defines AGI. ‘Full general intelligence’ is something that we have not yet achieved. It is the simulation of intelligence, thinking in ‘the same way’ as a human, with all its inferences and biases. An AGI will be proactive in telling us information that we don’t even know that we don’t know — something that traditional search engines cannot do.
It goes without saying that most businesses cannot afford to compete with a $10.5 billion hardware investment. That outlay may even put Meta ahead of its leading competition, OpenAI and Microsoft, at least in terms of H100 acceleration.
When will LLaMA 3 be released?
The open-source AI most likely to emerge first from this world-leading hardware advantage is LLaMA 3. The next generation of Meta AI models is expected to arrive at some point this year, though there is no official release date for LLaMA 3 just yet. One thing seems certain: 350,000 AI accelerators will bring that release forward substantially.