Google Gemini vs GPT-4 (ChatGPT 4)
Google has released its Gemini AI model, the long-awaited answer to OpenAI’s GPT-4 large language model (LLM). Gemini now powers Google Bard, taking over from PaLM 2, another of Google’s proprietary AI models. Can this new rival to OpenAI’s GPT-4 surpass the competition? We look at the three versions of the new multimodal AI model – Gemini Nano, Gemini Pro, and Gemini Ultra – and how their benchmarks stack up in Google Gemini vs GPT-4.
Is Gemini better than GPT-4?
According to Google, Gemini “represents a significant leap forward in how AI capabilities can help improve our daily lives”.
The new model also represents a significant leap in performance over its predecessors, as demonstrated by the benchmark results released at launch. Even so, Gemini is off to a rocky start: many details of the launch event were revealed to be pre-recorded rather than the real-time demonstrations they initially appeared to be. Despite this, the objective strength of the Gemini model may yet win out over the marketing misstep.

The first of these benchmark tests is called MMLU, short for Massive Multitask Language Understanding. This test benchmarks a model across 57 subjects, “including elementary mathematics, US history, computer science, law, and more”. One of the test’s authors, Dan Hendrycks, noted that OpenAI’s GPT-3 model scored an impressive 20 percentage points above random chance, with the caveat that GPT-3 needed “substantial improvements before [it] can reach expert-level accuracy”.
However, as the research paper was last revised on January 21st, 2021, the model it discusses is no longer the SOTA (state of the art). GPT-4 and its newer GPT-4 Turbo variant far outperform it: more recent testing shows that GPT-4, OpenAI’s foundation model, scored 86.4% on MMLU in a 5-shot setting.
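To make the “5-shot” figure concrete, here is a minimal, hypothetical sketch of how a 5-shot multiple-choice evaluation in the style of MMLU is typically scored: five solved examples are prepended to each question, and the model’s answer letter is compared against the gold answer. `ask_model` is a stub standing in for a real LLM call, not any actual API.

```python
# Sketch of 5-shot multiple-choice scoring, MMLU-style.
# ask_model() is a hypothetical stand-in for a real model query;
# here it is a stub so the scoring logic can run end to end.

def build_prompt(shots, question, choices):
    """Prepend five solved examples, then pose the new question."""
    parts = []
    for ex in shots:
        parts.append(f"Q: {ex['q']}\nChoices: {ex['choices']}\nA: {ex['a']}")
    parts.append(f"Q: {question}\nChoices: {choices}\nA:")
    return "\n\n".join(parts)

def ask_model(prompt):
    # Stub: a real harness would send the prompt to the model here.
    return "B"

def score(dataset, shots):
    """Fraction of questions where the model's answer letter matches gold."""
    correct = 0
    for item in dataset:
        prompt = build_prompt(shots, item["q"], item["choices"])
        if ask_model(prompt).strip() == item["a"]:
            correct += 1
    return correct / len(dataset)
```

The reported percentages (86.4% for GPT-4, 90% for Gemini Ultra) are simply this kind of accuracy computed over the full 57-subject question set.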
By contrast, Gemini Ultra exceeds expert-level accuracy, scoring 90% on the MMLU benchmark against 89.8% for a human expert. This is significant because, as Google puts it, “Gemini is the first model to outperform human experts on MMLU (Massive Multitask Language Understanding), one of the most popular methods to test the knowledge and problem-solving abilities of AI models.” OpenAI, for its part, emphasizes ethical AI development, focusing on reducing bias, promoting transparency, and ensuring safety in its models.
Google Gemini vs GPT-4 benchmark results
| Benchmark | Gemini Ultra | GPT-4 |
|---|---|---|
| MMLU | 90% | 86.4% |
| Big-Bench Hard | 83.6% | 83.1% |
| DROP (F1) | 82.4 | 80.9 |
| HellaSwag | 87.8% | 95.3% |
| GSM8K | 94.4% | 92.0% |
| MATH | 53.2% | 52.9% |
| HumanEval | 74.4% | 67.0% |
| Natural2Code | 74.9% | 73.9% |
As the benchmark results show, GPT-4 beats Gemini in only one test, HellaSwag. Google’s Gemini comes out ahead in the other seven text-based evaluations.
Comparing Gemini with OpenAI’s multimodal variant, GPT-4V, extends the list of wins quite substantially. Google DeepMind’s latest artificial intelligence scores above the competition in complex tasks relating to audio and vision capabilities.
Is Gemini better than GPT-4 at coding?
Yes, based on benchmark data, Gemini beats GPT-4 at writing code: it holds a significant 7.4-percentage-point lead on HumanEval (74.4% vs 67.0%), which tests Python code generation.
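For context on what HumanEval measures, here is a hedged sketch of the kind of check it performs: the model is given a function signature and docstring, and its completion only counts as a pass if hidden unit tests run cleanly. The task, completion, and test strings below are invented for illustration, not taken from the actual HumanEval dataset.

```python
# Illustrative HumanEval-style check: define the function from the
# prompt plus the model's completion, then run hidden assertions.

PROMPT = '''def add(a, b):
    """Return the sum of a and b."""
'''

COMPLETION = "    return a + b\n"  # what a model might generate

CHECK = '''
assert add(2, 3) == 5
assert add(-1, 1) == 0
'''

def passes(prompt, completion, check):
    """True if the completed function satisfies the hidden tests."""
    namespace = {}
    try:
        exec(prompt + completion, namespace)  # define the function
        exec(check, namespace)                # run the hidden tests
        return True
    except Exception:
        return False
```

A model’s HumanEval score is essentially the share of such problems whose generated completion passes its hidden tests, so the 74.4% vs 67.0% gap reflects how often each model produces working Python on the first attempt.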
Does Google Bard use Gemini?
Yes, the Gemini model is already integrated into Google Bard. It replaces PaLM 2, the predecessor model also developed by Google. Both are capable foundation models, but Gemini is the first to beat human experts on the MMLU benchmark, and it thereby poses a serious threat to GPT-4. With this new AI model, Google Bard is, at least on paper, more powerful than ChatGPT across a range of tasks. How that will translate into traffic and attention for these two rival technologies is a different question, as OpenAI has already carved out a substantial lead in AI chatbot market share.