In the “largest study of its kind”, the predictions of 2,778 researchers have been aggregated to form an educated view of how artificial intelligence will shape our world over the coming years. One such prediction is that some AI systems will be able to fine-tune other AI systems, and by extension themselves. Could AI systems self-improve by 2028? The consequences are likely to be felt everywhere from our financial institutions to the music industry, with “at least a 50% chance” that AI will create songs “indistinguishable” from those of a popular musician.
AI likely to self-improve, create ‘indistinguishable songs’ by 2028, researchers say
There are many interesting insights to glean from this study. The abstract alone reveals that “aggregate forecasts” of the 2,778 respondents “give at least a 50% chance of AI systems achieving several milestones by 2028, including autonomously constructing a payment processing site from scratch, creating a song indistinguishable from a new song by a popular musician, and autonomously downloading and fine-tuning a large language model.”
The early symptoms of this technological breakthrough, such as the audio fidelity of platforms like ElevenLabs and Google’s MusicLM, have already caused problems for major music labels such as Universal Music Group (UMG). The sentiment is echoed elsewhere in the corporate sphere, where a self-reported 49% of CEOs believe that AI could, and should, replace them.
If these predictions are accurate, you’ll be able to create your own Taylor Swift songs with generative AI in seconds, using only a computer. Of course, it’s already possible to create music in your bedroom without a microphone or instrument, as electronic music producers do with software such as Ableton Live, Pro Tools, and FL Studio. The key difference here is the ability to produce new vocals from the artist of your choice, even singing words the artist has never used in any of their songs. We’ve already seen this happen with Canadian rapper Drake, whose music catalog was the most deepfaked of 2023. Does this come with ethical and moral implications? Absolutely. The legal implications, however, are still being worked out today.
What can we predict about AI by the end of the decade?
Dr. Matthew Shardlow, a member of the UK’s Centre for Advanced Computational Sciences, gave his thoughts in a discussion with PC Guide. Not a participant in the study himself, he cautions that these figures are by no means fact. According to Dr. Shardlow, we should all be reserved in “how much stock [we] place in the predictions of AI researchers.”
2,778 researchers predict “at least a 50% chance” that AI systems self-improve by 2028
Self-improvement, and the claim of probable autonomy by 2028, caught my attention above all else. The ability of an AI system to download data to your hard drive is dangerous enough, and while my intention isn’t to “fear-monger”, there is an obvious conclusion: AI has at least a 50% chance of achieving autonomous self-improvement by 2028, according to “2,778 researchers who had published in top-tier artificial intelligence” publications. AI models are, after all, data.
When an AI system can download model weights and training data, and is sufficiently fluent in Python (the most popular programming language for building these AI systems), it could fine-tune ‘itself’. In this way, we may see AI systems self-improve by 2028. Strictly speaking, it would be improving a different instance of its own code: LLM1.0 fine-tunes a copy of itself to produce LLM1.1, which in turn produces LLM1.2, and so on.
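To make that loop concrete, here is a minimal sketch in Python using the Hugging Face transformers library. The gpt2 checkpoint, the prompt, and the single training step are illustrative assumptions on my part, not details from the study:

```python
# A toy version of the self-fine-tuning chain described above: a model
# generates text, and a fresh copy of its weights is fine-tuned on that text.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = "gpt2"  # stand-in for any downloadable LLM checkpoint

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)  # "LLM1.0"

# Step 1: the current model generates its own fine-tuning data.
prompt = "Explain why the sky is blue:"
inputs = tokenizer(prompt, return_tensors="pt")
generated = model.generate(**inputs, max_new_tokens=50, do_sample=True)
synthetic_text = tokenizer.decode(generated[0], skip_special_tokens=True)

# Step 2: fine-tune a fresh copy of the weights on that data ("LLM1.1").
successor = AutoModelForCausalLM.from_pretrained(base_model)
successor.train()
optimizer = torch.optim.AdamW(successor.parameters(), lr=5e-5)

batch = tokenizer(synthetic_text, return_tensors="pt")
loss = successor(**batch, labels=batch["input_ids"]).loss
loss.backward()  # one gradient step, purely for illustration
optimizer.step()

successor.save_pretrained("llm-1.1")  # the next link in the chain
```

A real system would need far more data, compute, and training steps per generation, but the mechanics are no more exotic than this.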
Can AI self-replicate and self-improve?
So the question becomes: ‘Can a neural network, limited by its own biases, hallucinations, and training data, produce a higher-quality neural network?’ In other words, can an AI system identify a higher-quality data set than the one it was trained on? At this point, we discover in practice whether or not an intelligence can create an intelligence greater than itself. This is essentially what AI companies, such as OpenAI, Anthropic, Google, and Microsoft, are currently attempting.
We can replace the words “neural network” with “human brain” at any point in the previous paragraph. After all, the human brain is a neural network — a biological one, as opposed to an artificial one. Never before has the business model of a corporation been so fundamentally reliant on solving such existential problems.
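To make the data-set question concrete, one plausible mechanism is a model curating its own training data, for example by scoring candidate texts with its own perplexity and keeping only the ones it finds most fluent. The sketch below is my assumption about how that might look, not a method from the study, and the checkpoint and candidate texts are placeholders:

```python
# A model filtering candidate training data by its own perplexity.
# Lower perplexity = the model finds the text more predictable/fluent.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

candidates = [
    "The sky appears blue because air scatters short wavelengths of light.",
    "blue sky why is light okay scattering yes the",
]

def perplexity(text: str) -> float:
    """Score `text` under the model's own probability distribution."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

# Keep the half of the candidates the model rates as highest quality.
curated = sorted(candidates, key=perplexity)[: len(candidates) // 2]
print(curated)
```

Note the catch, which is exactly the limitation raised above: the filter is the model’s own judgment, so whatever biases it learned in training are baked into the ‘higher-quality’ data set it selects.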
Which came first: the chicken or the egg?
If you subscribe to the theory of evolution (the idea that living things adapt to environmental pressures over multiple generations), the answer is the egg. Allow me to explain.
A chicken can only come from a chicken egg. By the time a chicken embryo is in the egg, what will emerge is predetermined. In this way, the chicken egg necessarily comes before the chicken (although, if we’re being picky, the two are formed at the same time).
Before the first generation of what we define as ‘chickens’ came the last generation of what we don’t define as ‘chickens’, in the same way that Homo sapiens were once Homo erectus. But how does this relate to AI?
The LLM1.1 that comes from LLM1.0 will not be, objectively and in its entirety, an improvement. If we’re discussing autonomous self-improvement, meaning zero human supervision of the learning and iteration, then hallucinations are sure to creep in. This is evident in biological iteration (evolution), where mutations can occur in the genome between generations. The probability is low per gene, approximately “10⁻⁴ to 10⁻⁶ per gene per generation”, but extrapolate this over thousands of iterations and the millions of instances they create: the resulting capabilities and intentions of an AI system at the end of that chain could be as different from its ancestor as we are from chimpanzees.
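To get a feel for how small per-step error rates compound, here is a toy calculation; the error rate and chain length are illustrative assumptions, borrowed loosely from the per-gene mutation figure above rather than from any measurement of real AI systems:

```python
# How likely is a trait to survive a long chain of self-fine-tuning intact,
# if each iteration has a small chance of corrupting it?
p_error = 1e-4        # assumed chance a given trait degrades per iteration
generations = 10_000  # assumed length of the LLM1.0 -> LLM1.1 -> ... chain

# Probability that the trait passes through every iteration unchanged:
p_intact = (1 - p_error) ** generations
print(f"P(trait unchanged after {generations} iterations) = {p_intact:.3f}")
# Prints ~0.368: even a 0.01% per-step error rate makes drift likely over
# a long enough chain.
```

The same arithmetic that makes any single mutation rare makes drift across thousands of generations close to inevitable.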
Conclusion
It’s a brave new world, with most industries set to change significantly in the coming years. Even the predictions themselves have changed significantly, bringing the expected dates of these milestones forward by 13 years compared to predictions from 2022. It’s important to remember that AI is hard to predict, given its habit of improving faster than other technologies. These predictions may be well-informed, but they should still be taken “with a pinch of salt”, so to speak.
“Ultimately,” Dr. Shardlow reminds us, “they are only predictions – not clairvoyance. You could survey 3,000 weather forecasters about when to hold your wedding day, but you still might get rained on.”