British-Canadian computer scientist Geoffrey Hinton, who left Google in May 2023 amid such worries, warns of the profound risks posed by AI systems. The Professor Emeritus of Computer Science won the Turing Award for his contributions to the field. In a move some have, perhaps questionably, compared to Oppenheimer renouncing the technology he helped create, Hinton now advocates for safety and regulation in the booming AI revolution.
Who is Geoffrey Hinton?
Geoffrey Hinton is known as the Godfather of AI, having won the 2018 Turing Award. The award is sometimes referred to as the "Nobel Prize of Computing", holding similar esteem in its own field, and was awarded jointly to Hinton, Yoshua Bengio, and Yann LeCun for their combined efforts in deep learning.
After earning his PhD in Artificial Intelligence at the University of Edinburgh in 1978, Hinton taught at various universities before founding the Gatsby Computational Neuroscience Unit, funded by the Gatsby Charitable Foundation, at University College London. He then began a ten-year tenure at Google in March 2013; Google Bard was, naturally, one of the many results of that era.
The Professor Emeritus at the University of Toronto has supervised multiple notable students who went on to make their own mark on the AI landscape.
Ilya Sutskever, PhD
Sutskever is a co-founder and Chief Scientist of OpenAI, the firm behind ChatGPT. A doctoral student of Geoffrey Hinton, he later studied under Andrew Ng (former Chief Scientist at Baidu and co-founder of Google Brain, where Sutskever himself worked from 2013 to 2015). He is a co-inventor of the convolutional neural network (CNN) AlexNet and a Fellow of the Royal Society (FRS).
Yann André LeCun, PhD
LeCun is Vice President & Chief AI Scientist at Meta, where he has worked since 2013. The Turing Award-winning computer scientist was a postdoctoral researcher under Geoffrey Hinton and has since become a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), in addition to co-developing image compression technology and the programming language Lush. From 1988 to 1996, LeCun worked in the Adaptive Systems Research Department at Bell Labs, one of the 20th century's most significant research organizations in AI.
What is Geoffrey Hinton’s warning? – A responsible approach to AI
In an interview with "60 Minutes" host Scott Pelley, Hinton was asked whether humanity knows "what it's doing." To this he replied, simply, "No."
This sentiment is shared by the more than 27,000 signatories, including prominent tech leaders, who have to date signed an open letter calling for a halt to the research and development of AI "more powerful than GPT-4". The letter specifically called for a six-month pause, citing "profound risks to society and humanity." Such a pause has, predictably, not happened.
“If you’re going to live in a capitalist system, you can’t stop Google [from] competing with Microsoft,” explains Hinton, adding that he doesn’t resent Google, his former employer, or the AI research he conducted there. “It’s just inevitable in the capitalist system or a system with competition between countries like the U.S.”
Hinton's views are clearly multifaceted. While the veteran scientist advises caution through the "period of great uncertainty" ahead, he also believes that the dangers of AI chatbots and the like are outweighed by their potential good. Moreover, the cognitive psychologist believes his life's work was inevitable. "If I hadn't done it, somebody else would have," he reasons.
I think people need to understand that deep learning is making a lot of things, behind-the-scenes, much better.
Geoffrey Hinton
Hinton has publicly speculated that AI will soon replace roles like paralegals and personal assistants, among other "drudge work", with more professional roles sure to follow. The huge benefits of AI technology will come at the cost of jobs for many individuals. That shift, it seems, has already begun.
Google's chief scientist, Jeff Dean, voiced Google's appreciation of Hinton's contributions to the company over the past decade.
"I've deeply enjoyed our many conversations over the years. I'll miss him, and I wish him well!" reminisced Dean.
Final thoughts
This "period of great uncertainty" is almost without precedent. It could be compared to the Industrial Revolution and its impact on mechanization, production at scale, and the economy as a whole. More aptly, we might compare it to the dot-com boom and how that reshaped communication, information technology, and digital distribution. I'd hesitate to compare it to how cavemen felt upon discovering fire, however; let's not get carried away.
In all seriousness, AI has a peculiar trait that those other revolutions did not: a relentlessly compounding capacity for self-improvement.
Where those other periods in world history increased the rate of productivity in related fields, AI is doing so to itself: the pace of AI research and development can be accelerated, at an increasing rate, by AI.
Not only can AI learn independently (and, indeed, learn how to learn), it can also write computer code. It can therefore help program other AI systems, in effect conducting research on itself; it is not conceptually impossible for an AI to build another AI more intelligent than itself.
A system that can self-improve is not to be taken lightly. Never before have we had a technology so fundamentally self-accelerating. Scientists already observe unexpected capabilities in AI systems, known as "emergent behaviors". It is clear that we, as a public, cannot rely on technology leaders to do the right thing, at least not when left unregulated.

The vast amounts of data being fed into the learning algorithms of big tech's AI chatbots are reason enough for regulation. But beyond that, how this technology stands to impact our world, and even our understanding of self-awareness and the human mind, is one of the most pressing subjects of our lifetimes.