Artificial intelligence pioneer Geoffrey Hinton surprised many on Monday when he revealed he’d quit his job at Google, where he had worked for the last decade on AI projects.
Often referred to as “the godfather of AI” for his groundbreaking work that underpins many of today’s AI systems, British-born Hinton, now 75, told the New York Times that he has serious concerns about the speed at which companies like OpenAI, with its ChatGPT tool, and Google, with Bard, are developing their products, especially if that speed comes at the cost of safety.
So worried is Hinton that he even said that a part of him regrets his past efforts in the field of AI, the Times reported.
As the report points out, generative AI tools are already moving toward replacing human workers, and the technology can also be used for creating and spreading misinformation.
Indeed, one of Hinton’s worries is that the internet, whose data is used to train generative AI tools, will be flooded with false information that could cause chatbots like ChatGPT and Bard to spit out endless untruths in a way that sounds believable.
As companies behind the technology release their AI-powered tools for public use without fully knowing their potential, Hinton fears that it’s “hard to see how you can prevent the bad actors from using it for bad things.”
Hinton said that regulating the technology will be difficult because companies and governments can work on it largely in secret. One way to deal with the issue, he added, is to get leading scientists to collaborate on ways to control the technology.
The AI expert said in a tweet on Monday that he decided to depart Google so that he could “talk about the dangers of AI without considering how this impacts Google,” suggesting we’ll be hearing a lot more from him as the technology continues to develop.
Hinton voiced even greater concerns last month when asked in a CBS interview about the likelihood of AI “wiping out humanity,” responding: “That’s not inconceivable.”
Following news of Hinton’s departure, Jeff Dean, Google’s chief scientist, said in a statement: “We remain committed to a responsible approach to AI. We’re continually learning to understand emerging risks while also innovating boldly.”
Hinton isn’t the only expert to express concerns about the new wave of AI technology that’s taken the world by storm.
Perhaps surprisingly, the boss of OpenAI, Sam Altman, recently admitted to being a “little bit scared” of the potential effects of AI technology.
And in March, a letter signed by tech leaders and academics claimed the technology poses “profound risks to society and humanity.”
Published by the Future of Life Institute, with signatories including Elon Musk, the letter called for a six-month pause on development work to allow time for the creation and implementation of safety protocols for the advanced tools.
It added that if handled in the right way, humanity will be able to “enjoy a flourishing future with AI.”