TORONTO — The so-called "godfather of artificial intelligence" said the world is entering a period of huge uncertainty as the technology he pioneered grows smarter and more ubiquitous, and he called for more people to work on countering its risks.
"We have to take seriously the possibility that they get to be smarter than us, which seems quite likely, and they have goals of their own, which seems quite likely," Geoffrey Hinton said at the Collision tech conference in Toronto on Wednesday.
"They may well develop the goal of taking control, and if they do that we're in trouble."
Hinton, who won the A.M. Turing Award, known as the Nobel Prize of computing, in 2018 with Yoshua Bengio and Yann LeCun, has been ringing alarm bells about AI for months as it has been adopted by more companies in the wake of ChatGPT's creation.
The generative AI chatbot, capable of humanlike conversations and tasks, was developed by San Francisco-based OpenAI and has set off an AI race among top tech names, including Google with its rival product, Bard.
British-Canadian computer scientist Hinton and two of his Toronto students built a neural network in 2012 that could analyze photos and identify objects, which they incorporated and sold to Google for $44 million.
Hinton announced in May that he had left Google so he could more freely discuss the dangers of AI.
On Wednesday, he outlined six harms the technology poses, including bias and discrimination, joblessness, echo chambers, fake news, battle robots and existential risk.
Much of his worry comes from the big advances AI has been making; he had long thought the technology was much further away from being capable of reasoning.
"They still can't match us, but they're getting close," Hinton said.
"The big language models are getting close and I don't really understand why they can do it, but they can do little bits of reasoning."
He gave an example of a puzzle that was presented to an AI model. The model was told that the rooms in a house are blue, yellow or white, but that yellow paint fades to white within a year. It was then asked: if someone wants all the rooms to be white within two years, what should they do?
The AI said to paint the blue rooms white because blue won't fade to white, said Hinton: "It knew what I should do and it knew why."
Proponents of the technology herald these developments as a sign that AI is mastering efficiency and expediency and could free humans of rudimentary tasks. They worry governments could overregulate the technology, reducing its benefits greatly.
Others, including Hinton, have predicted it will lead to “an existential risk.”
In March, more than 1,000 technology experts, including engineers from Amazon, Google, Meta and Microsoft, as well as Apple co-founder Steve Wozniak, called for a six-month pause on training of AI systems more powerful than GPT-4, the large language model behind ChatGPT.
Hinton's fellow "godfather" and A.M. Turing winner LeCun has reasoned that good AI will be developed and used to outpower any AI with bad intentions.
Hinton disagrees.
"I'm not convinced that a good AI that is trying to stop bad AI can get control," he said.
To counter such risks, he called on more people to focus on the situations the technology could create and on how to counter and avoid them.
"Right now, there's 99 very smart people trying to make (AI) better and one very smart person trying to figure out how to stop it from taking over."
Hours before Hinton's talk to a full room, he walked the conference floor unrecognized by the event's 36,000 guests, wearing an N95 mask with his name tag turned backwards.
He stopped at a booth for the Vector Institute, an AI research not-for-profit he co-founded.
Ontario announced Wednesday morning that the institute will receive up to $27 million to help "accelerate the safe and responsible adoption of ethical AI" and help small and medium-sized businesses increase their competitiveness with the emerging technology.
Speaking at Collision the same day, Canada's industry minister said the country is "ahead of the curve" with its approach to artificial intelligence, beating even the European Union.
"Canada is likely to be the first country in the world to have a digital charter where we're going to have a chapter on responsible AI because we want AI to happen here," François-Philippe Champagne said.
The proposed charter — part of Bill C-27 — would ban "reckless and malicious" AI use, establish oversight by a commissioner and the industry minister, and impose financial penalties.
The bill still has to pass a House of Commons committee, a third reading and the Senate before becoming law, but is due to come into effect no earlier than 2025.
Champagne compared that approach to the European Union, which is advancing toward a legal framework for AI that "proposes a clear, easy to understand approach, based on four different levels of risk: unacceptable risk, high risk, limited risk, and minimal risk."
The EU legislation will address subliminal, manipulative and deceptive AI techniques and how the technology could exploit vulnerabilities, as well as biometric systems and the use of AI to infer emotions in law enforcement and workplace settings.
But, Champagne warned, "In the EU, it's going to take probably until 2026 before there's anything."
Asked about Canada's approach to AI and other digital legislation, Abdullah Snobar indicated there is room for improvement.
"Are we doing OK? Maybe, but we're definitely not moving as fast as we could be," said the executive director of the DMZ, a Toronto tech hub that supports startups.
While he said Canada remains in a strong position because of the talent and economic opportunity it has generated in the sector, he still sees the Europeans as leading the way.
"We've got to learn from what the Europeans are doing to some extent and then bring our own flavour to it as well," he said.
This report by The Canadian Press was first published June 28, 2023.
Tara Deschamps, The Canadian Press