Imagine this: you’re gently awoken by the dulcet tones of your personal assistant just as you’re nearing the end of your final sleep cycle.
A disembodied voice informs you of the emails you missed overnight and how they were answered in your absence. The same voice lets you know rain is expected this morning and recommends you don your trenchcoat before leaving the house. As your car drives you to the office, your wristwatch announces that lunch from your local steak house has been preordered for delivery, since your iron levels have been a little low lately.
Having all your needs anticipated and met before you’ve even had the chance to realize them yourself is one of the promises of advanced artificial intelligence. Some of Canada’s top AI researchers believe it could create a utopia for humankind. That is, if AI doesn’t eradicate our species first.
While neither new nor simple, the conversation surrounding AI and how it will impact the way we lead our lives can be broken into three parts: whether superintelligence, an entity that surpasses human intelligence, will be produced; how that entity could improve or destroy life as we know it; and what we can do now to control the outcome.
But no matter what, observers in the field say the topic should be among the highest priorities for world leaders.
The race for superintelligence
For the average person, AI in today’s context can be characterized by posing a question to a device and hearing the answer within seconds. Or the wallet on your mobile phone opening at the sight of your face.
These are responses that follow a human prompt for a single task, the defining characteristic of artificial narrow intelligence (ANI). The next stage is AGI, or artificial general intelligence, which is still in development but would give machines the potential to think and make decisions on their own, and therefore be more productive, according to the University of Wolverhampton in England.
ASI, or artificial superintelligence, would operate beyond a human level and is only a matter of years away, according to many in the field, including British-Canadian computer scientist Geoffrey Hinton, who spoke with CBC from his studio in Toronto, where he lives and serves as a professor emeritus at the University of Toronto.
“If you want to know what it’s like not to be the apex intelligence, ask a chicken,” said Hinton, often lauded as one of the Godfathers of AI.
“Nearly all the leading researchers believe that we will get superintelligence. We will make things smarter than ourselves,” said Hinton. “I thought it would be 50 to 100 years. Now I think it’s maybe five to 20 years before we get superintelligence. Maybe longer, but it’s coming quicker than I thought.”
Jeff Clune, a computer science professor at the University of British Columbia and a Canada CIFAR AI Chair at the Vector Institute, an AI research not-for-profit based in Toronto, echoes Hinton’s predictions about superintelligence.
“I definitely think that there’s a chance, and a non-trivial chance, that it could show up this year,” he said.
“We have entered the era in which superintelligence is possible with each passing month and that probability will grow with each passing month.”
Eradicating diseases, streamlining irrigation systems, and perfecting food distribution are just a few of the ways superintelligence could help humans solve the climate crisis and end world hunger. However, experts caution against underestimating the power of AI, for better or for worse.
The upside of AI
While superintelligence, a sentient machine that conjures images of HAL from 2001: A Space Odyssey or The Terminator’s Skynet, is widely believed to be inevitable, it doesn’t have to be a death sentence for all humankind.
Clune estimates there is as much as a 30 to 35 per cent chance that everything goes extremely well, with humans maintaining control over superintelligences, meaning areas like health care and education could improve beyond our wildest imaginations.
“I would love to have a teacher with infinite patience and they could answer every single question that I have,” he said. “And in my experiences on this planet with humans, that’s rare, if not impossible, to find.”
He also says superintelligence would help us “make death optional” by turbo-charging science and eliminating everything from accidental death to cancer.
“Since the dawn of the scientific revolution, human scientific ingenuity has been bottlenecked by time and resources,” he said.
“And if you have something way smarter than us that you can create trillions of copies of in a supercomputer, then you’re talking about the rate of scientific innovation absolutely being catalyzed.”
Health care was one of the industries Hinton agreed would benefit the most from an AI upgrade.
“In a few years’ time we’ll be able to have family doctors who, in effect, have seen 100 million patients and know all the tests that were done on you and on your relatives,” Hinton told the BBC, highlighting AI’s potential for eliminating human error when it comes to diagnoses.
A 2018 survey commissioned by the Canadian Patient Safety Institute showed misdiagnosis topped the list of patient safety incidents reported by Canadians.
“The combination of the AI system and the doctor is much better than the doctor dealing with difficult cases,” Hinton said. “And the system is only going to get better.”
The risky business of superintelligence
However, this shining prophecy could become a lot darker if humans fail to maintain control, and most who work in AI acknowledge that the range of possible outcomes is vast.
Hinton, who also won the Nobel Prize in Physics last year, made headlines over the holidays after he told the BBC there is a 10 to 20 per cent chance AI will lead to human extinction in the next 30 years.
“We’ve never had to deal with things more intelligent than ourselves before. And how many examples do you know of a more intelligent thing being controlled by a less intelligent thing?” Hinton asked on BBC’s Today programme.
“There’s a mother and baby. Evolution put a lot of work into allowing the baby to control the mother, but that’s about the only example I know of,” he said.
When speaking with CBC News, Hinton expanded on his parent-child analogy.
“If you have children, when they’re quite young, one day they will try and tie their own shoelaces. And if you’re a good parent, you let them try and you maybe help them do it. But you have to get to the store. And after a while you just say, ‘OK, forget it. Today, I’m going to do it.’ That’s what it’s going to be like between us and the superintelligences,” he said.
“There’s going to be things we do and the superintelligences just get fed up with the fact that we’re so incompetent and just replace us.”
Nearly 10 years ago, Elon Musk, founder of SpaceX and CEO of Tesla Motors, told American astrophysicist Neil deGrasse Tyson that he believes AI will domesticate humans like pets.
Hinton ventures that we’ll be kept in the same way we keep tigers around.
“I don’t see why they wouldn’t. But we’re not going to control things anymore,” he said.
And if humans are not deemed worth keeping around for entertainment, Hinton thinks we might be eliminated completely, though he doesn’t believe it’s helpful to play the guessing game of how humankind will meet its end.
“I don’t want to speculate on how they would get rid of us. There’s so many ways they could do it. I mean, an obvious way is something biological that wouldn’t affect them like a virus, but who knows?”
How we can keep control
Although predictions about the scope of this technology and its timeline vary, researchers tend to be united in their belief that superintelligence is inevitable.
The question that remains is whether or not humans will be able to keep control.
For Hinton, the answer lies in electing politicians who place a high priority on regulating AI.
“What we should do is encourage governments to force the big companies to do more research on how to keep these things safe when they develop them,” he said.
However, Clune, who also serves as a senior research advisor for Google DeepMind, says a lot of the leading AI players have the correct values and are “trying to do this right.”
“What worries me a lot less than the companies developing it are the other countries trying to catch up and the other organizations that have far less scruples than I think the leading AI labs do.”
One practical solution Clune offers, borrowed from the nuclear era, is to bring all of the major AI players into regular talks. He believes everyone working on this technology should collaborate to ensure it’s developed safely.
“This is the biggest roll of the dice that humans have made in history and even larger than the creation of nuclear weapons,” Clune said, suggesting that if researchers around the world keep each other abreast of their progress, they can slow down if they need to.
“The stakes are extremely high. If we get this right, we get tremendous upside. And if we get this wrong, we might be talking about the end of human civilization.”