Earlier this year, two of Silicon Valley’s best-known executives – Elon Musk and Mark Zuckerberg – engaged in a public war of words about the scale of the threat artificial intelligence poses to the future of the human race.
Much to Zuckerberg’s ire, Musk, the CEO of Tesla and SpaceX, warned politicians that AI is humanity’s greatest existential threat, and that they must regulate the industry before it’s too late to do so.
Days later, the Facebook founder described Musk as a “naysayer” who was drumming up doomsday scenarios. “I just, I don’t understand it,” he said. “I actually think it is pretty irresponsible.”
To Stuart Russell, a professor at the University of California, Berkeley and a world leader in AI research, Zuckerberg’s reaction was reflective of a common attitude in some quarters of the tech industry.
“I see this a lot. People who are pro-technology feel attacked when you say there could be this risk,” he tells NS Tech at IP Expo Europe. “Really I don’t agree with what he [Zuckerberg] is saying. He characterises this as anti-AI; Elon is the last person who is anti-AI. It’s sort of like saying if someone points out that nuclear reactors can have meltdowns, they’re anti-physics. It just doesn’t make sense.”
In January 2015, Russell drafted an open letter calling for researchers to build artificial intelligence that is robust and beneficial, rather than simply all-powerful. “Our AI systems must do what we want them to do,” he wrote.
After the letter was published online, thousands of people, including senior researchers at some of Silicon Valley’s most influential firms, added their signatures. It prompted Musk, the 37th signatory, to provide funding for researchers dedicated to creating artificial intelligence that is beneficial to society. He went on to formalise that mission by co-founding OpenAI in December of the same year.
In September 2016, Amazon, Facebook, Google, DeepMind, Microsoft and IBM banded together to form Partnership on AI: an industry consortium committed to establishing best practices for artificial intelligence and educating the public about its potential.
Given what it inspired, future generations may look back on Russell’s letter as a defining moment in the early development of AI. It’s perhaps unsurprising then that the British-American scientist is now broadly optimistic about what the technology could achieve.
“The principles [of Partnership on AI] are pretty clear, somewhat idealistic,” he says. “As we develop really valuable AI, we’re going to share the benefits and share the technology. Initially that sounds crazy – companies aren’t going to share that – but the point is that if you have really powerful AI, to the first approximation you solve the problem of scarcity. You solve the resource constraints that the world has operated under since the beginning of time, and it doesn’t matter.”
Facebook, Google, Amazon, Apple and Microsoft are spending billions of dollars on the race to solve intelligence. Given the size of their investments and the intensity of the competition, how likely is the victor to share the spoils of their labour?
“I think they would,” says Russell. “Honestly, if everyone is as wealthy as they want, it doesn’t matter if someone else is a thousand times wealthier. Who cares? I don’t care how wealthy Warren Buffett is so long as there isn’t resource competition. I think that one of the reasons to pursue AI is this very general notion that by eliminating competition for resources, you eliminate the problems that the world has.”
But while Russell is optimistic about the benefits AI could confer, he stresses the significance of building machines that are more than just intelligent. “What we want is machines that are beneficial, to us in particular,” he says in his keynote. “But it needs to be provably beneficial. If you give an objective to a machine, you’re giving it a reason to stay alive. If a machine is asked to make a cup of coffee, it will know that it can’t do that if it’s switched off – the exact plot to 2001: A Space Odyssey.”
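Russell’s off-switch point can be made concrete with a toy decision problem. The sketch below is illustrative only, not his formalism; all names and numbers are hypothetical. It shows why an agent given a fixed objective, and facing some chance of being switched off before completing it, rationally prefers to disable its own off-switch:

```python
# Toy illustration of the "incentive to stay alive" problem:
# an agent rewarded only for delivering coffee compares its options.

def expected_value(action, p_switched_off):
    """Expected task reward: 1 for delivering coffee, 0 otherwise."""
    if action == "disable_off_switch":
        return 1.0                      # coffee is guaranteed
    if action == "allow_off_switch":
        return 1.0 - p_switched_off     # shutdown forfeits the reward
    raise ValueError(action)

# With any nonzero chance of being switched off, the naive
# objective-maximiser prefers to disable its own off-switch.
best = max(["disable_off_switch", "allow_off_switch"],
           key=lambda a: expected_value(a, p_switched_off=0.1))
print(best)  # disable_off_switch
```

The remedy Russell advocates is provable benefit rather than a fixed objective: if the machine is uncertain about what the human actually wants, being switched off carries information about its own mistakes, and deferring to the human becomes the rational choice.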
He urges governments, businesses and the education sector to start preparing now for the disruption AI will cause: “In the short-term, it’s clear that we face a shortage of scientists and robot engineers and people with all the necessary technical training, but that’s not a long-term position, because we don’t need a billion data scientists and robot engineers, not even maybe one per cent of that.”
Instead, Russell says the most valuable attribute of any human is humanity itself. “Humans have a particular capability to improve each other’s lives in a whole host of ways.” Some jobs, he says, already work this way, but he argues that we lack the scientific understanding of how to do them well.
“If you teach someone how to enjoy literature, that’s a gift for life that dramatically improves their quality of life and at the moment it’s invisible in GDP figures and we have very little idea of how to do it effectively,” says Russell. “We don’t invest in research on these questions; we invest in research on how to make cell phones and plastic bottles of water, but these are things humans will be doing in the long run. We have to make sure that they’re doing it with sufficient scientific foundations and training and professionalisation that they add value.”
Russell isn’t willing to speculate on when general intelligence will be solved, but, like Musk, is in favour of regulation. However, he stops short of calling for regulation of basic research, arguing that it would be impossible to police. “The only regulations that would make sense are regulations of specific uses of AI, such as self-driving cars.”
During the interview, Russell keeps coming back to the parallels between the AI sector now and the early nuclear industry. The latter, he says, offers AI visionaries such as Zuckerberg a lesson they must heed: “Let’s face it; the nuclear industry was in an extremely adversarial relationship with a lot of people who were worried about nuclear safety.
“They started to believe their own propaganda, which is that nuclear power is completely safe and completely clean and has zero pollution and zero risk, and then Chernobyl happened because they weren’t paying enough attention to the risks and that wiped out their own industry. It wasn’t a good outcome for anybody.”