Elon Musk, the mastermind behind SpaceX and Tesla, believes that artificial intelligence is “potentially more dangerous than nukes,” imploring all of humankind “to be super careful with AI,” unless we want the ultimate fate of humanity to closely resemble Judgment Day from Terminator. Personally, I think Musk is being a little hyperbolic — after all, we’ve survived more than 60 years under the threat of thermonuclear mutually assured destruction — but still, it’s worth considering Musk’s words in greater detail.
Musk made his comments on Twitter yesterday, after reading Superintelligence by Nick Bostrom. The book deals with the eventual creation of a machine intelligence (artificial general intelligence, AGI) that can rival the human brain, and our fate thereafter. While most experts agree that a human-level AGI is mostly inevitable by this point — it’s just a matter of when — Bostrom contends that humanity still has a big advantage up its sleeve: we get to make the first move. This is what Musk is referring to when he says we need to be careful with AI: We’re rapidly moving towards a Terminator-like scenario, but the actual implementation of these human-level AIs is down to us. We are the ones who will program how the AI actually works. We are the ones who can imbue the AI with a sense of ethics and morality. We are the ones who can implement safeguards, such as Asimov’s three laws of robotics, to prevent an eventual robocalypse.
In short, if we end up building a race of superintelligent robots, we have no one but ourselves to blame — and Musk, sadly, isn’t too optimistic about humanity putting the right safeguards in place. In a second tweet, Musk says: “Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.” Here he’s referring to humanity’s role as the precursor to a human-level artificial intelligence — and after the AI is up and running, we’ll be deemed superfluous to AI society and quickly erased.
Personally, I think Musk is underestimating the tenacity and joie de vivre of humanity. Ever since the detonation of the first atomic bombs over Hiroshima and Nagasaki, and the later development of thermonuclear ICBMs that can wipe out whole countries from the other side of the world, humanity has lived with the assumption that the end of the world was just around the corner. But, hey, we’re still here — and despite occasional geopolitical spats, thermonuclear mutually assured destruction seems less likely by the day.
In other words, yes, it’s possible that we’ll create a superintelligence that turns on its creators — but given how keen humans are to survive, it’s unlikely. Yes, it’s possible that some rogue programmer will create a genocidal AI in his basement — but given the monumental resource requirements and the interdisciplinary specialization necessary for the creation of a human-level machine intelligence, this too seems unlikely. One argument against the proliferation of nuclear weapons is that, while developed nations are restrained by the prospect of mutually assured destruction, crazy dictators wouldn’t think twice about pushing the big red button. Again, though, despite a lot of close calls, the Atomic Age has still only ever seen two nuclear bombs detonated in anger — and that was to end a war, rather than start one. While I agree it’s possible that a “lone crazed programmer” might in theory develop a genocidal superintelligence, just like the crazy dictator, that scenario sells short humanity’s most basic urge to survive.
In any case, hopefully Musk will heed his own words of caution. Musk went on the record last year to say that Tesla would put a self-driving car on the road within three years — a task that requires fairly deep and complex artificial intelligence. Of all the Silicon Valley behemoths, Musk’s companies and Google must be among the most likely to develop human-level machine intelligence.
Featured image by Art Streiber, for Wired