The arrival of a new generation of artificial intelligence chatbots and apps has fueled fears that humans may soon become obsolete, or worse, the victims of a Skynet scenario, in which our AI creations become sentient and turn against us. Even some of the biggest AI boosters recently called for a six-month moratorium on further development until we can better assess the risks.

The perils posed by today’s technology may well be new and noteworthy, but our anxiety is not. For two centuries, humankind has fretted about what might happen if we endow our creations with intelligence, fearing they will go rogue, if not replace us entirely.

The idea that artificial helpers could rebel has many antecedents, including different variations on the story of the sorcerer’s apprentice, popularized by Johann Wolfgang von Goethe (and later, Walt Disney), as well as the Jewish golem, a mythical clay creature brought to life by mystical incantations. Though folk tales held that most golems served humanity, more secular versions of the story circulating in early 19th-century Prague depicted a far more disobedient, destructive monster.

This version of the golem likely informed one of the first modern visions of artificial life and intelligence: Mary Shelley’s Frankenstein, published in 1818. Unlike Hollywood’s rendering of the story, Shelley’s original tale features a hyper-intelligent creature who absorbs the world around him, swiftly learning how to speak, read poetry and grasp human emotions. But humans have no appreciation for those feats, seeing only a monster, so the “monster” eventually turns on his creator.

Shelley’s story inspired what Isaac Asimov would derisively dub the “Frankenstein complex” — the fear that our doppelgangers will become sentient and replace or destroy their human creators. Still, Shelley’s monster was a thing of flesh and blood, not steel and circuitry. It was not a murderous android.

How, then, did we get from Frankenstein to The Terminator? Blame Charles Darwin. When Darwin’s first writings on evolution appeared in 1859, it became clear that humanity, far from walking out of the Garden of Eden fully formed, was instead the product of eons of evolution. This raised the equally troubling possibility that humanity, like other long-gone species, might well be supplanted by something superior.


From there it was only a short conceptual leap to imagine that machines, already stronger than humans, might one day become smarter, too. Four years after the publication of Darwin’s On the Origin of Species, British writer Samuel Butler published an essay under a pseudonym that anticipated virtually all of our current anxieties about AI run amok.

In “Darwin Among the Machines,” Butler observed that “we are ourselves creating our own successors . . . we are daily giving [the machines] greater power and supplying by all sorts of ingenious contrivances that self-regulating, self-acting power which will be to them what intellect has been to the human race.” When that process came to its culmination, predicted Butler, “man will have become to the machine what the horse and the dog are to man.”

Butler’s dark vision of a future dominated by immortal, hyper-intelligent machines would resurface in his widely read utopian novel, Erewhon. The novel, its title an anagram of “nowhere,” told the story of a lost primitive land where technology was conspicuously absent. The narrator eventually learns that the evolution of machines had been deliberately halted and reversed in the distant past to prevent “the ultimate development of mechanical consciousness.” The inhabitants of Erewhon had concluded that a six-month moratorium wouldn’t do.

Not every fictional society was so lucky. In the late 1880s, British novelist Reginald Colebrooke Reade wrote twin dystopian novels that described a Terminator-style scenario, complete with intelligent machines that revolt against the human race, nearly driving it to extinction. These works were products of their age: the omniscient, Skynet-style machine intelligence begins with a railroad locomotive that becomes sentient, eventually enlisting all machines in its revolution against humanity.

These and a handful of other works of science fiction anticipated the more famous work of Prague playwright Karel Čapek, whose play R.U.R. gave us the word “robot.” Čapek’s story told of the rise and fall of Rossum’s Universal Robots, a firm that creates humanoid machines that become ever more lifelike. Čapek described his play as “a transformation of the Golem legend into modern form . . . Robots are Golem made with factory mass production.”

In the play, the robots realize they are superior to their makers and opt to kill off the humans, becoming increasingly skilled at the task over time. At one point, one of the humans, reading a threatening missive from the robots, marvels at the machines’ growing facility with language. “Good heavens,” he declares, “who taught them these phrases?”


Čapek’s play, translated into many languages, spawned an entire dystopian genre of science fiction in which intelligent machines, created to serve humankind, revolt against their masters. As time went on, additional ingredients helped flesh out fears of artificial intelligence still further.

The first new ingredients were the development of the computer and associated research into artificial intelligence. Anxieties about these developments obsessed science-fiction writers in the postwar era. Some, like Asimov, wanted to imagine a world where AI would be servant, not master. But most writers, like Frank Herbert, who published Dune in 1965, embraced the Frankenstein complex.

Herbert’s sprawling epic, set thousands of years in the future, described a world after the “Butlerian Jihad” – a war against thinking machines. This resulted in an Erewhonian world where the one overriding law declared, “Thou shalt not make a machine in the likeness of a human mind.”

Hollywood got into the act as well with Stanley Kubrick’s 2001: A Space Odyssey, starring a murderous computer. But Kubrick’s HAL was a piker compared to the next generation of fictional sentient computers. More than a decade before Skynet became sentient and destroyed humanity in the Terminator franchise, Colossus: The Forbin Project told the “frightening story of the day man built himself out of existence” by creating “Colossus,” a super-intelligent computer given control over the nation’s nuclear arsenal.

Colossus — the name a nod to Britain’s wartime code-cracking computer — quickly becomes self-aware and hooks up with its Soviet counterpart, which has also become sentient. Together the computers threaten to nuke the world unless they’re put in charge of mankind. The humans try to rebel but fail, becoming the dependents of all-powerful computer babysitters armed with nukes.

Though our angst about AI has grown even creepier in recent years — here’s looking at you, M3GAN — what’s far more interesting is how little our thinking has changed for close to a century. All the anxieties now making the rounds have a long and storied history, from fears of human obsolescence to predictions that AI will become a willful, malevolent force.

The dramatic advances in artificial intelligence over the past year have edged us closer to the kinds of machines envisioned in many of these apocalyptic stories. You may or may not find it comforting that humankind has been pondering the possibility of these frightening outcomes for more than a century, but knowing our deep history of skepticism at least helps put current reactions to AI in perspective. And that’s something that, for now at least, only a human can do.

Stephen Mihm, a professor of history at the University of Georgia, is coauthor of “Crisis Economics: A Crash Course in the Future of Finance.”

