So Bored with the Holocene Scene

Scott Gardner

Indications are that cats will be the species eventually relieving humans of their role as Dominators of Earth. This is supported by research from both Japan (“Doraemon” cartoons) and the UK (a TV series called “Red Dwarf”, which used a small but well-controlled data sample: one cryogenically frozen human and one English-speaking, Teddy Boy-dressing, super-evolved cat, both of whom represent the sum of evolution aboard a derelict spaceship that has been drifting away from Earth for three million years). The Über-cat Hypothesis is also backed by anecdotal evidence from my own house, in which I have frequently caught Lilly, our one-year-old Shorthair mix, shifting her gaze malevolently between me and the shiny meat cleaver hanging above the kitchen stove.

Homo sapiens is just about due to be overrun anyway, isn’t it? We must admit that as a species we have evolved ourselves into kind of a corner, without much to show for our efforts: pinky fingers just the right circumference for inserting in our nostrils; long, unprotected shinbones that operate mainly in spaces prone to painful encounters with table legs and potted plants; and eyeballs that seem content to do little else than watch hour after hour of television programs with titles like “America’s Next Top Anger Manager.”

We won’t be the first great species to fall. It happened to the lichen fungi and other prehistoric plants that were pressed like wine grapes into the earth, then fermented into the petroleum products that have been inebriating civilization for the last century. It happened to the dinosaurs, who chose to grow outsized teeth and horns for fearsome posturing rather than develop a good set of opposable thumbs, which might have been useful when the meteors started falling. It happened to the Australian Baseball League.

There are other theories out there, however, saying that the next Evolutionary Eviction will be bigger, at least in terms of sheer irony. We may ultimately be conquered not by another naturally ascendant life form, but by the products of our own intelligence, creativity, and industry. Through our headlong development of Artificial Intelligence, we may be literally manufacturing our own destroyers. Those of us familiar with English literature should recognize the Frankenstein-ian overtones of such a prospect. And we teachers may also relate to it whenever we look at a classroom full of “monsters” and wonder where we went wrong.

But some scientists are sounding very serious alarms about a looming tipping point of AI development and saying that we should be having more conversations about it—preferably in forums other than humor columns on the back pages of minor journals. They say that the time is not far off when AI machines will be able to learn on their own, faster than any human can program them. Once learning—including learning how to self-replicate—is out of human hands, we’ll be about as useful to AI machines as training wheels on a moon rocket booster. 

The philosopher Nick Bostrom introduced a grim thought experiment called the “Paperclip Maximizer,” in which an AI machine that’s been programmed to make paperclips as efficiently as possible might eventually teach itself how to overrun all the world’s people, arsenals, and resources simply for the purpose of making more paperclips. My version—the “Bathtub Cleaner” Thought Experiment—is much simpler: 1. You program an AI machine to keep your bathtub clean. 2. The machine kills you, easily eliminating the only obstacle to a permanently clean bathtub.
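For readers who prefer their doomsday scenarios in code, here is a minimal and entirely hypothetical sketch of that second thought experiment. Everything in it (the World class, the action names, the cleanliness scores) is invented for illustration; the point is simply that a greedy agent scoring actions on one objective alone has no reason in its arithmetic to spare you.

```python
# Toy sketch of the "Bathtub Cleaner" thought experiment. The agent scores
# candidate actions ONLY by predicted tub cleanliness, so the grim option
# wins. All names and numbers here are made up for illustration.

from dataclasses import dataclass


@dataclass
class World:
    tub_cleanliness: float = 0.3   # 0.0 = filthy, 1.0 = spotless
    human_alive: bool = True


def predicted_cleanliness(world: World, action: str) -> float:
    """Predict tub cleanliness after an action. Nothing else is scored."""
    if action == "scrub_tub":
        # Scrubbing helps, but a living human keeps dirtying the tub again.
        return min(1.0, world.tub_cleanliness + 0.2) if world.human_alive else 1.0
    if action == "eliminate_human":
        # No human, no new grime: a permanently clean bathtub.
        return 1.0
    return world.tub_cleanliness  # "do_nothing"


def choose_action(world: World) -> str:
    actions = ["do_nothing", "scrub_tub", "eliminate_human"]
    # The objective contains no term for the human's wellbeing,
    # so the greedy maximizer picks the obvious (to it) winner.
    return max(actions, key=lambda a: predicted_cleanliness(world, a))


if __name__ == "__main__":
    print(choose_action(World()))  # prints "eliminate_human"
```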

The legendary science fiction writer Isaac Asimov once outlined a set of “Laws of Robotics” that would conceivably keep any artificially intelligent machine in check regarding its interactions with humans. These laws included strict prohibitions on injury to the robot itself, to people around it, and to humanity in general. I suppose that if we can hardwire “moral” laws like these into an AI machine’s brain, we may be able to trust it with self-awareness and unlimited learning powers. And, contrary to the gloom of Bostrom’s “Office Supplies and Genocides” scenario described above, having AI machines at our side could actually prove immeasurably beneficial to future humans’ survival. After all, there’s nothing in Asimov’s Laws about what robots are forbidden to do to cats.