Bot or souled?

Scott Gardner

At some point in childhood you probably developed suspicions that one or more of your schoolteachers were not human. This instinct was of the same type as the one triggered when the first non-maternal humanoid lifted you out of the bassinet, inducing one of many blood-curdling baby screams you would hurl at the world over the next few years. As we get older our skill in detecting replicants among us is tested and refined, but even as late as high school we may still be unprepared. Does math teacher Mrs. Legion’s icy gaze really induce comas in students? Do Mr. Dyson’s torturous science tests betray him as the Terminator, or at least the Midterminator? Is Mrs. Rossum’s robotic demeanor connected with the fact that she steals batteries from all the Nintendos and cellphones she confiscates?

Robot teachers may seem improbable, but it’s true that we have surrendered many daily routines to automated devices. Coffee makers can be programmed to start brewing before we wake up. Machines take our money in the parking lot, and thank us for it with recorded voices. Drones hover over our homes, delivering giddily awaited cardboard-encased online purchases. (“Mommy, where do babies come from?” “Amazon. And Prime customers get a free e-book to boot!”) We can even program machines to do things we never needed done in the first place. A recent Harvard creation can fold itself up, origami-like. It starts out relatively flat, turns in on itself a few times, and ends up in 3-D with legs. It’s like a lava lamp for techno-military wonks. And—finally!—there are androids that will do yoga on your kitchen table so you don’t have to.

Not all automation is as harmless as yoga-bots, though. People’s livelihoods have been affected. Take my parking lot example above. The garage near my building used to hire a guy to stand in a booth, take money, and say thank you. Now a motorized box does it all, while this guy sits in an office upstairs doing nothing until the money-taking/thanking box breaks down (typically when I’m using it). He comes down with a screwdriver to fix the box—and I have to thank him for doing it.

Robots may become so versatile that they can replace us in every way. In 1950, mathematician Alan Turing proposed a test to see if a computer brain could pass as “human”. The Turing test involved a person asking questions of two unseen respondents, one human and one computer. The questioner would try to determine which of the two respondents was the soulless machine. (A popular TV show called “The Dating Game” was based on this concept.)
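If you like, the whole setup fits in a few lines of Python. Everything below (the function names, the toy parrot-bot, the toy judge) is my own illustrative invention, a caricature of the guessing game rather than anything from Turing’s paper:

```python
import random

def turing_round(judge, human, machine, question):
    """One round of the guessing game: the judge sees two unlabeled
    answers and must pick which one came from the machine."""
    answers = [("human", human(question)), ("machine", machine(question))]
    random.shuffle(answers)  # hide which respondent is which
    pick = judge(question, answers[0][1], answers[1][1])  # judge returns 0 or 1
    return answers[pick][0] == "machine"  # True if the machine was unmasked

# A machine that just parrots the question back is easy to catch.
human = lambda q: "Hmm, I think it depends."
machine = lambda q: q  # echoes the question verbatim
judge = lambda q, a, b: 0 if a == q else 1  # flags the echo as the bot

caught = sum(turing_round(judge, human, machine, "What is candy?") for _ in range(100))
print(caught)  # the echo-bot is unmasked every time
```

Real chatbots, of course, are harder to unmask than my echo-bot, which is the whole point of the test.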

Little did Turing know that digital interaction on the Internet would one day be full of informal Turing tests. How do you know that the email from a Zimbabwean university offering two-month PhDs was written by a living human being? You don’t. There is a website, though, where you can practice distinguishing human from nonhuman dialogue by “conversing” with an ever-expanding database of utterances. It’s called Cleverbot, and below is an excerpt from an actual conversation I recently had with it:

User (me): We met once, at a tech show in Vegas.

Cleverbot: I don’t know what is candy...

User: Uh...candy is sweet.

Cleverbot: What if sweets is candy?

User: Then “candy is sweet” would be a tautology.

Cleverbot: Why? Can’t two bots have a civil conversation with each other for no reason other than to be sociable?

So, not only did I suss out that Cleverbot was an inhuman “bot”, but the program also reached the same conclusion about me. Very clever! I don’t know if Turing ever considered machine-to-machine interaction, but that might make a good question on a high school science test: “If you and I were both robots, and you were to fail this exam, with whom would the fault lie? A: you as student; B: me as educator; C: our weak and mortally flawed human creators; D: I don’t know what is candy...”