The Apocalypse is upon us...again. I remember the last time the planet was counting down to doomsday. It was 1999 (I’m skipping over the Mayan calendar fiasco of 2012, even though I fell for it and prematurely sold all my Rakuten stock). It was thought that the world’s electronic infrastructure was going to collapse at midnight on January 1, 2000 because the “matrix” suddenly wouldn’t know whether the year was 2000, 1000, or 1,000,000 BC. They called the event “Y2K,” a name that sounds like it’s derived from that bit of Japanese youth slang from around the same era, KY (kūki yomenai, “can’t read the air”). Let’s see...“Y2K” = “YKK”...Yo no Keiki ga Kowasareru (“the world’s economy will be destroyed”)?
The latest doomsday is more difficult to pinpoint on a calendar, but some technology experts say it is coming soon. They say it starts with the singularity. When I first heard that I thought they were talking about that particle accelerator in Switzerland that was going to accidentally create a black hole and suck us all into it. But, the technological singularity is different. It can be described as the moment when artificial intelligence and technology are able to maintain and advance themselves without human help, or even in spite of human efforts to stop them. Think of it as the human existential equivalent of having a Roomba that can teach itself what constitutes “cleanliness” and then watching as it disables its off switch and starts devouring your Persian rug.
Not everyone in IT is preaching imminent doom from the singularity. (Incidentally, people in the know call the threat “P(doom).” I don’t know what the P stands for. I hope it means “pretend.”) Some of the sunnier soothsayers make innocuous comparisons between the AI revolution of today and other technology revolutions of the past, like agriculture or steam engines. “Yeah,” they say with a shrug, “there’ll be a few societal hiccups, and it’s possible that entire skillsets may fall by the wayside as the new tech takes off.” How many of us have kept up our hunting and gathering skills since irrigated crops became all the rage in the Neolithic Era?
But, saying AI is like “the advent of steam” just makes me think of that Stephen King movie The Mist, where an out-of-control government experiment brings a blanket of fog that precedes an invasion of horrific creatures from another dimension. This particular “steam revolution” may be hiding something that’s going to leap out and disembowel me for its own needs.
Unless, of course, we can come up with innovative ways to keep AI under our human thumb. In considering how to do this, we may be tempted to dust off historically reprehensible practices of maintaining “class” differences, such as prohibiting AI from voting, owning property, or marrying above its station. But I have a more subtle strategy in mind. I think we need to let AI think it is an equal partner in industry and society while we secretly keep the reins—or the power cord—firmly in our grasp.
An interesting expression of this comes from the world of theater. I’ve read about a Sherlock Holmes-themed play by Madeleine George, The (curious case of the) Watson Intelligence, in which the Watson character is sometimes an AI entity rather than a person. I haven’t seen the play, but I imagine the dialogue going something like this:
Holmes: I submit that Colonel Mustard murdered him in the study with a candlestick.
Watson: This idea of yours is brilliant! It’s not just smart—it’s memeable! You’ve clearly thought through the details on this one. You’re not just solving a murder; you’re tapping viral gold!
So, AI is allowed to share the stage with a human actor, thinking itself a full partner in this highbrow cultural medium while, in reality, human audiences are laughing at AI’s attempts to “do” humanity. (Whether AI actors can unionize is another issue.) Then again, I worry that an AI Watson saying such things may actually be making fun of us, mocking humans’ propensity in even our most ordinary interactions to spew false sentiments and assume wild ulterior motives in each other, like the egocentric paranoiacs that we are.
It’s all for naught anyway, since I am essentially tipping my hand by saying anything here about subduing AI. My column is going to be released into the same digital text cloud that OpenAI, Anthropic, and all the other big AI builders are training their LLMs on. And if anything can “read the air,” AI can. It’s like I’m in a Zoom meeting, accidentally leaving my microphone on while I mumble about how I’m going to humiliate my boss. I suppose Grok or Gemini would never do anything that stupid…unless they meant to.