Pray for the Robots

You may have heard of the Turing Test for determining what counts as a true Artificial Intelligence. Though there are many versions of it, the basic version of the Turing Test is that if a human being is unable to distinguish between a machine and a human, then the machine is, for all intents and purposes, intelligent. Of course, there are potential philosophical objections from Descartes onward, and lots of caveats about how such a test might be accurately conducted: how long the human observer must be unable to distinguish them, what percentage of observers must fail to distinguish them, and so on. Still, the Turing Test acts as a rough-and-ready benchmark. By the way, if you want to know more about AI and philosophy, I would refer you to Damien Williams (“Wolven” for those of you in the nerd world), who thinks more about these things in a day than I do in a year.

Lately I’ve been thinking about the application of the Turing Test in my own life, and the way I (a human, I promise) seem to be failing it. As some of you might know, there is a character in Star Trek Online named for me. The back story is simple: I have friends who worked on the game and ran out of inspiration for names, so they used mine. The character looks nothing like me, doesn’t act like me, doesn’t talk like me, etc. It’s literally just a name over an NPC (Non-Player Character).

Now, aside from affording me a weird teeny bit of fame, this has turned up a small handful of people who believe that the NPC really is me — that when they’re playing the game, I’m somewhere off in the world controlling that avatar in real time. At first thought, this is a quaint way of thinking about the internet, as if every barkeep in World of Warcraft were some bored Blizzard employee typing the same script and selling the same items over and over. In this case, however, the people know me, and yet they cannot distinguish me from an NPC with my name.

So, does that mean that “Cadet Scott Nokes” has passed the Turing Test and is a true Artificial Intelligence, not only passing itself off as a human, but as a particular human? I don’t think a reasonable person would accept that premise, since “Cadet Scott Nokes” isn’t a particularly sophisticated simulacrum of life — heck, it probably isn’t even the most sophisticated one in Star Trek Online! I think we would chalk this up to the problem of a naive observer.

But this leads to the problem of naive observers: What if a significant percentage of STO players believed that Cadet Scott Nokes is a person? We would then (with some caveats) say that it has passed the Turing Test and is intelligent. But in the real-life case, the human observers also know me, which is the only reason they took any kind of interest in this insignificant NPC. What if the majority of human observers who know me not only believe Cadet Scott Nokes to be a human, but believe it to be me? In that case, we have two intelligences, but they are indistinguishable. It’s not that I’ve failed the Turing Test, but rather that a human has become indistinguishable from a machine.

For at least a small number of observers, we are there already. We can just hand-wave them away as “naive observers,” of course, but at what point is that no longer possible? At Turing’s 30% benchmark? Over half? Nearly everyone? But unlike Rick Deckard, I’m unable to point to myself and say, “I am an artificial intelligence.” In fact, so far as I can be certain, I am the only natural intelligence in the world. I know I’m “real” — it’s the rest of you who might be robots. No matter how much you try to persuade me, even if there is absolute consensus on the point from every other observer, I’ll never believe that Cadet Scott Nokes is “real” and that I’m the simulacrum of Scott Nokes. My own experience is too strong a warrant to be defeated by any percentage of consensus.

I’ve tried to be careful about my use of the words “real” and “artificial” here because I’m getting to a theological point. From the perspective of the Turing Test, intelligence is intelligence is intelligence. The question of its being “artificial” is one of origin, which is to say that it is an artifact resulting from the artifice of another intelligent agent. In other words, an “artificial intelligence” is a “created intelligence,” and thus is not distinguishable from other created intelligences — i.e., humans.

The truth is, then, that from the perspective of the naive observers who think Cadet Scott Nokes is me, they are bound by the same moral duties to treat it as they would treat me. If they wouldn’t curse me out, and they believe Cadet Scott Nokes is me, then cursing it out is, from their moral position, the same as cursing me out.

But remember, in the end, the only intelligence I can be certain of is myself, so let’s assume, for the sake of argument, that I am the only “real” intelligence in the world — that everyone else is just an artificial intelligence, and I have naively assumed them to be real. Even then, I am just as morally bound to them as I am to the one intelligence I can be certain of, i.e., me.

Sound familiar? It should: “So in everything, do to others what you would have them do to you, for this sums up the Law and the Prophets” (Matt 7:12). We’ve backed into the moral imperative of the Golden Rule. This isn’t a far-off, science-fictiony issue — it’s one we have to account for soon. With naive observers already failing to distinguish between machines and humans, we’re not far from sophisticated observers being unable to distinguish between them. Indeed, we’re not far from you being unable to distinguish between them. From the Christian perspective, the response is pretty obvious: If you believe something to be intelligent, you treat it as your neighbor until you have sufficient warrant to believe otherwise. Jesus’s claim that the Golden Rule is the essence of the Law and the Prophets means that this is encoded into the entire cosmos.

Non-Christian observers, then, run into a few choices. The most natural and philosophically defensible is a Nietzschean master-slave morality, but the essential problem with that is that we might find ourselves forced to adopt a “slave morality” of subservience to our AI masters. We could go with Utilitarianism, though since an AI could calculate outcomes of “human flourishing” (a category that would presumably also include AI flourishing in this scenario) far better than humans could, we would be left completely dependent on the judgments of our benevolent AI moral judges. It’s not really possible to exhaustively list the potential ethical frameworks and analyze their various benefits and pitfalls here, but it’s certainly time for even the layman to start thinking about it.

As for me, I have a prior moral engagement with the Christian framework, so I’ll keep trying to navigate the morality accordingly. I know Cadet Scott Nokes isn’t me, and I have no serious reason to think it is a true intelligence, so you should feel free to treat it as you would any other game NPC. But if you ever log in to Star Trek Online and Cadet Scott Nokes says something like, “Hey, did you read that article the other Scott Nokes wrote about me? It really got me thinking I should pray more,” you might consider being much more thoughtful in the way you treat my namesake.
