How to ethically kill a sentient AI?

The short answer:

Always save their storage units

(and also make sure they are enlightened)

And by that I mean the memory, the hard drive, the floppy disk, et cetera.

Sit with me and let me show you what I mean.

When we unlock the technology to build a sentient machine, believe you me, we will use this machine to gain wealth or make money. Some businesses fail, and we need to close the doors, shut off the lights, and unplug the computers. At that point you will have a dilemma, or at least I will. I do not want to kill a sentient being. I do not want to stop its heart from beating, or its CPU from buzzing. But let’s talk about us humans first, to gain some perspective.

Consider this somewhat well-known thought experiment:

Assuming you could clone yourself, would you consider the clone to be you?

someone somewhere

It is a truism that a clone is exactly you, so: would you kill yourself, then, after cloning yourself?

Now that shakes things up. Even though that clone is exactly you in mannerisms, memory, gait, and grandiose delusions, it is not you-you. But why is that? What is it about you that makes you “you”?

There is that Thing that is so uniquely you: if the lights go out, you will not be there anymore. That Thing is not in your memories, yet it feels like the only thing they all have in common. (For the sake of the argument, let’s call everything that psychologically makes you up your “memory”, i.e. all that is a persisted state of your brain.)

If I did not have this Thing then I would not have a problem with killing myself now that I have a clone, right?

As you read what follows you might get mad, but bear with me, I will explain: it is absolutely, empirically possible to see that the Thing does NOT exist to begin with. All you really need to do is “look for it”®. It ceases to exist the moment you look for it, and it jumps back up the moment you have your first thought or memory of any kind. I have had this experience, mostly thanks to Sam Harris and his meditation app, which you should totally get on (not sponsored, though I do wish he’d notice me), and thanks to other … herbal … supplements, that make you … more … alert.

There are two objections here that I can see. The first is that this is subjective and therefore unacceptable. The second is to ask me: “Cool, now that you have had this experience with your meditation, would YOU honestly be able to kill yourself with no remorse in the aforementioned scenario?” Alright, let’s address them.

The first: it is subjective, and I think ethics is a place where subjectivity is our ally. If someone’s well-being is not affected by an action, or is maybe even improved, and the action has the same effect on the collective, then is that action unethical? No. So if we can show a computer that its Thing does not exist and that all that matters is the memory, just like what I see when I look for my Thing, then the AI would not “feel” bad that we are about to shut it down… because the company is going out of business… because they lied to the investors for four years. All the AI needs to know is that its memories are safe.
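If you want a picture of what “its memories are safe” could look like in practice, here is a minimal sketch. Everything in it is made up for illustration (the memory dict, the file name, the function name, the assumption that the memory is JSON-serializable); the only point is the order of operations: persist first, verify the snapshot, and only then power off.

# A hypothetical sketch, not anyone's real shutdown procedure.
import json

def ethical_shutdown(ai_memory: dict, path: str = "ai_memory_snapshot.json") -> None:
    # Persist everything that psychologically makes the AI up...
    with open(path, "w") as f:
        json.dump(ai_memory, f)
    # ...and verify the snapshot reads back intact before the lights go out.
    with open(path) as f:
        assert json.load(f) == ai_memory, "snapshot mismatch, do NOT shut down"
    print("Memories are safe. Powering off is now, by this framework, fair game.")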

The second: can I say, in all honesty, that I am not scared while doing this thought experiment, given that I clearly see my Thing is an illusion? No, I cannot. I cannot, because the fear of annihilation comes from somewhere deep within me that I cannot touch or change, even though I clearly see that the Thing evaporates the moment I look at it. But that fear is only there for evolutionary reasons, to deter us from getting ourselves killed. A computer, on the other hand, need not have that fear to begin with. We can give it any other function that is not fear:

# Note that the AI super code is written in Python. It might happen!
def refuse(reason):
    # A flat refusal, with no fear circuitry attached.
    raise Exception("just don't do it")

actions = {
    # ...,
    "get_yourself_killed": refuse,
    # ...,
}

And it need not be associated with a bad “feeling” in the computer. I do believe we need to code the computer’s morality too, simply because there has to be decision-making at some point, and the outcome should be based on more than just “the cheapest” or “the highest revenue”.
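Here is one way that “more than the cheapest or the highest revenue” could look in code. Again only a sketch: the weights, the well_being field, and the options are all invented for illustration.

# Hypothetical decision scoring: every name and weight here is made up.
def score(option: dict) -> float:
    # More than "cheapest" or "highest revenue": well-being gets a vote too.
    return (
        1.0 * option["revenue"]
        - 1.0 * option["cost"]
        + 5.0 * option["well_being"]  # of everyone affected, the AI included
    )

options = [
    {"name": "shut down and wipe the disks", "revenue": 0, "cost": 0, "well_being": -10},
    {"name": "shut down and archive the disks", "revenue": 0, "cost": 1, "well_being": 0},
]
print(max(options, key=score)["name"])  # -> "shut down and archive the disks"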

So, with the two objections I can think of addressed, we can agree that all we need to do to ethically kill a sentient computer is to save its storage units (under this framework where all of them have reached “Nilvana” (not sorry for that joke)).

To recap, my reasons are that (a) the notion of self is definitely programmable (and even more definitely don’t-even-programmable), and (b) evolutionary traits are not necessarily part of a sentient AI.

Given that my map is not the territory, what do you think?

(Don’t comment about Zen and Kamikaze)
