Mike Stone

Mostly The Lonely Howls Of Mike Baying His Ideological Purity At The Moon

Stop Trying to Make Computers Seem Human

05 Sep 2022

I’m a fan of virtual assistants. I have been for years and years. I remember playing with the psychotherapist version of Eliza that’s built into emacs back in the mid-90s. The psychotherapist was not great, and it didn’t take much to know you weren’t talking to a human. The problem is, times have changed and we’ve gotten a lot better at this stuff. We need to stop it.
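If you’ve never poked at an Eliza-style bot, it’s worth seeing how shallow the trick is: a handful of keyword rules that reflect your own words back at you. Here’s a minimal toy sketch of that general technique in Python. To be clear, this is my own illustration of the idea, not the actual doctor.el code that ships with Emacs.

```python
import random
import re

# A minimal Eliza-style responder: a few regex rules plus canned fallbacks.
# Illustrative sketch of the general technique only, not Emacs's doctor.el.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bi am (.+)", re.I),
     ["Why do you say you are {0}?", "Do you enjoy being {0}?"]),
    (re.compile(r"\bmy (.+)", re.I),
     ["Tell me more about your {0}."]),
]
FALLBACKS = ["Please, go on.", "How does that make you feel?"]

def respond(utterance: str) -> str:
    """Return a canned reply based on the first rule that matches."""
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            fragment = match.group(1).rstrip(".!?")
            return random.choice(templates).format(fragment)
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(respond("I feel like nobody understands me"))
    # e.g. "Why do you feel like nobody understands me?"
```

Run it against almost anything and the seams show within a couple of exchanges, which is exactly how the original felt back then.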

Now, I’m sure we’ve all seen Terminator at some point in our lives. Even if we haven’t, Skynet has become ubiquitous enough that people know what it is. I’m not going to talk about that. I think everybody understands the risks of runaway, violent AI. Honestly, AI isn’t even needed for the whole T2 scenario; a weaponized Raspberry Pi could do most of what would be required anyway.

No, what I’m talking about is creating applications that emulate humans to the point we can’t tell the difference between what is a person and what is not.

On the surface, this seems cool, but it raises a lot of philosophical questions. It was recently in the news that a Google employee claimed an unreleased version of their AI software had become sentient. It doesn’t even matter if this is true.

Why doesn’t it matter? Well, in this particular case virtually everybody agrees the claim is false. The AI software isn’t sentient. Regardless, it raises the question of what happens when we can’t tell.

So, a hypothetical future situation. There’s a computer that claims it’s sentient and acts as if it’s sentient. I’m using “computer” here to represent anything. It could be an SBC or a datacenter full of servers performing distinct tasks to create a whole “being”. That part isn’t particularly relevant. So it’s claiming it’s sentient. How do we know if it is?

This delves more into philosophy than anything else, but to me the only answer to this question is, “We can’t.” Descartes famously said, “I think, therefore I am.” If we’re being honest with ourselves, that fact is the only thing we can know for sure. Everything else is our senses interpreting what happens around us. I can’t know with 100% certainty that even other people are anything beyond NPCs in a universe-sized RPG.

So, if we can’t tell with 100% certainty, can we ethically assume that a computer that claims sentience, and behaves in a way where we can’t tell the difference between it and a living person, is not sentient? I would argue we can’t. If we’re right that it’s not sentient, no harm, no foul. If we’re wrong, we’re abusing and enslaving another intelligent being. That isn’t an outcome that deserves to be determined by the flip of a coin.

So, if we start treating these computers as if they’re the same as any other sentient being, the question of rights comes up immediately. I live in the United States, and here humans over the age of eighteen have the right to vote. They have the right to drive. They have the right to own firearms and run for public office. The age of eighteen is meant to ensure they’ve reached a level of maturity where they can handle those rights. Computers wouldn’t necessarily have a growth period like humans do, so we couldn’t expect a sentient computer to wait eighteen years to be granted its rights.

This isn’t a big deal when it comes to one computer having the right to vote or drive a car, but computers can be cloned. Computer systems can easily be replicated, and millions upon millions of computers are sold every year. What happens when a company figures out it can influence elections by replicating a sentient computer over and over and over, where each of those “instances” has the right to vote? Democracy can be bent towards whoever has the most cash at their disposal. And that’s ignoring the whole T2 scenario, where these sentient computers would also have the right to own firearms.

No, there’s no good outcome where computers emulate people so well that we can’t tell the difference. The best outcome here is to stop before we invent it.

Day 16 of the #100DaysToOffload 2022 Series.



Looking for comments? There are no comments. It's not that I don't care what you think, it's just that I don't want to manage a comments section.

If you want to comment, there's a really good chance I at least mentioned this post on Fosstodon, and you can reply to me there. If you don't have a Mastodon account, I'd suggest giving it a try.

If you don't want to join Mastodon, and you still want to comment, feel free to use my contact information.

Also, don't feel obligated, but if you feel like buying me a ☕ cup of coffee ☕ I won't say no.