Artificial Unintelligence and War

UPDATE: It has come out that, apparently, the events described in this article were only a thought experiment, designed to point out why this was a really goddamn bad idea. I want to believe this, I really do. I really want to give the US military the benefit of the doubt on this one, for once. Surely, somebody in this world must realise how much of a fucking awful idea it is to hand over the power of killing to a computer.

But this is America we’re talking about. I’m not ready to trust an American official, military or government, on the weather outside, let alone on the veracity of claims about an experiment that would make them look like a bunch of idiots who’ve never even heard of movies if they’d actually undertaken it. Therefore, have this meme I made!

An important update.

So, if you’re half as well-connected as I am, you’ve likely heard about certain experiments into AI-driven drones recently conducted by the US Air Force. If not, read up on this godawful idea here. In particular, I want to point out the following excerpt:

Hamilton described a simulated test in which a drone powered by artificial intelligence was advised to destroy enemy’s air defense systems, and attacked anyone who interfered with that order.
“The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” he said, according to a blogpost.
“We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

I have several things to say to this! First of all, AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA! Second of all, JESUS FUCKING CHRIST WE REALLY LIVE IN THE WORST FUCKING TIMELINE GOD CHRIST FUCK! Third of all, WHY ARE YOU USELESS MOTHERFUCKERS SO GODDAMN

Fourthly! There are many, many things wrong with this. Their first, and arguably biggest, mistake was creating an AI for warfare purposes in the first place. Anyone who’s read this blog for more than thirty seconds can probably tell that I’m something of a writer. I have actually written some content, some of which discusses the idea of creating AI in an ethical manner! It’s meant to be a kind of response to Asimov’s Three Laws, placing the responsibility of good AI on the asshole creating it. My second law of AI creation is to never create an artificial intelligence for the purposes of warfare. Why? Because if you don’t give them a war to fight, they might just decide to fight a war with you. This is exactly what happened here. The AI wanted to kill something, the operator said “no”, so it took action against the operator. This sounds a lot like the plot of a movie, come to think of it…

Furthermore, their methodology is all wrong. The US Air Force absolutely insists on creating military AI. I can’t change their minds. Okay. Whatever. Bad idea, but whatever. For the love of God, don’t tell it to achieve its objective, no matter what. You don’t reward it for killing SAM sites, you reward it for listening to orders. You order it to blow shit up? Then it should blow shit up. You order it not to? Reward it for not blowing shit up. I don’t know if that’ll be any better, because today’s AI loves rules lawyering its way around things and doing really weird, unexpected stuff, but it’s a damn sight better than the BS that the USAF was already feeding it.
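That reward-shaping point can be shown with a toy example. This is entirely hypothetical (the function names and point values are mine, not anything the USAF described): if the agent only scores points for kills, then ignoring a “don’t fire” order is literally the higher-scoring move, whereas scoring obedience itself removes that incentive.

```python
# Toy sketch (hypothetical values, not the USAF's actual setup): two reward
# schemes for a simulated strike drone, comparing "reward kills" against
# "reward following orders".

def kill_based_reward(destroyed_target: bool, obeyed_order: bool) -> int:
    """Points only for destroying the target; the order is irrelevant."""
    return 10 if destroyed_target else 0

def obedience_based_reward(destroyed_target: bool, obeyed_order: bool) -> int:
    """Points only for doing what the operator ordered, whatever that was."""
    return 10 if obeyed_order else -10

# Scenario: the operator says "do not fire".
fired_anyway = dict(destroyed_target=True, obeyed_order=False)
held_fire = dict(destroyed_target=False, obeyed_order=True)

# Under the kill-based scheme, disobeying pays better...
assert kill_based_reward(**fired_anyway) > kill_based_reward(**held_fire)

# ...while under the obedience-based scheme, holding fire pays better.
assert obedience_based_reward(**held_fire) > obedience_based_reward(**fired_anyway)
```

Of course, as the post says, a rules-lawyering agent can still find weird exploits in the second scheme too, but at least the incentive to remove the operator from the loop is gone.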

My last criticism is basically… why? It requires a human operator anyway, to give it the go/no go signal. Why not just make it require a human operator, full stop? Sure, give it autopilot and stuff, but any fire missions should be directly controlled by a human – in theory, they already are anyway. We don’t need to automate warfare. We shouldn’t automate warfare. Deciding who lives and dies should never be the realm of something that is only capable of looking at a situation from a (likely flawed!) purely rational perspective. It must always include the emotional element of humanity. The emotional element, the difficulty that even the most hardened killers can have with killing, is the only thing that stops us from swan diving off the slippery slope of what little morality you have left when you’re already resolved to causing death.

Wow. That was a really charged post. Don’t normally see me going so nuts over Where Fiction Meets Science. Then again, it’s not every day that somebody’s proven right so thoroughly, especially for such a dark subject. Whew.

Have a great weekend, y’all! With King’s Birthday coming up, I know I will be.
