Alex Constantine - October 24, 2012
By Thomas L. McDonald
Patheos, October 2, 2012
“It is not my belief that an unmanned system will be able to be perfectly ethical in the battlefield, but I am convinced that they can perform more ethically than human soldiers are capable of.” –Ronald Arkin, Georgia Institute of Technology
What’s the man talking about? Autonomous drones: dumb metal programmed by fallible humans to wage a more merciful war. (There’s no such thing. Even Star Trek figured that out.)
There is a fundamentally anti-human belief that we can program an ethical machine that will coldly evaluate a situation and always make the right choice, unlike these icky meat sacks and their faulty programming. Humans, in this evaluation, are just bad code. Remove them from the loop, and all will be well.
Professor, let me introduce you to Lieutenant Colonel Stanislav Yevgrafovich Petrov, courtesy of Leah Libresco, who declined to annihilate the planet despite overwhelming (and false) evidence that this would have been the proper course of action. The computer would have launched. The human, tempered by judgment and mercy, did not.
Obama’s drone war is already one of the most horrific, merciless, cold, inhuman war crimes of our time. Automation wouldn’t make it any better. Giving drones the power and authority to kill, removing the human from the decision loop (something an officer once told me would never, ever happen), is madness to the nth degree.
Professor Arkin is an expert on the subject of autonomous lethality in robots. I would suggest that this is nothing for which we need experts. We need to say: “Okay, no. We don’t program robots with that capability, whatever short-sighted and spurious reasons you care to cook up to the contrary.” We would be better without any robots at all than with even one programmed with the capacity to kill. Robots aren’t actually necessary, and humanity can do just fine without them. You don’t need to fear a world without robots. You need to fear a world with people who feel robots can be more “ethical” than humans. You need to fear a world where morality has collapsed so completely that an elite feels the need to restore that morality through machines. A machine is incapable of being a moral agent.