As for the ethical qualms about dehumanizing war even further than it already is: would you make the same argument about bombers? They never see the targets they hit or the collateral damage they create. Would you make the same argument about tanks? How about artillery? What makes it so different that a human pulls the trigger rather than a human ordering a robot to pull the trigger? The robot doesn't have a consciousness: it's a tool, a further precaution to avoid human casualties on side A while fighting side B.
Too much superstition. Too much fear mongering. Tsk.
Not at all. While bomber pilots didn't have to see what they did on the ground, they're still humans who have to live with the fact that they might have killed dozens, if not hundreds, of people over the course of a war. UAV pilots, for instance, are well documented to suffer from PTSD even though they sit in a control room hundreds of miles away (sources: Source 1, Source 2). This article sums up a lot of the psychological effects across a broad range of wars: VVA source. Even if the act of bombing itself didn't scar the crew, they still exposed themselves to combat when they flew missions, be it enemy fighters, anti-air defenses, or the like. It's still a human who has to be put at risk to carry out the action, and sometimes those people don't come home.
And tank crews absolutely suffered for their experiences, and they still do. Whether it's an enemy vehicle that was destroyed (which isn't always a tank, and in a lot of vehicles the crew can absolutely see the occupants who died), infantry gunned down by the machine guns, or watching comrades get killed, being in an enclosed armoured vehicle doesn't take away from the psychological effects. The same argument as the bomber crews applies to artillery: while they may not be exposed to immediate danger, you still have to live with the fact that you probably killed people. And in earlier wars, or in modern conventional wars between two more closely matched sides, artillery is still at risk of counter-battery fire or an airstrike, or, if worst comes to worst, of having their positions overrun, which has absolutely happened before. There's a reason artillerymen are trained to fight as infantry: it's a real threat.
What makes it different when a human is the one making the choices and committing to the actions, compared to sending a machine to fight battles autonomously without human intervention? Judgement calls.
A machine gets destroyed, it just costs money to replace. Nobody actually has to see what it did unless they review the data or footage after it returns to base, well after the fact. When a human is involved, that person is either at risk or witnessing what happens; either way there are consequences, physical or psychological, for the soldier, and because of that people stay aware of what warfare actually costs. Ever since Vietnam, when war was first televised to the public rather than shown only in pre-packaged propaganda reels, the public at home finally started to understand how horrific war is, and support for the war evaporated even though the US was winning. Since then, the military has taken great pains to minimize civilian casualties and the destruction of public infrastructure. The days of carpet bombing or shelling cities for hours on end are over; now it's mostly precision missile strikes meant to minimize collateral damage. Everything the military does now is documented; hell, if you wanted to, you could go on YouTube, find combat footage, and actually watch people die. The public gets outraged when there's a huge loss of life, whether military personnel or civilians, and that's a good thing.
The problem with drones is that there's little accountability with them. The US military uses them to strike targets in countries it hasn't even declared war on, and the brass thinks they're fantastic because there's no pilot to risk (except for his mind; see the previous sources), and airstrikes remain incredibly popular because you don't need boots on the ground. They're effective, to a point, but when there's minimal risk to your own forces, you stop thinking about the consequences. Unlike the previous Afghanistan and Iraq wars, where there were boots on the ground and the military had to help and work with the local population and see first hand what the consequences of its actions were, the current round of bombing is quickly becoming a fast and easy way of waging a war without really having to own up to it. At any point, any of the coalition could pack up their planes, pat themselves on the back, and not have to deal with the problem any longer. You can see how somebody could get rather detached doing that. If we get to the point where there are weapon systems that operate themselves, targeting and engaging the enemy without any operator input, then it's literally a machine running amok with no supervision. It'll follow its programming just fine, sure, and if such systems perform well, we'll get to the point where whenever a nation declares war, it just dumps autonomous robots to fight for it, with nothing more than support bases to repair and rearm them as the extent of the human involvement.
At that point, what's to make people step back and ask themselves whether a deployment is worth it or not? It becomes a cost in dollars instead of in lives, and if you can afford it, why wouldn't you solve all your problems with these robots? Sure, you can read in a report filed later on, by someone who went over the combat footage, about how one fired a Hellfire missile at a group of soldiers and killed 20 innocent people, but unless there's someone there to make the judgement call of whether or not to pull the trigger, it's all kinds of fucked up.
To be clear, I'm all for using unmanned vehicles in war zones, because they're still controlled by soldiers who have to make the judgement call when operating those machines. I'm all for anything that can help save lives and minimize risk to soldiers, but I absolutely do not want to see us go down a road where we simply remove the soldiers from the equation. There have been many, many instances in war where pilots, submarine crews, and other people far removed from the battlefield had to make serious judgement calls about whether or not to pull the trigger, and in some cases that stopped things from escalating, like during the Cuban Missile Crisis, when a Soviet submarine officer refused to authorize a nuclear launch, preventing the Cold War from turning into a nuclear winter. Will a machine have the same ability to make a judgement call, or will it do exactly what it's programmed to do the moment a line is crossed? Will it know the difference between a hostile soldier and a hunter out getting food for his family, or a kid with a toy gun? These are serious concerns that need to be considered.