I don't have a subscription to this magazine. If any of you do, I encourage you to leave comments to counter their propaganda and pseudo-scientific arguments.

This article exemplifies how the scientific establishment has been a tool of (and massively funded by) the military-industrial complex. The real reason killer robots are desirable to the ruling elites is that they would no longer need to rely on the loyalty and total control of human troops to carry out wars of conquest and the suppression of popular rebellion. The conscience of human troops is always a potential liability, no matter how much indoctrination they've gone through. This is also one of the driving reasons for the pursuit of man-machine hybrids, mind-to-mind communication, hive minds, and various other forms of mind control.


8 November 2017

Letting robots kill without human supervision could save lives

Calls to ban killer robots ignore the fact that human soldiers can make lethal mistakes. If driverless cars will save lives, perhaps armed machines can as well
NEXT week, a meeting at UN headquarters in Geneva will discuss autonomous armed robots. Unlike existing military drones, which are controlled remotely, these new machines would identify and attack targets without human intervention. Groups including the Campaign to Stop Killer Robots hope the meeting will lead to an international ban.
But while fiction is littered with cautionary tales of what happens when you put guns in the cold, metallic hands of a machine, the situation may not be as simple as “human good, robots bad”.
To understand why, we should look at what people are saying about the ethics of driverless cars, which advocates see as a way of reducing accidents. If your life is safer in the hands of a robot car than a human driver, might the same apply for military hardware?
Clearly, replacing a human combatant with a robot one is safer for that individual, but armed robots could also reduce civilian casualties. For example, a squad that has to clear a building must make a split-second decision about whether the occupant of a room is an armed insurgent or an innocent civilian – any hesitation could get them killed. A robot can wait for confirmation – until the enemy actually starts firing.
The same principle applies to air strikes. An autonomous system can make several runs over a target to confirm it is really an enemy outpost, but a pilot can risk only one pass. In both cases, the only downside of excessive caution is the loss of machines, not lives.
Human rights groups now see the use of precision-guided weapons as …