Robotic Technology

Singer cited as an example the automated artillery system deployed in Afghanistan: the system reacts and shoots on its own. "We can switch the system off, and we can activate it, but our power is not really one of decision. It is now the power of veto," he says. Added to this is the concern that, if automated systems are making decisions, how can we be confident that they are attacking the right targets and obeying the laws of war? The American academic Patrick Lin was recently given the task of studying the ethics of robots, in work commissioned by the Office of Naval Research of the United States armed forces. "When we talk about autonomous robots," he argues, "a natural response might be to program them to be ethical. Isn't that what we do with our computers?"
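To make Singer's "power of veto" point concrete, here is a minimal sketch, in Python, of an engagement loop in which the machine fires by default and the human operator can only cancel within a time window. Every name, value, and behaviour here is hypothetical; it illustrates the control structure Singer describes, not any real system.

```python
import time

VETO_WINDOW_S = 2.0  # operator's cancellation window; an assumed value

def operator_vetoed(deadline):
    """Poll for a human veto until the deadline; stubbed to never veto."""
    while time.time() < deadline:
        time.sleep(0.1)  # a real console would poll an operator input queue here
        # if veto_button_pressed(): return True  (hypothetical check)
    return False

def engagement_loop(tracks):
    for track in tracks:
        # The machine has already "decided" to fire; the human may only cancel.
        deadline = time.time() + VETO_WINDOW_S
        if operator_vetoed(deadline):
            print(f"veto received: holding fire on {track}")
        else:
            print(f"no veto within {VETO_WINDOW_S}s: engaging {track}")

engagement_loop(["incoming-mortar-01"])
```

The ethically significant detail is the default: if the human does nothing, the machine fires.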

A striking example of a robot that needs careful programming is the driverless vehicle developed for the Pentagon, called the EATR. It can refuel itself on long journeys by gathering organic material, which raises the disturbing prospect of a machine consuming corpses on the battlefield. Its inventor, Dr. Robert Finkelstein of Robotic Technology, insists that it will consume "organic material, but mostly plant matter. The robot can only do what it is programmed to do; it has a menu," he adds.
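Finkelstein's "menu" remark amounts to a whitelist: the robot consumes only the fuel sources its programming enumerates. A minimal sketch, assuming a hypothetical plant-based menu (the EATR's actual fuel list is not given here):

```python
FUEL_MENU = {"wood", "grass", "brush", "paper"}  # assumed plant-based menu

def can_consume(material: str) -> bool:
    """The robot does only what it is programmed to do: check the menu."""
    return material.lower() in FUEL_MENU

for item in ["wood", "grass", "animal remains"]:
    print(item, "->", "consume" if can_consume(item) else "reject")
```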

All this worries sceptics such as Professor Noel Sharkey, co-founder of the International Committee for Robot Arms Control, who says the decision to kill must remain in human hands. "You can train it all you want, give it all the ethical rules in the world; if its input is not good, it is no good at all. Human beings can be held accountable; machines cannot."

If a robot cannot be relied on to distinguish between enemy combatants and innocent non-combatants, Lin suggests another solution. "If there is an area of fighting so intense that you can assume anyone there is a combatant," he argues, "then unleash the robots in that kind of scenario. Some people call that a kill box. Any target inside a kill box is presumed to be a legitimate target." (The rule is simple enough to sketch in code; see the end of this section.)

According to the BBC News article published by latino.msn.com, other researchers suggest that robots may avoid the failings of human soldiers. "Properly programmed robots are less likely to make mistakes and kill innocent non-combatants, because they are not emotional, will not be afraid, and will not act irresponsibly in some situations," says Finkelstein.

But Christopher Coker of the London School of Economics, a witness to wars past and present, disagrees. "We should put our trust in the human factor," he says. "Unfortunately, the military in its reports tends to see the human factor as the weakest link. I do not think it is the weakest; it is the strongest. Computers will never be able to simulate the essence of the warrior: the mindset and ethical outlook of the professional soldier."

The military revolution in robotics has already advanced rapidly in the air, where remotely operated unmanned aircraft are central to conflicts such as the one in Afghanistan. On the ground, the use of robots has so far been more limited. But given political and popular concern about casualties among NATO forces, the sales pitch of robot manufacturer Bob Quinn is likely to prove convincing: "We are going to keep our kids safe, and kill the enemy."
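Lin's kill box, as described above, reduces to a geofencing predicate: a position inside the declared box authorises engagement. A minimal sketch, with made-up coordinates and a hypothetical KillBox type:

```python
from dataclasses import dataclass

@dataclass
class KillBox:
    lat_min: float
    lat_max: float
    lon_min: float
    lon_max: float

    def contains(self, lat: float, lon: float) -> bool:
        return (self.lat_min <= lat <= self.lat_max
                and self.lon_min <= lon <= self.lon_max)

def engagement_authorized(box: KillBox, lat: float, lon: float) -> bool:
    # Inside the box, any target is presumed legitimate; outside it,
    # the decision stays with a human operator.
    return box.contains(lat, lon)

box = KillBox(34.50, 34.60, 69.10, 69.20)  # made-up coordinates
print(engagement_authorized(box, 34.55, 69.15))  # True: presumed combatant
print(engagement_authorized(box, 34.70, 69.15))  # False: human decision required
```

The sketch also makes the sceptics' objection visible: the entire ethical judgment is carried by where the box is drawn, not by anything the robot itself computes.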