him. A human soldier, watching the scene remotely via a fibre-optic link, decides whether or not to issue a warning (through a loudspeaker) or press the fire button.

The robot sentry, the Samson Remote Weapon Station, could function without human intervention, says David Ishai of Rafael, its Israeli manufacturer, based in Haifa. But, he says, switching to automatic mode would be a bad idea—and illegal to boot. Traditional rules of engagement stipulate that a human must decide if a weapon is to be fired. But this restriction is starting to come under pressure.
Already, defence planners are considering whether a drone aircraft should be able to fire a weapon based on its own analysis. In 2009 the authors of a US Air Force report suggested that humans will increasingly operate not “in the loop” but “on the loop”, monitoring armed robots rather than fully controlling them. Better artificial intelligence will eventually allow robots to “make lethal combat decisions”, they wrote, provided legal and ethical issues can be resolved.

A report on the matter issued by Britain’s Ministry of Defence last year argued that if a drone’s control system takes appropriate account of the law on armed conflicts (basically military necessity, humanity, proportionality and the ability to distinguish between military targets and civilians), then an autonomous strike could meet legal norms. Testing and certifying such a system would be difficult. But the authors concluded that “as technology matures…policymakers will need to be aware of the potential legal issues and take advice at a very early stage of any new system’s procurement cycle.”

Pressure will grow for armies to automate their robots if only so machines can shoot before being shot, says Jürgen Altmann of the Technical University of Dortmund, in Germany, and a founder of the International Committee for Robot Arms Control, an advocacy group. Some robot weapons already operate without human operators to save precious seconds. An incoming anti-ship missile detected even a dozen miles away can be safely shot down only by a robot, says Frank Biemans, head of sensing technologies for the Goalkeeper automatic ship-defence cannons made by Thales Nederland.

Admittedly, that involves a machine destroying another machine. But as human operators struggle to assimilate the information collected by robotic sensors, decision-making by robots seems likely to increase. This might be a good thing, says Ronald Arkin, a roboticist at the Georgia Institute of Technology, who is developing “ethics software” for armed robots. By crunching data from drone sensors and military databases, it might be possible to predict, for example, that a strike from a missile could damage a nearby religious building. Clever software might be used to call off attacks as well as initiate them.

In the air, on land and at sea, military robots are proliferating. But the revolution in military robotics does have an Achilles heel, notes Emmanuel