Robot ethics

Morals and the machine

As robots grow more autonomous, society needs to develop rules to manage them

Jun 2nd 2012 | From the print edition

IN THE classic science-fiction film “2001”, the ship’s computer, HAL, faces a dilemma. His instructions require him both to fulfil the ship’s mission (investigating an artefact near Jupiter) and to keep the mission’s true purpose secret from the ship’s crew. To resolve the contradiction, he tries to kill the crew.

As robots become more autonomous, the notion of computer-controlled machines facing ethical decisions is moving out of the realm of science fiction and into the real world. Society needs to find ways to ensure that they are better equipped to make moral judgments than HAL was.

A bestiary of robots

Military technology, unsurprisingly, is at the forefront of the march towards self-determining machines (see Technology Quarterly). Its evolution is producing an extraordinary variety of species. The Sand Flea can leap through a window or onto a roof, filming all the while. It then rolls along on wheels until it needs to jump again. RiSE, a six-legged robo-cockroach, can climb walls. LS3, a dog-like robot, trots behind a human over rough terrain, carrying up to 180kg of supplies. SUGV, a briefcase-sized robot, can identify a man in a crowd and follow him. There is a flying surveillance drone the weight of a wedding ring, and one that carries 2.7 tonnes of bombs.

Robots are spreading in the civilian world, too, from the flight deck to the operating theatre (see article). Passenger aircraft have long been able to land themselves. Driverless trains are commonplace. Volvo’s new V40 hatchback essentially drives itself in heavy traffic. It can brake when it senses an imminent collision, as can Ford’s B-Max minivan. Fully self-driving vehicles are being tested around the world. Google’s driverless cars have clocked up more than 250,000 miles in America, and Nevada has become the first state to regulate such trials on public roads. In Barcelona a few days ago, Volvo demonstrated a platoon of autonomous cars on a motorway.

As they become smarter and more widespread, autonomous machines are bound to end up making life-or-death decisions in unpredictable situations, thus assuming—or at least appearing to assume—moral agency. Weapons systems currently have human operators “in the loop”, but as they grow more sophisticated, it will be possible to shift to “on the loop” operation, with machines carrying out orders autonomously.

As that happens, they will be presented with ethical dilemmas. Should a drone fire on a house where a target is known to be hiding, which may also be sheltering civilians? Should a driverless car swerve to avoid pedestrians if that means hitting other vehicles or endangering its occupants? Should a robot involved in disaster recovery tell people the truth about what is happening if that risks causing a panic? Such questions have led to the emergence of the field of “machine ethics”, which aims to give machines the ability to make such choices appropriately—in other words, to tell right from wrong.
One way of dealing with these difficult questions is to avoid them altogether, by banning autonomous battlefield robots and requiring cars to have the full attention of a human driver at all times. Campaign groups such as the International Committee for Robot Arms Control have been formed in opposition to the growing use of drones.

But autonomous robots could do much more good than harm. Robot soldiers would not commit rape, burn down a village in anger or become erratic decision-makers amid the stress of combat. Driverless cars are very likely to be safer than ordinary vehicles, as autopilots have made planes safer. Sebastian Thrun, a pioneer in the field, reckons driverless cars could save 1m lives a year.

Instead, society needs to develop ways of dealing with the ethics of robotics—and get going fast. In America states have been scrambling to pass laws covering driverless cars, which have been operating in a legal grey area as the technology runs ahead of legislation. It is clear that rules of the road are required in this difficult area, and not just for robots with wheels.

The best-known set of guidelines for robo-ethics are the “three laws of robotics” coined by Isaac Asimov, a science-fiction writer, in 1942. The laws require robots to protect humans, obey orders and preserve themselves, in that order. Unfortunately, the laws are of little use in the real world. Battlefield robots would be required to violate the first law. And Asimov’s robot stories are fun precisely because they highlight the unexpected complications that arise when robots try to follow his apparently sensible rules. Regulating the development and use of autonomous robots will require a rather more elaborate framework. Progress is needed in three areas in particular.

Three laws for the laws of robotics

First, laws are needed to determine whether the designer, the programmer, the manufacturer or the operator is at fault if an autonomous drone strike goes wrong or a driverless car has an accident. In order to allocate responsibility, autonomous systems must keep detailed logs so that they can explain the reasoning behind their decisions when necessary. This has implications for system design: it may, for instance, rule out the use of artificial neural networks, decision-making systems that learn from example rather than obeying predefined rules.

Second, where ethical systems are embedded into robots, the judgments they make need to be ones that seem right to most people. The techniques of experimental philosophy, which studies how people respond to ethical dilemmas, should be able to help.

Last, and most important, more collaboration is required between engineers, ethicists, lawyers and policymakers, all of whom would draw up very different types of rules if they were left to their own devices. Both ethicists and engineers stand to benefit from working together: ethicists may gain a greater understanding of their field by trying to teach ethics to machines, and engineers need to reassure society that they are not taking any ethical short-cuts.

Technology has driven mankind’s progress, but each new advance has posed troubling new questions. Autonomous machines are no different. The sooner the questions of
moral agency they raise are answered, the easier it will be for mankind to enjoy the benefits that they will undoubtedly bring.

Questions about Content
1. Does the writer acknowledge the opposing viewpoint, and does she address

Questions about Organization
2. What strategy of expository development is used most in this essay?

Questions about Style
3. Comment on the quality of sentence variety in this essay.

Opinions for Essay Writing
4. Paraphrase
5.