
Even as we grapple with human ethics, philosophers are beginning to worry about robot ethics. We're not talking here about HAL 9000, the rogue computer in 2001: A Space Odyssey.

While Asimov's laws work pretty well in guiding the plot of a Terminator movie, they may be inadequate to steer a driverless car that faces the choice of killing the passenger or the pedestrian. Roboethics – a term coined as recently as 2002 – may differ notably from "regular" ethics in that it must straddle physical practicality and what we humans might call "doing the right thing." That's why this emerging field is multidisciplinary, with input from diverse experts in computer science, sociology, industrial design, theology, cognitive science and, of course, ethics.

One way to crystallize the questions posed by roboethics is to move from viewing robots simply as having artificial intelligence to seeing them as "artificial moral agents." The challenge becomes anticipating and deciding how you want an artificial moral agent to behave in a given situation.

As robotics becomes more commonplace in the manufacturing and service sectors as a route to efficiency, the number of ethical issues to decide for artificial moral agents will increase exponentially.

Consider a robotized fast food restaurant. An artificial moral agent won't be physically able to chase down a customer who accidentally leaves his or her wallet on the counter. But should the artificial moral agent that prepares the food be programmed to detect E. coli bacteria? Adding food safety duties to the robotic chef probably means adding substantial technology and cost.

Those who downplay the importance of roboethics may say, "Let the marketplace decide." Those who care about roboethics may counter, "It would be morally wrong to program an artificial moral agent to ignore food safety."
