
Robotize fast food 2015







Even as we grapple with human ethics, philosophers are beginning to worry about robot ethics.

We're not talking here about HAL 9000, the rogue computer in 2001: A Space Odyssey. We're talking instead about how robots – for example in driverless cars – are programmed. Does the program call for the robot to drive into a telephone pole or an oncoming car to avoid a pedestrian in a crosswalk, or does it take out the pedestrian to save the passenger's life? Not a trivial decision. With a human driver, we expect decisions based on instinctive response, for better or worse.
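To make concrete what "programming the decision" means, here is a minimal, purely hypothetical sketch in Python. Nothing in it comes from any real vehicle system: the names CollisionOption and choose_maneuver, and the harm weights, are invented for illustration. The point is only that the trade-off has to be written down in advance as an explicit rule.

    # Toy illustration: an explicit, pre-programmed rule for the crash dilemma.
    from dataclasses import dataclass

    @dataclass
    class CollisionOption:
        maneuver: str                    # e.g. "swerve into pole", "brake straight"
        expected_pedestrian_harm: float  # 0.0 (no harm) to 1.0 (fatal)
        expected_passenger_harm: float

    def choose_maneuver(options, pedestrian_weight=1.0, passenger_weight=1.0):
        # The weights ARE the ethical decision -- someone has to choose them beforehand.
        return min(
            options,
            key=lambda o: (pedestrian_weight * o.expected_pedestrian_harm
                           + passenger_weight * o.expected_passenger_harm),
        )

Whether the weights favor the passenger, the pedestrian, or treat them equally is exactly the kind of value judgment roboethics asks manufacturers and regulators to make, and to defend.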


The ethics of robots seems like an appropriate topic for science fiction writers. As far back as 1942, Isaac Asimov proposed three ethical laws for robots:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey orders given to it by human beings, except when such orders conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
While Asimov's laws work pretty well in guiding the plot of a Terminator movie, they may be inadequate to steer the driverless car that faces a choice of killing the passenger or the pedestrian.

Roboethics – a term coined as recently as 2002 – may differ notably from "regular" ethics in that it must straddle physical practicality and what we humans might call "doing the right thing." That's why this emerging field is multidisciplinary, with input from diverse experts in computer science, sociology, industrial design, theology, cognitive science and, of course, ethics.


    We’re not talking here about HAL 9000, the rogue computer in 2001: A Space Odyssey. Even as we grapple with human ethics, philosophers are beginning to worry about robot ethics.








