PDA

View Full Version : psychopathic robots?



TimH
07-22-2009, 03:01 PM
Terminator here we come...



Robo-Ethicists Want to Revamp Asimov's 3 Laws
Wired News (07/22/09) Ganapati, Priya


Researchers continue to work to make robots safer to be around humans, but some robot experts say the key is to stop making robots that lack ethics. "If you build artificial intelligence but don't think about its moral sense or create a conscious sense that feels regret for doing something wrong, then technically it is a psychopath," says Josh Hall, author of "Beyond AI: Creating the Conscience of a Machine." For years, science fiction author Isaac Asimov's Three Laws of Robotics have served as guidelines for robot behavior. The three laws are: a robot may not injure a human being or allow one to come to harm; a robot must obey orders given by human beings; and a robot must protect its own existence. However, as robots are increasingly incorporated into the real world, some believe that Asimov's laws are too simplistic. Robo-ethicists want to develop a set of guidelines that outline how to punish a robot, decide who regulates robots, and even create a "legal machine language" to help police the next generation of intelligent automated devices. Willow Garage research scientist Leila Katayama says even if robots are not completely autonomous, there needs to be a clear set of rules governing responsibility for their actions. The next generation of robots will be able to make independent decisions and work relatively unsupervised, which means rules must be established that cover how humans should interact with robots and how robots should behave, robo-ethicists say.
View Full Article (http://www.wired.com/gadgetlab/2009/07/robo-ethics/)
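The article summarizes Asimov's laws as a strict priority ordering: the First Law overrides the Second, which overrides the Third. A minimal sketch of that ordering, with every name (`Action`, `harms_human`, `permitted`, etc.) invented purely for illustration and not drawn from any real robotics system:

```python
# Hypothetical sketch of Asimov's Three Laws as an ordered rule check.
# All names here are illustrative assumptions, not a real API.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool       # would this action injure a human?
    allows_harm: bool       # would it let a human come to harm through inaction?
    ordered_by_human: bool  # was this action commanded by a human?
    risks_self: bool        # does it endanger the robot itself?

def permitted(a: Action) -> bool:
    """Apply the Three Laws in strict priority order."""
    # First Law: never injure a human or allow one to come to harm.
    if a.harms_human or a.allows_harm:
        return False
    # Second Law: obey human orders (already subordinate to the First Law).
    if a.ordered_by_human:
        return True
    # Third Law: protect own existence (subordinate to the first two).
    return not a.risks_self
```

Even an order from a human cannot override the First Law here, which is exactly the priority the article describes; the robo-ethicists' point is that real situations rarely reduce to four clean booleans.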

seanz
07-22-2009, 04:12 PM
Moral Behaviour setting : Off
Crush, Kill, Destroy setting : On

BarnacleGrim
07-22-2009, 04:16 PM
Dogs are completely autonomous, but in Swedish law they never have any culpability for their actions, only the owners do. Why not do the same with robots?

Tom Galyen
07-22-2009, 04:58 PM
I am reading P. W. Singer's book "Wired for War." It is about how the use of robotics is changing war. We have already broken the first of Asimov's 3 Laws by using robots to kill. I realize that what we are using now does not measure up to Asimov's idea of a true robot, but as Singer points out in his book, the path is already laid; we've come too far to turn back. What he and others call "The Singularity" could come before 2025. Whether this means a world of heaven on earth or a world such as Terminator remains to be seen.

This is an interesting read and I recommend it. I am reading it on a Kindle app on my iPhone which seems apropos considering the subject.

2MeterTroll
07-22-2009, 07:08 PM
a bot is a machine; the owner is culpable.
killing one is the same as killing a toaster or wrecking a car.

when i see one with feelings and actual brains i will reconsider.

johnw
07-22-2009, 07:22 PM
The first robot to kill a person was a welding robot in an automobile manufacturing plant. Of course, the robot didn't know it was killing someone. So the first trick is to get them to recognize a person. Unfortunately, the defense department has the big research budget, so probably the first robots able to recognize they are killing someone will have been developed for the purpose of killing people -- members of the enemy army.

This is a much bigger problem than finding a way to make robots feel guilty. Mind, I think the world needs more sensitive robots with as rich an emotional life as possible, I just don't think the Defense Department will finance the research, so the psychotic killer robots will come first.

2MeterTroll
07-22-2009, 08:01 PM
we already have robots with rich emotional lives trained to kill humans. the trick's going to be shutting them off.

Bob (oh, THAT Bob)
07-22-2009, 10:10 PM
The first robot to kill a person was a welding robot in an automobile manufacturing plant. Of course, the robot didn't know it was killing someone. So the first trick is to get them to recognize a person. Unfortunately, the defense department has the big research budget, so probably the first robots able to recognize they are killing someone will have been developed for the purpose of killing people -- members of the enemy army.

This is a much bigger problem than finding a way to make robots feel guilty. Mind, I think the world needs more sensitive robots with as rich an emotional life as possible, I just don't think the Defense Department will finance the research, so the psychotic killer robots will come first.

100% true. What do you think the Darpa Challenge races, and now the Darpa Urban Challenge, were about? And those are just the tip of the iceberg.

johnw
07-22-2009, 10:44 PM
Chilling.

Vince Brennan
07-22-2009, 11:11 PM
Hell, I thought this was about the Cambridge Police department.

johnw
07-22-2009, 11:22 PM
Naw, robots wouldn't have that emotional response to some old guy asking for their badge number. They'd just shoot him.

skuthorp
07-23-2009, 07:07 AM
Posted TimH: "Robo-ethicists want to develop a set of guidelines that outline how to punish a robot, decide who regulates robots, and even create a 'legal machine language'."

How to punish a robot? So already it's the robot that's culpable? Do we build a robot prison, or just turn them off? Legal machine language? Oh yes, can you see a team of lawyers writing the operating program? And 'robo-ethicists'? Good grief!