07-25-2018, 05:54 PM
eschaider
CC Member
 
Join Date: Feb 2006
Location: Gilroy, CA
Cobra Make, Engine: SPF 2291, Whipple Blown & Injected 4V ModMotor
Posts: 2,741

The issue you are identifying, Dan, is just one variation of the decision-tree dilemma that must be resolved by the on-board ethics logic. The challenge you have accurately identified has only one possible solution: all ethics programs must employ a basic set of cardinal rules and values to ensure that every robot comes to the same conclusion, the same way, at the same time.

A giant in the science fiction space, Isaac Asimov, developed the most basic of these cardinal rules in what has become known as Asimov's "Three Laws of Robotics". Those three laws, as Asimov conceived them, are (in order):

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

While simple at first glance, the three laws are profoundly complete and all-encompassing, and they represent an excellent foundation on which to build the remaining ethical and operational routines.
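If it helps to see that precedence as logic rather than prose, here is a minimal sketch in Python of an ethics routine that ranks candidate actions by the laws in strict order. This is my own illustration, not anything from Asimov or an actual robot controller; the Action flags and the law_priority/choose helpers are hypothetical names, and a real system would derive those flags from its perception and prediction subsystems.

Code:
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    # Hypothetical cost flags for a candidate action; a real robot
    # would derive these from perception and prediction, not set
    # them by hand.
    name: str
    harms_human: bool = False      # would violate the First Law
    disobeys_order: bool = False   # would violate the Second Law
    endangers_self: bool = False   # would violate the Third Law

def law_priority(action: Action) -> tuple:
    # Lexicographic key: the First Law outranks the Second, and the
    # Second outranks the Third, exactly as Asimov ordered them.
    return (action.harms_human, action.disobeys_order, action.endangers_self)

def choose(actions: list) -> Action:
    # Pick the action that best satisfies the laws in strict
    # precedence order. Given identical inputs, every robot running
    # this routine reaches the same conclusion the same way.
    return min(actions, key=law_priority)

# A robot ordered to stand by while a human is endangered must act:
# self-risk (Third Law) and disobedience (Second Law) are both
# acceptable costs, but allowing harm (First Law) is not.
options = [
    Action("stand by", harms_human=True),  # inaction allows harm
    Action("intervene", disobeys_order=True, endangers_self=True),
]
print(choose(options).name)  # -> intervene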

Asimov conceived these rules/guidelines almost 80 years ago, before we actually had any real robots to work with. The guy was not only an award-winning author with engaging publications, he was also quite gifted across a wide range of disciplines.


Ed
__________________


Help them do what they would have done if they had known what they could do.