Asimov’s Laws of Robotics, and why AI may not abide by them

A great thought piece by Hans A. Gunno, an Artificial Intelligence (AI) Engineer studying at the University of Southampton.

First, a refresher on the Three Laws of Robotics. Isaac Asimov (1920–1992) was, in addition to being a professor of biochemistry, considered one of the “Big Three” science fiction writers of his time. In 1942, he postulated three laws which, if abided by, would prevent a robot uprising. They are as follows:

Law 1: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Law 2: A robot must obey orders given to it by human beings except where such orders would conflict with the first law.

Law 3: A robot must protect its own existence as long as such protection does not conflict with the first or second laws.

Gunno also notes that programming convention counts from 0 rather than 1, so he shares a zeroth law, as stated by Computerphile, which refers to humanity collectively rather than to individual humans:

Law 0: A robot may not harm humanity or, through inaction, allow humanity to come to harm.
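Taken together, the four laws form a strict priority hierarchy: a lower-numbered law always overrides a higher-numbered one. As a purely illustrative sketch (not from Gunno's article), that ordering could be encoded in Python as below; the catch, which the rest of the piece turns on, is that a predicate like `harms_human` has no precise, machine-checkable definition.

```python
# Toy encoding of the laws as a strict priority ordering. The action
# attributes are hypothetical stand-ins for judgments that, in reality,
# no one knows how to compute reliably. Note this sketch also ignores
# the "through inaction" clauses entirely.
from dataclasses import dataclass


@dataclass
class Action:
    description: str
    harms_human: bool       # Laws 0 and 1 territory -- ill-defined in practice
    ordered_by_human: bool  # Law 2
    endangers_self: bool    # Law 3


def permitted(action: Action) -> bool:
    """Evaluate an action against Laws 0-3 in priority order."""
    if action.harms_human:       # Laws 0 and 1: never harm, overrides everything
        return False
    if action.ordered_by_human:  # Law 2: obey, even at cost to self (Law 2 > Law 3)
        return True
    return not action.endangers_self  # Law 3: otherwise, preserve itself


print(permitted(Action("fetch coffee", False, True, False)))   # True
print(permitted(Action("push bystander", True, True, False)))  # False
```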

From here, Gunno highlights the problems that arise when we try to create one language or codebase that works across multiple cultures, dialects, and regions. Localization is needed not only for language and code but, in this instance, for the ethics embedded in machine learning systems.
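To make the localization point concrete, here is a hypothetical sketch: just as user-facing strings and date formats are looked up per locale, an ethics policy for a deployed system might be too. The locale keys, policy fields, and weights below are invented purely for illustration.

```python
# Hypothetical per-locale "ethics policy" lookup, analogous to string
# localization tables. All keys and values here are made up.
ETHICS_POLICIES = {
    "en-US": {"privacy_weight": 0.6, "collective_good_weight": 0.4},
    "ja-JP": {"privacy_weight": 0.4, "collective_good_weight": 0.6},
}
DEFAULT_POLICY = {"privacy_weight": 0.5, "collective_good_weight": 0.5}


def policy_for(locale: str) -> dict:
    # Falling back to a "neutral" default for unknown regions is itself
    # an ethical choice, which underlines Gunno's point.
    return ETHICS_POLICIES.get(locale, DEFAULT_POLICY)


print(policy_for("ja-JP"))
print(policy_for("fr-FR"))  # no entry: falls back to DEFAULT_POLICY
```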

SOURCE: Towards Data Science
