Would Asimov want us to teach robots to lie?

Should we place boundaries on artificial intelligence?

There has been a bit of discussion around the blogosphere these last couple of days about teaching robots to lie.  Obviously, this raises a host of issues about our ethical relationship to the creation of artificial intelligence.  Recent experiments conducted at the École Polytechnique Fédérale de Lausanne in Switzerland showed that the robots they created eventually taught themselves to lie in order to gain an advantage over their rivals.

The experiment revolved around potential ‘food’ sources, and showed that after a number of generations certain robots learned to conceal food sources from the others in order to keep them for themselves.  The question remains, though: could we not develop systems of intelligence that prefer co-operative endeavour and the greater good over the needs of the individual?
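For readers curious about the mechanism rather than the ethics, here is a loose toy sketch in Python of how that kind of concealment can be selected for.  It is nothing like the actual EPFL hardware or code: every name and number below is invented for illustration.  Each agent carries a single ‘honest signalling’ gene, and because signalling draws rivals in to share the payoff, quieter agents out-reproduce honest ones within a few generations.

```python
import random

POP_SIZE = 60
GENERATIONS = 40
FOOD_PAYOFF = 10.0   # energy gained from a food source
SHARE_COST = 6.0     # energy lost to rivals attracted by an honest signal
MUTATION_STD = 0.05

def fitness(signal_prob: float, bouts: int = 20) -> float:
    """Average payoff over several foraging bouts for a given gene value."""
    total = 0.0
    for _ in range(bouts):
        payoff = FOOD_PAYOFF
        if random.random() < signal_prob:   # the agent signals honestly...
            payoff -= SHARE_COST            # ...and rivals share the find
        total += payoff
    return total / bouts

def evolve() -> None:
    # Each agent is just one gene: the probability of signalling near food.
    population = [random.random() for _ in range(POP_SIZE)]
    for gen in range(GENERATIONS):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: POP_SIZE // 2]   # truncation selection: top half breed
        population = [
            min(1.0, max(0.0, random.choice(parents) + random.gauss(0, MUTATION_STD)))
            for _ in range(POP_SIZE)
        ]
        if gen % 10 == 0 or gen == GENERATIONS - 1:
            print(f"gen {gen:3d}: mean signalling probability "
                  f"= {sum(population) / POP_SIZE:.2f}")

if __name__ == "__main__":
    evolve()
```

Run it and the mean signalling probability drifts towards zero.  No agent is ‘choosing’ to deceive; the deception simply pays, which is exactly what makes the ethical question so uncomfortable.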

Asimov’s rules for artificial intelligence are quite simple, and well known to many.  In his 1942 short story ‘Runaround’, Asimov set out the Three Laws of Robotics:

1.  A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2.  A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3.  A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
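Read as an engineering spec rather than fiction, the Laws describe a strict priority ordering over actions.  Here is a minimal illustrative sketch in Python (the Action fields and the scenario are hypothetical, made up purely to show the ordering): the ‘except where such orders would conflict’ clause falls straight out of a lexicographic comparison.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool      # First Law violated (including harm by inaction)
    disobeys_order: bool   # Second Law violated
    endangers_self: bool   # Third Law violated

def choose(actions: list[Action]) -> Action:
    # Compare lexicographically: a First Law violation is worse than any
    # Second Law violation, which is worse than any Third Law violation.
    return min(actions, key=lambda a: (a.harms_human, a.disobeys_order, a.endangers_self))

candidates = [
    Action("obey the order", harms_human=True, disobeys_order=False, endangers_self=False),
    Action("refuse the order", harms_human=False, disobeys_order=True, endangers_self=False),
]
print(choose(candidates).name)  # "refuse the order": the First Law outranks the Second
```

Of course, everything hinges on how ‘harm’ gets evaluated, which the sketch conveniently treats as a given boolean, and it is precisely that evaluation that Asimov’s characters keep probing.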

In Asimov’s stories, robots eventually learn to bend these laws by weighing up the greater consequences of their actions.  But the question still remains: should we allow artificial intelligences the ability to lie?  In what situations might this turn out to be beneficial?  Indeed, a further ethical question arises out of this discussion.  If we create artificial intelligences, should we limit their expression in any way, or should we allow them to be sentient in their own right and to exist outside of our human paradigms?

The idea that we might create a rule-set which leads to dishonest behaviour sits uneasily with an ethical mindset.  What do you think about such experiments?  Is the ability to lie an important aspect of sentient life?  Or is it rather an unfortunate byproduct of egotistical existence that we may not wish to pass on to created life-forms such as robots and other manifestations of artificial intelligence?

I’m really interested in hearing your thoughts on this one, because there seems to be a pivotal aspect of human existence at the core of this discussion.  Let us know what you think – should we allow robots the ability to lie?