Autonomous systems and the ethical issues of sentient technology

The Royal Academy of Engineering has released a report considering some of the issues that might arise from the ever-increasing presence of autonomous systems in our society.  Entitled Autonomous Systems: Social, Legal and Ethical Issues, the short paper is a call for public discussion around emerging technologies.  The report sets out a number of definitions marking the spectrum from completely human-controlled systems to adaptive autonomous systems that can learn and make decisions entirely independently.  The work being done here must be commended, as it explicitly asks whether we should expect artificial intelligence to act in ways similar to humans, and explores the many implications that might follow from such an outcome.

For example, how much of our personal choice and freedom are we willing to give up to such systems?  If these systems make a mistake or fail, who becomes responsible for the consequences of their actions?  Beyond these broader philosophical questions, the report also highlights the need for strong quality-control regulations and the very real need to adapt our legal structures to incorporate these new technologies – a task that, in a judicial system based on agency and legal entities, will no doubt prove very tricky indeed.

I definitely recommend that everyone go and read the report; there is a lot of thought-provoking material in its relatively short length.  Personally, I think its most insightful point is that we tend to work on a lot of assumptions about these kinds of technologies, causing us to move forward without a proper ethical and socially aware foundation.

Beyond this, some aspects of it are quite frankly terrifying.  Although only mentioned in passing, the idea that such autonomous systems might begin to be used for policing or other forms of social control is deeply disturbing.  I do realise that many of the negative aspects of these parts of society stem from the poor decisions and judgment of individuals – police officers reacting with unnecessary force, for example – but removing that element of subjectivity in favour of automation will surely create just as many issues, if not more.  We can also be sure that such autonomous technologies will be used for military purposes, and indeed already are to a certain extent.

Where does utility end and sentience begin?

The final question, though, has to be: where does utility end and sentience begin?  These technologies are designed to make our lives safer and more convenient, but far greater ethical questions arise when we begin to create self-aware, subjective forms of intelligence.  After all, almost every science-fiction novel that features robots has highlighted the deep ethical and philosophical dilemmas that would result.

Yet it seems that many projects are forging ahead at great pace, with commercial purpose as their core motivation.  I would strongly suggest that we are about to cross one of the most profound social boundaries without much warning – if we create true sentience, how are we going to relate to it?  Religion has been asking questions like this about our possible creator since time immemorial, and now science is about to enter the same arena in a very prominent way.  It is going to bring about a paradigm shift the likes of which I don’t believe has ever been seen before.

