It is important to have an ethical framework in place for artificially intelligent and autonomous machines. In the Net Future 2017 debate on "Legal, ethical and social issues in a software defined world", participants disagreed over the ethical considerations of AI. Some participants advocated that ethical frameworks and legal standards should be designed and built into the system itself. However, this raises the question of how a learning algorithm can be designed with such ethics in mind.
The counter-argument is that the application of technology, rather than the technology itself, should be the subject of legal and ethical scrutiny: it is the use of technology that should be held accountable to legal and ethical standards, and attempting to design ethics into the technology itself might constrain it.
Nevertheless, it remains an open question how ethics can be designed into AI technology and where these ethical and legal questions lie.
The question of ethics for autonomous machines is not new, and lessons can be learned from the societal impacts of robotics; a well-cited example from 1942 is Isaac Asimov's "Three Laws of Robotics":
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The concept of robots as tools was discussed at the EPSRC and AHRC Robotics Retreat held in September 2010, supporting the notion that it is the application to which the tool is put that needs ethical consideration, with legislation governing what is permissible.
The Hub4NGI D2.1 deliverable recommends that multi-disciplinary research and discussion be undertaken to answer legal and ethical questions surrounding AI and its applications, including:
- How should legislation be brought to effect on AI systems?
- What are the most appropriate regulations?
- Are AI systems ever likely to be legal entities?
- How can an AI system be incentivised to comply with the law, given that it does not understand the notion of penalties for non-compliance?
- Should ethics be designed into AI technology, or should ethics apply to the applications of AI technology?
- How can commitments be acquired from creators of AI technology to issue patches for safety-critical flaws over the long term?