Advances in AI, robotics and so-called autonomous technologies warrant special reflection, as they can be oriented away from the common good and pose ethical, societal and legal challenges. The European Group on Ethics in Science and New Technologies (EGE) released a statement calling for the launch of a process that would pave the way towards a common, internationally recognised ethical and legal framework for the design, production, use and governance of artificial intelligence, robotics and ‘autonomous’ systems.
The EGE statement also argues that the current ethical frameworks governing autonomous systems are very narrow, often reducing human affairs to overly simplistic metrics, and that a wider ethical framework is needed. It cites examples from three application areas.
- In 2016, moral controversy arose when the first person was killed in a car crash while driving in ‘autonomous’ mode. Moral debates are now often limited to discussion of exceptional use cases framed as ‘Trolley Problem’ thought experiments. These cases concern dilemmas of unavoidable accidents in which the only available choice is between options associated with the loss of human lives. The central questions in that framing mainly concern how ‘autonomous’ systems should be programmed so that their deployment leads to morally acceptable outcomes in terms of lives lost versus lives saved. This neglects broader questions such as ‘which design decisions were taken in the past that have led up to this moral predicament’, ‘which values should inform design’, and ‘how should values in design be weighed in case of conflict, and by whom’.
- A major debate on autonomous weapons systems is taking place at the Conference on Certain Conventional Weapons in Geneva, concerning the moral acceptability of ‘autonomous’ weapons and the legal and moral responsibility for their deployment. However, attention also needs to be widened to questions such as the nature and meaning of ‘meaningful human control’ over these systems and how to institute morally desirable forms of control.
- Without human intervention or control from outside, smart systems today conduct dialogues with customers in online call centres, while the speech-recognition interfaces and recommender systems of online platforms, e.g. Siri, Alexa and Cortana, make suggestions to users. Beyond the straightforward questions of data protection and privacy, a wider ethical framework is needed, one that may ask whether people have a right to know whether they are dealing with a human being or with an AI artefact. Moreover, the question arises whether there should be limits to what AI systems can suggest to a person, based on a construction of that person's own conception of their identity.
While there is growing awareness of the need to address such questions, AI and robotics are currently advancing more rapidly than the process of finding answers to these thorny ethical, legal and societal questions. Current efforts represent a patchwork of disparate initiatives. As the EGE statement recognises, there is a clear need for a collective, wide-ranging and inclusive process that would pave the way towards a common, internationally recognised ethical framework for the design, production, use and governance of AI, robots and ‘autonomous’ systems.