Artificial intelligence, robotics and ‘autonomous’ systems can bring prosperity, contribute to well-being and help to achieve European moral ideals and socio-economic goals if designed and deployed wisely. At the same time, a range of increasingly urgent and complex moral questions has emerged about how to safeguard these technologies and steer their development in the right direction. In response, the European Group on Ethics in Science and New Technologies (EGE) released a statement calling for the launch of a process that would pave the way towards a common, internationally recognised ethical and legal framework to answer those moral questions and to guide the future design, production, use and governance of artificial intelligence, robotics and ‘autonomous’ systems.
As a first step, the statement proposes a set of fundamental ethical principles and democratic prerequisites, based on the values laid down in the EU Treaties and the EU Charter of Fundamental Rights, which may serve as a basis for the establishment of global standards, legislative action, and ethical guidelines for AI and autonomous systems. The proposed principles are summarized below:
- Human dignity: The principle of human dignity, understood as the recognition that human beings are inherently worthy of respect, must not be violated by ‘autonomous’ technologies. This means, for instance, that there are limits to the determinations and classifications concerning persons that may be made on the basis of algorithms and ‘autonomous’ systems, especially when those affected are not informed about them. It also requires that we as humans be aware of whether and when we are interacting with a machine or with another human being, and that we reserve the right to assign certain tasks to either the human or the machine.
- Autonomy: The principle of autonomy implies the freedom of the human being. All ‘autonomous’ technologies must therefore honour the human ability to choose whether, when and how to delegate decisions and actions to them.
- Responsibility: The principle of responsibility must be fundamental to AI research and application. ‘Autonomous’ systems should only be developed and used in ways that serve the global social and environmental good, as determined by outcomes of deliberative democratic processes. As the potential misuse of ‘autonomous’ technologies poses a major challenge, risk awareness and a precautionary approach are crucial.
- Justice, equity, and solidarity: AI should contribute to global justice and to equal access to the benefits and advantages that its related technologies bring. Discriminatory biases in the data sets used to train and run AI systems should be prevented, or detected, reported and neutralised at the earliest stage possible (a minimal detection sketch follows this list).
- Democracy: Key decisions on the regulation of AI development and application should be the result of democratic debate and public engagement. A spirit of global cooperation and public dialogue on the issue will ensure that such decisions are taken in an inclusive, informed, and farsighted manner. The right to receive education about, and access information on, new technologies and their ethical implications will help ensure that everyone understands the risks and opportunities and is empowered to participate in the decision-making processes that crucially shape our future.
- Rule of law and accountability: The whole range of legal challenges arising in the field of AI and autonomous systems should be addressed with timely investment in the development of robust solutions that provide a fair and clear allocation of responsibilities and efficient mechanisms of binding law. In this regard, governments and international organisations ought to increase their efforts to clarify who is liable for damages caused by the undesired behaviour of ‘autonomous’ systems. Moreover, effective harm-mitigation systems should be in place.
- Security, safety, bodily and mental integrity: The safety and security of ‘autonomous’ systems materialise in three forms: (1) external safety for their environment and users, (2) reliability and internal robustness, e.g. against hacking, and (3) emotional safety with respect to human-machine interaction. AI developers must take all three dimensions into account and test them rigorously before release. Special attention should also be paid to the potential dual use and weaponisation of AI, e.g. in cybersecurity, finance, infrastructure and armed conflict.
- Data protection and privacy: In an age of ubiquitous and massive collection of data through digital communication technologies, the right to protection of personal information and the right to respect for privacy are crucially challenged. Both physical AI robots, as part of the Internet of Things, and AI softbots that operate via the World Wide Web must comply with data protection regulations; they must not collect or spread data, or be run on sets of data, for whose use and dissemination no informed consent has been given.
- Sustainability: AI technology must be in line with the human responsibility to ensure the basic preconditions for life on our planet, the continued prospering of humankind and the preservation of a good environment for future generations. Strategies to prevent future technologies from detrimentally affecting human life and nature must be based on policies that give priority to environmental protection and sustainability.
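To make the bias-detection point in the justice principle more concrete, the following minimal sketch (not part of the EGE statement; all names and data are hypothetical) measures one widely used fairness indicator, the demographic parity gap: the difference in positive-outcome rates that a system produces across groups. A large gap would be flagged for reporting and review.

```python
from collections import defaultdict

def demographic_parity_gap(records):
    """Largest difference in positive-prediction rates between groups.

    records: iterable of (group, prediction) pairs, prediction in {0, 1}.
    Returns (gap, per-group positive rates).
    """
    positives = defaultdict(int)
    totals = defaultdict(int)
    for group, prediction in records:
        totals[group] += 1
        positives[group] += prediction
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical screening decisions: (applicant group, model decision)
decisions = [("A", 1), ("A", 1), ("A", 0),
             ("B", 1), ("B", 0), ("B", 0)]
gap, rates = demographic_parity_gap(decisions)
print(rates)               # {'A': 0.666..., 'B': 0.333...}
print(f"gap = {gap:.2f}")  # a large gap is flagged for review
```

Detecting a gap is only the first step the principle calls for; neutralising it requires interventions in data collection, model training or decision thresholds, together with the reporting obligations the statement envisages, none of which this sketch covers.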
The ethical considerations above can be used to shape the world of future AI technologies, and they should be construed as stimuli and opportunities for innovation, not as impediments and barriers.