Towards a shared Ethical Framework for Artificial Intelligence

Mar 26 2018

Current efforts to find answers to the ethical, societal and legal challenges posed by AI and autonomous technologies and to orient them for the common good represent a patchwork of disparate initiatives. This underlines the need for a collective, wide-ranging and inclusive process of reflection and dialogue. A statement by the European Group on Ethics calls for the launch of such a process that would pave the way towards a common, internationally recognised ethical and legal framework for the design, production, use and governance of artificial intelligence, robotics, and ‘autonomous’ systems.

Below is a brief overview of the various standalone initiatives on this subject, followed by the EGE statement calling for a collective initiative:

  1. Some of the most prominent initiatives towards the formulation of ethical principles regarding AI and ‘autonomous’ systems have stemmed from industry, practitioners and professional associations, such as the IEEE's policy paper on Ethically Aligned Design, the ITU's Global Summit AI for Good, and the ACM's Conference on AI, Ethics, and Society. Within the private sector, companies such as IBM, Microsoft and Google's DeepMind have established their own ethics codes on AI and joined forces in creating broad initiatives such as the Partnership on AI and OpenAI, which bring together industry, non-profit and academic organisations.
  2. One of the leading initiatives calling for the responsible development of AI was launched by the Future of Life Institute and culminated in the ‘Asilomar AI Principles’. This list of 23 fundamental principles to guide AI research and application has been signed by hundreds of stakeholders, predominantly scientists, AI researchers and industry representatives. A similar participatory process was launched through the Forum on the Socially Responsible Development of Artificial Intelligence held by the University of Montreal, which produced a first draft of a potential ‘Declaration for a Responsible Development of Artificial Intelligence’. The draft is publicly accessible on an online platform where all sectors of society are invited to comment on the text.
  3. A worldwide debate on the military use of AI has been initiated by the UN and the meetings for the Convention on Certain Conventional Weapons (CCW, Geneva), where a majority of the High Contracting Parties endorsed the so-called principle of ‘meaningful human control for LAWS’ stating that ‘Autonomous Weapons Systems that require no meaningful human control should be prohibited’ (General Assembly UN, 2016). The UN has also established a special research institute in The Hague to study the governance of Robotics and AI (UNICRI).
  4. At the national level, initiatives are uneven: some countries have prioritised the development of rules for robots and artificial intelligence, going so far as to adopt legislation (e.g. to regulate self-driving cars on public roads), whereas others have yet to address the matter. This lack of a harmonised European approach has prompted the European Parliament to call for a range of measures to prepare for the regulation of advanced robotics, including the development of a guiding ethical framework for the design, production and use of robots.

Against this backdrop, the EGE draws attention to the risks inherent in uncoordinated, unbalanced approaches to the regulation of AI and ‘autonomous’ technologies. Regulatory patchworks may give rise to ‘ethics shopping’, with AI development and exploitation relocating to regions with lower ethical standards. Allowing the debate to be dominated by certain regions, disciplines, demographics or industry actors risks excluding a wider set of societal interests and perspectives. Current discussions also sometimes lack an overview of the ‘autonomous’ technologies likely to be studied, developed and implemented in the next decade, leaving a blind spot in regulatory foresight.

The EGE calls for wide-ranging, systematic public engagement and deliberation on the ethics of AI, robotics and ‘autonomous’ technology, and on the set of values that societies choose to embed in the development and governance of these technologies, proposing a set of ethical principles as a starting point. This process should take the form of an inclusive, far-reaching societal debate that draws on diverse perspectives, in which those with different expertise and values can be heard.

The EGE calls upon the European Commission to investigate which existing legal instruments are available to effectively deal with the problems discussed in this statement and whether new governance and regulatory instruments are required.

The EGE calls for the launch of a process that paves the way towards a common, internationally recognised ethical and legal framework for the design, production, use and governance of artificial intelligence, robotics, and ‘autonomous’ systems. This process, in which the EGE stands ready to play its part, should provide a platform for joining together the diverse global initiatives outlined above.

The EGE in its statement urges the European Union to place itself at the vanguard of such a process and calls upon the European Commission to launch and support its implementation.
