In Hub4NGI D2.1, a significant issue regarding responsible AI was raised: whether and how an AI algorithm can be held accountable for its actions.
The issue of algorithmic accountability was also raised in the Net Futures debate "Legal, ethical and social issues in a software defined world", where participants agreed that a responsibility gap exists for AI systems.
Transparency is considered a key aspect of algorithmic accountability, as highlighted in the Net Futures 2017 debate session, and it is widely agreed that algorithms need to be able to explain their decisions.
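To make the idea of a decision that explains itself concrete, the sketch below shows a hypothetical linear scoring model that reports each feature's contribution to its outcome. The feature names, weights and threshold are assumptions invented purely for illustration, not part of any system discussed here.

```python
# A minimal sketch of a decision that explains itself: a hypothetical
# linear scoring model whose per-feature contributions are reported
# alongside the outcome. All names and weights are invented examples.

WEIGHTS = {"years_experience": 0.6, "relevant_degree": 1.2, "referral": 0.4}
THRESHOLD = 2.0  # assumed cut-off score for this illustration

def score_with_explanation(applicant):
    # Contribution of each feature = weight * feature value.
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    total = sum(contributions.values())
    decision = "shortlist" if total >= THRESHOLD else "reject"
    return decision, total, contributions

decision, total, contributions = score_with_explanation(
    {"years_experience": 3, "relevant_degree": 1, "referral": 0}
)
print(f"Decision: {decision} (score {total:.1f}, threshold {THRESHOLD})")
for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {value:+.1f}")
```

For a linear model such contributions are exact; producing equally faithful explanations for more complex models is precisely the open research challenge raised above.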
Furthermore, bias and discrimination can also be a serious issue: an algorithm may discriminate against people on the basis of their gender, race, ethnicity or religious affiliation, for example when filtering curricula vitae in recruitment. Transparency should therefore enable every concerned citizen to understand how their personal data is being used. Because bias can be built into an AI algorithm through its inputs and training data, the training and input processes should be transparent in a way that allows citizens to trust the AI system.
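One way to make such bias visible is to audit an algorithm's outcomes across demographic groups. The sketch below runs a simple demographic-parity check on an invented audit log from a hypothetical CV filter; the data and group labels are assumptions made for illustration, not a prescribed auditing method.

```python
# A minimal sketch of auditing a hypothetical CV filter for group bias:
# compare the rate at which each demographic group is shortlisted
# (a simple demographic-parity check). All data here is invented.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, shortlisted) pairs from an audit log."""
    shortlisted = defaultdict(int)
    total = defaultdict(int)
    for group, selected in decisions:
        total[group] += 1
        shortlisted[group] += int(selected)
    return {g: shortlisted[g] / total[g] for g in total}

# Invented audit log: (group label, was the applicant shortlisted?)
audit_log = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

rates = selection_rates(audit_log)
for group, rate in rates.items():
    print(f"{group}: selection rate {rate:.0%}")
gap = max(rates.values()) - min(rates.values())
print(f"Demographic parity gap: {gap:.0%}")  # a large gap suggests possible bias
```

Note that even this simple audit presupposes access to the algorithm's inputs and outcomes per group, which is itself the kind of transparency the discussion calls for.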
Algorithmic accountability tries to find answers to the following questions:
- How can a black-box algorithm be trusted?
- How can citizens opt out of algorithm use?
- To what extent do algorithms violate privacy?
- If an algorithm causes harm, who or what is responsible?
- Transparency: how can AI systems explain their decisions?
- How can bias be eliminated from AI systems?
Research and discussion involving multidisciplinary teams from the legal, sociological and technical domains are needed to provide answers to these ethical and legal questions.