With AI becoming more ubiquitous, there is a growing need for means of redress when decisions made by machines are difficult to understand, unethical, or even illegal. A recent article from MIT Technology Review discusses the need for AI systems to explain their decisions without revealing secrets or stifling innovation.
The article discusses a possible solution from a group of Harvard researchers, built on a definition of explanation: when we ask for an explanation of a decision, "...we generally mean the reasons or justifications for that particular outcome, rather than a description of the decision-making process in general".