EU regulations on algorithmic decision-making and a "right to explanation"
Citation: Bryce Goodman, Seth Flaxman (2016/06/28) EU regulations on algorithmic decision-making and a "right to explanation". 2016 ICML Workshop on Human Interpretability in Machine Learning
Download: http://arxiv.org/abs/1606.08813
Summary
The authors "summarize the potential impact that the European Union's new General Data Protection Regulation will have on the routine use of machine learning algorithms". The GDPR was adopted on 14 April 2016 and takes effect on 25 May 2018.
Article 22 (in the final GDPR numbering) prohibits decisions based solely on automated processing which significantly affect individuals, unless authorized by law and accompanied by safeguards, including the right to obtain human intervention. Profiling that results in discrimination on the basis of special categories of personal data is prohibited. (Note: this could be interpreted narrowly, prohibiting only the use of sensitive variables such as race, or broadly, also covering non-sensitive variables that in effect produce the same discriminatory results: a choice between ineffective and infeasible regulation.)
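A minimal sketch of that proxy problem on synthetic data (all variable names and numbers are hypothetical illustrations, not from the paper): a model trained without the sensitive attribute still produces group-dependent decisions through a correlated non-sensitive variable.

```python
# Sketch of proxy discrimination: the sensitive attribute is excluded from
# training, yet the model reconstructs it from a correlated feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)                  # protected attribute (NOT a model input)
postcode = group + rng.normal(0, 0.3, n)       # non-sensitive proxy, strongly correlated
income = rng.normal(0, 1, n)                   # legitimate feature
# Historical outcomes are biased against group 1.
y = (income + 0.5 - 1.0 * group + rng.normal(0, 0.5, n) > 0).astype(int)

X = np.column_stack([postcode, income])        # `group` itself is excluded
model = LogisticRegression().fit(X, y)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: positive-decision rate = {pred[group == g].mean():.2f}")
# The rates still differ by group: banning sensitive variables alone is
# ineffective, while banning every correlated variable is infeasible.
```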
"Articles 12-14 specify that data subjects have the right to access information collected about them, and also requires data processors to ensure data subjects are notified about the data collected [...] GDPR recitals state that a data subject has the right to “an explanation of the decision reached after [algorithmic] assessment.” This requirement prompts the question: what does it mean, and what is required, to explain an algorithm’s decision?"
Drawing on Burrell (2016), the paper identifies three barriers to transparency:
- Intentional concealment
- Inadequate technical literacy, making access to code insufficient
- Mismatch between "mathematical optimization in high-dimensionality characteristic of machine learning" and human interpretation
"[I]t stands to reason that an algorithm can only be explained if the the trained model can be articulated and understood by a human. It is reasonable to suppose that any adequate explanation would, at a minimum, provide an account of how input features relate to predictions"
"One promising avenue of research concerns developing algorithms to quantify the degree of influence of input variables on outputs, given black-box access to a trained prediction algorithm"
Concluding paragraph: "Above all else, the GDPR is a vital acknowledgement that, when algorithms are deployed in society, few decisions if any are purely “technical”. Rather, the ethical design of algorithms requires coordination between technical and philosophical resources of the highest caliber. A start has been made, but there is far to go. And, with less than two years until the GDPR takes effect, the clock is ticking."
Theoretical and Practical Relevance
As is typical of writing on algorithmic regulation and transparency, the value of public audit, which requires the freedom to inspect, modify, and share both the program and the data, seems severely underestimated. Machine learning is, of course, hard to interpret, but that supports an alternate conclusion: all barriers to introspection, and to trial by all manner of experts, should be removed.
Blog summary: https://blog.acolyer.org/2017/01/31/european-union-regulations-on-algorithmic-decision-making-and-a-right-to-explanation/