Governing the Black Box of Artificial Intelligence

26 Pages · Posted: 27 Oct 2023 · Last revised: 9 Nov 2023

Marco Almada

Université du Luxembourg - Faculty of Law, Economics and Finance

Date Written: November 7, 2023

Abstract

Artificial intelligence (AI) is often described as a black box rendered opaque by the technical complexity of AI systems. Scholars and policymakers tend to see this opacity as a problem but often diverge on how to respond to it. Some propose that AI opacity must be addressed through technical means, such as explanation models that portray how a system arrives at a decision. Others question the value of such mediated explanations, arguing that the solution to the black-box problem requires mechanisms for disclosure of the inner workings of AI systems. In this chapter, I argue that neither approach can work without drawing elements from the other. To do so, I first show how the black-box metaphor is used as a conceptual model of AI in regulatory models, which leads to an emphasis on the technical sources of opacity connected to AI systems. Recent work on AI has developed various methods that can be used for the scientific scrutiny of these sources. Still, their use in non-scientific contexts is prone to various forms of manipulation. As an alternative, policy proposals often require stricter technical disclosure, for example through the use of inherently interpretable models or the publication of the source code for AI software, but these measures may yield little gain in understanding, if any. However, the current law on algorithmic transparency in the EU leaves room for a third interpretation: if the black box is perceived from a socio-technical perspective, disclosure requirements aimed at the development and use of AI systems may drastically reduce the possibilities for manipulation of explanations. Disclosure should therefore be seen as a precondition for, not an alternative to, explanations and other technical approaches to the black-box problem.

Keywords: AI Act, General Data Protection Regulation (GDPR), transparency, explainable AI, technical documentation

Suggested Citation

Marco Almada (Contact Author)

Université du Luxembourg - Faculty of Law, Economics and Finance ( email )

4, rue Alphonse Weicker
Luxembourg, L-2721
Luxembourg

Paper statistics

Downloads: 521
Abstract Views: 1,812
Rank: 107,884