Document Type

Article

Publication Date

2021

Abstract

Proponents of artificial intelligence (“AI”) transparency have carefully illustrated the many ways in which transparency may be beneficial: to prevent safety and fairness problems, to promote innovation, and to provide effective recovery or support due process in lawsuits. However, impediments to these transparency goals, described as opacity, or the “black-box” nature of AI, pose significant challenges to achieving them.

An undertheorized perspective on opacity is legal opacity, in which competitive and often discretionary legal choices, coupled with regulatory barriers, create opacity. Although legal opacity is not unique to AI, the combination of the technical opacity of AI systems with legal opacity amounts to a nearly insurmountable barrier to transparency goals. Forms of legal opacity, including trade secrecy status, contractual provisions that impose confidentiality and data-ownership restrictions, and privacy law, independently and cumulatively make the black box substantially more opaque.

The degree to which legal opacity should be limited or disincentivized depends on the sector and the transparency goals of the specific AI technology, which may dramatically affect people’s lives or may simply be introduced for convenience. This Response proposes a contextual approach to transparency: legal opacity may be limited where the individual or patient benefits, where data sharing and technology disclosure can be incentivized, or, in a protected manner, where transparency and explanation are necessary.
