Document Type
Article
Publication Date
2021
Abstract
As algorithms have become more complex, privacy and ethics scholars have urged transparency in artificial intelligence (AI) to ensure safety and prevent discrimination. International statutes increasingly mandate that algorithmic decision-making be explained to affected individuals when such decisions affect their legal rights, and U.S. scholars continue to call for transparency in automated decision-making.
Unfortunately, modern AI technology does not function like traditional, human-designed algorithms. Because alternative intellectual property (IP) protections are unavailable and the algorithms themselves are often dynamically inscrutable, algorithms created by AI are frequently protected as trade secrets, a status that prohibits sharing the details of the secret lest the disclosure destroy it. Furthermore, dynamic inscrutability, the true "black box," makes these algorithms secret by definition: even their creators cannot easily explain how they work. When statutes mandate explanation, requiring organizations to explain their AI algorithms may be tremendously difficult, expensive, and undesirable from an IP perspective. Despite this challenge, it may still be possible to satisfy safety and fairness goals by focusing instead on AI system and process disclosure.
This Article first explains how AI differs from historically defined software and computer code. It then explores the dominant scholarship calling for opening the black box and the reciprocal pushback from organizations likely to rely on trade secret protection, a natural fit for AI's dynamically inscrutable algorithms. Finally, using a simplified information fiduciary framework, I propose an alternative for promoting disclosure while balancing organizational interests through public AI system disclosure and black-box testing.
Recommended Citation
Charlotte A. Tschider, Beyond the "Black Box", 98 Denv. L. Rev. 683 (2021).