Algorithmic Fair Use
Legal governance and regulation are becoming increasingly reliant on data collection and algorithmic data processing. In the area of copyright, online protection of digitized works is frequently mediated by algorithmic enforcement systems intended to purge illicit content and limit the liability of YouTube, Facebook, and other content platforms. But unauthorized content is not necessarily illicit content. Many unauthorized digital postings may claim legitimacy under statutory exceptions such as the legal balancing standard known as fair use. Such exceptions exist to ameliorate the negative effects of copyright on public discourse, personal enrichment, and artistic creativity. Consequently, it may seem desirable to incorporate fair use metrics into copyright policing algorithms, both to protect against automated overdeterrence and to inform users of their compliance with copyright law. In this Essay, I examine the prospects for algorithmic mediation of copyright exceptions, warning that the design values embedded in algorithms will inevitably become inscribed in public behavior and consciousness. Thus, algorithmic fair use carries with it the very real possibility of habituating new media participants to its own biases, progressively altering the fair use standard it attempts to embody.