Comment
Volume 91.7
Network Harms
Andy Z. Wang
B.S. 2022, San Jose State University; J.D. Candidate 2025, The University of Chicago Law School.

I would like to thank Professor Omri Ben-Shahar for his tremendous guidance and advice. Thank you to the editors and staff of the University of Chicago Law Review for their tireless editing support. A special thank you to Eric Haupt, Jack Brake, Karan Lala, Tanvi Antoo, Luke White, Jake Holland, Bethany Ao, Emilia Porubcin, Benjamin Wang, and Anastasia Shabalov for their invaluable insights and contributions along the way.

For data, the whole is greater than the sum of its parts. There may be millions of people with the same birthday. But how many of them also have a dog, a red car, and two kids? The more data is aggregated, the more identifying it becomes. Accordingly, the law has developed safe harbors for firms that take steps to prevent aggregation of the data they sell. A firm might, for instance, anonymize data by removing identifying information. But as computer scientists have shown, clever de-anonymization techniques enable motivated actors to unmask identities even when data has been anonymized. Data brokers collect, process, and sell such data. Courts have traditionally calculated data-brokering harms without considering the larger data ecosystem. This Comment suggests that a broader conception is needed because the harm caused by one broker’s conduct depends on how other brokers behave. De-anonymization techniques, for instance, often cross-reference datasets to infer missing data. A motivated actor can also buy datasets from multiple brokers and combine them. This Comment then offers a framework for courts to account for these “network harms” in the Federal Trade Commission’s (FTC) recent lawsuits against data brokers under its Section 5 authority to prevent unfair acts or practices.
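To make the aggregation point concrete, the following sketch shows how a motivated actor might cross-reference two purchased datasets on shared quasi-identifiers, such as a birthday, dog ownership, car color, and number of children, to re-identify an “anonymized” record. The sketch is illustrative only: the broker labels, field names, and records are invented, and it is not drawn from the Comment or from any real dataset.

```python
# Hypothetical illustration of a linkage (de-anonymization) attack:
# an "anonymized" dataset from one broker is cross-referenced with an
# identified dataset from another broker on shared quasi-identifiers.
# All broker labels, field names, and records below are invented.

from collections import defaultdict

# Broker A sells "anonymized" records: names removed, quasi-identifiers kept.
broker_a_records = [
    {"birthday": "1990-04-12", "has_dog": True,  "car_color": "red",
     "kids": 2, "diagnosis": "asthma"},
    {"birthday": "1985-09-30", "has_dog": False, "car_color": "blue",
     "kids": 0, "diagnosis": "diabetes"},
]

# Broker B sells identified marketing records with overlapping attributes.
broker_b_records = [
    {"name": "Jane Doe", "birthday": "1990-04-12", "has_dog": True,
     "car_color": "red", "kids": 2},
    {"name": "John Roe", "birthday": "1985-09-30", "has_dog": False,
     "car_color": "blue", "kids": 0},
]

QUASI_IDENTIFIERS = ("birthday", "has_dog", "car_color", "kids")

def key(record):
    """Project a record onto the shared quasi-identifiers."""
    return tuple(record[field] for field in QUASI_IDENTIFIERS)

# Index Broker B's identified records by their quasi-identifier combination.
index = defaultdict(list)
for record in broker_b_records:
    index[key(record)].append(record["name"])

# Any "anonymous" record that maps to exactly one name is re-identified.
for record in broker_a_records:
    matches = index.get(key(record), [])
    if len(matches) == 1:
        print(f"Re-identified {matches[0]}: {record['diagnosis']}")
```

Because each combination of quasi-identifiers is unique in this toy example, a single match suffices to attach a name, and a sensitive attribute, to a supposedly anonymous record; the harm arises only when both brokers’ datasets are available, which is the intuition behind the network harms discussed in this Comment.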