Discussion (1L): Race and Technology
Course Description
People often think of technology as value neutral: essentially objective tools that can be used for good or evil, particularly when questions of race and racial justice are involved. But the technologies we develop and deploy are frequently shaped by historical prejudices, biases, and inequalities, and they may therefore be no less biased and racist than the society in which they exist. In this discussion group, we will consider whether and how racial and other biases are present in a wide range of technologies, such as "risk assessment" algorithms used for bail, predictive policing, and other decisions in the criminal justice system; facial recognition systems; surveillance tools; algorithms for medical diagnosis and treatment decisions; online housing ads that result in "digital redlining"; programs that determine entitlement to credit or public benefits or purport to detect fraud by recipients; algorithms used in recruiting and hiring; digital-divide access gaps; and more. Building on these case studies, we will seek to articulate a framework for recognizing both explicit and subtle anti-Black and other biases in technology and for understanding them in the broader context of racism and inequality in our society. Finally, we will discuss how these problems might be addressed, including by regulators, legislators, and courts, as well as through significant changes in mindset and practical engagement by technology developers and educators.
Elements used in grading: Full attendance, reading of assigned materials, and active participation.
Class meets 4:30 PM-6:00 PM on Sept. 29, Oct. 13, Oct. 27, and Nov. 10.
Grading Basis
L03 - Law Mandatory Pass/Restricted credit/Fail
Minimum Units
1
Maximum Units
1
Course Repeatable for Degree Credit?
No
Course Component
Seminar
Enrollment Optional?
No