Procurement as Policy: Administrative Process for Machine Learning
At every level of government, officials contract for technical systems that employ machine learning—systems that perform tasks without explicit instructions, relying instead on patterns and inference. These systems frequently displace discretion previously exercised by policymakers or individual front-line government employees with an opaque logic that bears no resemblance to the reasoning processes of agency personnel. Yet because agencies acquire these systems through government procurement processes, they—and the public—have little input into, or even knowledge about, their design, or how well that design aligns with public goals and values.
In this talk I explore specific ways that design decisions inherent in machine-learning systems are substantive policy decisions, and how the procurement process, which today dominates their adoption, limits their full consideration. Specifically, these embedded policies receive little or no agency or outside expertise beyond that provided by the vendor: no public participation, no reasoned deliberation, and no factual record. Design decisions are left to private third-party developers: government responsibility for policymaking is abdicated. I argue that when policy decisions are made through system design, processes suitable for substantive administrative determinations should be used: processes that demand reasoned deliberation reflecting both technocratic concerns about the informed application of expertise and democratic concerns about political accountability. Finally, I sketch ways that agencies might garner relevant technical expertise and overcome problems of system opacity, satisfying administrative law’s technocratic demand for reasoned expert deliberation. I also propose institutional and engineering design solutions to the challenge of policymaking opacity: process paradigms to ensure the “political visibility” required for public input and political oversight, and “contestable design”—design that exposes value-laden features and parameters and provides for iterative human involvement in system evolution and deployment. Together, these institutional and design approaches further both administrative law’s technocratic and democratic mandates.
Bio: Deirdre K. Mulligan is a Professor in the School of Information at UC Berkeley, a faculty director of the Berkeley Center for Law & Technology, a co-organizer of the Algorithmic Fairness & Opacity Working Group, an affiliated faculty member of the Hewlett-funded Berkeley Center for Long-Term Cybersecurity, and a faculty advisor to the Center for Technology, Society & Policy. Mulligan’s research explores legal and technical means of protecting values such as privacy, freedom of expression, and fairness in emerging technical systems. Her book, Privacy on the Ground: Driving Corporate Behavior in the United States and Europe—a study of privacy practices in large corporations in five countries, conducted with UC Berkeley Law Prof. Kenneth Bamberger—was recently published by MIT Press.
Part of the CDAC Winter 2021 Distinguished Speaker Series:
Bias Correction: Solutions for Socially Responsible Data Science
Security, privacy, and bias in the context of machine learning are often treated as binary issues: an algorithm is either biased or fair, ethical or unjust. In reality, there are tradeoffs between deploying technology and opening up new privacy and security risks. Researchers are developing innovative tools that navigate these tradeoffs by applying advances in machine learning to societal issues without exacerbating bias or endangering privacy and security. The CDAC Winter 2021 Distinguished Speaker Series will host interdisciplinary researchers and thinkers exploring methods and applications that protect user privacy, prevent malicious use, and avoid deepening societal inequities — while diving into the human values and decisions that underpin these approaches.