Ethical algorithm design should guide technology regulation

Society expects people to respect certain social values when they are entrusted with making important decisions. They should make judgments fairly. They should respect the privacy of the people whose information they are privy to. They should be transparent about their deliberative process.


But increasingly, algorithms and the automation of certain processes are being incorporated into important decision-making pipelines. Human resources departments now routinely use statistical models trained via machine learning to guide hiring and compensation decisions. Lenders increasingly use algorithms to estimate credit risk. And a number of state and local governments now use machine learning to inform bail and parole decisions, and to guide police deployments. Society must continue to demand that important decisions be fair, private, and transparent even as they become increasingly automated.


Nearly every week, a new report of algorithmic misbehavior emerges. Recent examples include an algorithm for targeting medical interventions that systematically led to inferior outcomes for black patients, a resume-screening tool that explicitly discounted resumes containing the word “women” (as in “women’s chess club captain”), and a set of supposedly anonymized MRI scans that could be reverse-engineered to match to patient faces and names.


In none of these cases was the root cause malicious intent or obvious negligence on the part of the programmers and scientists who built and deployed the models. Rather, the algorithmic bias was an unanticipated consequence of following the standard methodology of machine learning: specifying some objective (usually a proxy for accuracy or profit) and algorithmically searching for the model that maximizes that objective using colossal amounts of data. This methodology produces exceedingly accurate models—as measured by the narrow objective the designer chooses—but often has unintended and undesirable side effects. The necessary solution is twofold: a way to systematically discover “bad behavior” by algorithms before it can cause harm at scale, and a rigorous methodology for correcting it.
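To make the point concrete, here is a minimal synthetic sketch (not drawn from any of the systems mentioned above; the groups, features, and noise model are invented for illustration) of how a model optimized purely for overall accuracy can quietly produce very different error rates across groups.

```python
# Hypothetical illustration: optimizing a single objective (accuracy) can hide
# group-level disparities. All data here is synthetic and invented for the sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, size=n)        # 0 = "group A", 1 = "group B" (hypothetical labels)

# Group B's feature is observed with more noise -- a common source of unequal error rates.
signal = rng.normal(0.0, 1.0, size=n)
noise = rng.normal(0.0, 1.0 + group, size=n)
X = (signal + noise).reshape(-1, 1)
y = (signal > 0).astype(int)

# Standard methodology: fit the model that maximizes a single accuracy-like objective.
model = LogisticRegression().fit(X, y)
pred = model.predict(X)

print("overall accuracy:", (pred == y).mean())
for g in (0, 1):
    mask = group == g
    print(f"group {g} error rate: {(pred[mask] != y[mask]).mean():.3f}")
```

Because the objective never mentions the groups, nothing in the training process flags the disparity; it becomes visible only when someone explicitly measures per-group error rates.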


Many algorithmic behaviors that we might consider “antisocial” can be detected via appropriate auditing—for example, explicitly probing the behavior of consumer-facing services such as Google search results or Facebook advertising, and quantitatively measuring outcomes like gender discrimination in a controlled experiment. But to date, such audits have been conducted primarily in an ad-hoc, one-off manner, usually by academics or journalists, and often in violation of the terms of service of the companies they are auditing.
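The sketch below illustrates the controlled-experiment style of audit described above, in deliberately simplified form: the audited service is replaced by a simulated stand-in (platform_shows_ad, invented for this example), and a two-proportion z-test checks whether outcome rates differ by the gender signal in the request.

```python
# Hedged sketch of a paired audit: submit matched requests that differ only in a
# protected attribute, then test whether outcome rates differ significantly.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def platform_shows_ad(profile_gender: str) -> bool:
    """Hypothetical black-box service being audited (simulated here)."""
    base_rate = 0.30 if profile_gender == "male" else 0.22   # simulated disparity
    return rng.random() < base_rate

n_trials = 5_000
shown = {g: sum(platform_shows_ad(g) for _ in range(n_trials))
         for g in ("male", "female")}

# Two-proportion z-test: is the difference in ad-delivery rates significant?
p1, p2 = shown["male"] / n_trials, shown["female"] / n_trials
p_pool = (shown["male"] + shown["female"]) / (2 * n_trials)
se = np.sqrt(p_pool * (1 - p_pool) * (2 / n_trials))
z = (p1 - p2) / se
p_value = 2 * norm.sf(abs(z))

print(f"rate(male)={p1:.3f}  rate(female)={p2:.3f}  z={z:.2f}  p={p_value:.2g}")
```

In a real audit, the simulated function would be replaced by calls to the live service, and the matched profiles would need to be identical in every respect other than the attribute under study.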


We propose that more systematic, ongoing, and legal ways of auditing algorithms are needed. Regulating algorithms is different from, and more complicated than, regulating human decision-making. It should be based on what we have come to call ethical algorithm design, which is now being conducted by a community of hundreds of researchers. Ethical algorithm design begins with a precise understanding of what kinds of behaviors we want algorithms to avoid (so that we know what to audit for), and proceeds to design and deploy algorithms that avoid those behaviors (so that auditing does not simply become a game of whack-a-mole).


Let us discuss two examples. The first comes from the field of algorithmic privacy and has already started to make the transition from academic research to real technology used in large-scale deployments. The second comes from the field of algorithmic fairness, which is in a nascent stage (perhaps 15 years behind algorithmic privacy), but is well-positioned to make fast progress.


DATA PRIVACY: ADVANCING TO A BETTER SOLUTION

Corporate and institutional data privacy practices unfortunately rely on heuristic and largely discredited notions of “anonymizing” or “de-identifying” private data. The basic hope is that, by removing names, social security numbers, or other unique identifiers from sensitive datasets, the datasets will be safe for wider release (for instance, to the medical research community). The fundamental flaw in such notions is that they treat the dataset in question as if it were the only one in the world; the resulting releases are thus highly vulnerable to “de-anonymization” attacks that combine multiple sources of data.


The first high-profile example of such an attack was conducted in the mid-1990s by Latanya Sweeney, who combined allegedly anonymized medical records released by the state of Massachusetts with publicly available voter registration data to uniquely identify the medical record of then-governor William Weld—which she mailed to his office for dramatic effect. As in this example, anonymization techniques often fail because of the wealth of hard-to-anticipate extra information that is out there in the world, ready to be cross-referenced by a clever attacker.
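A toy version of such a linkage attack, with entirely fabricated records standing in for the Massachusetts data, might look like the following: the “anonymized” medical table still carries quasi-identifiers (ZIP code, birth date, sex), and a simple join against a public voter file re-attaches names to diagnoses.

```python
# Toy re-identification ("linkage") attack in the spirit of Sweeney's study.
# Both tables are fabricated for this sketch.
import pandas as pd

medical = pd.DataFrame({          # names removed -- supposedly "anonymized"
    "zip":        ["02138", "02139", "02141"],
    "birth_date": ["1945-07-31", "1962-01-15", "1980-03-02"],
    "sex":        ["M", "F", "M"],
    "diagnosis":  ["hypertension", "asthma", "diabetes"],
})

voters = pd.DataFrame({           # public voter registration records
    "name":       ["W. Weld", "J. Doe", "R. Roe"],
    "zip":        ["02138", "02139", "02141"],
    "birth_date": ["1945-07-31", "1962-01-15", "1980-03-02"],
    "sex":        ["M", "F", "M"],
})

# Joining on the quasi-identifiers re-attaches names to "anonymous" diagnoses.
reidentified = medical.merge(voters, on=["zip", "birth_date", "sex"])
print(reidentified[["name", "diagnosis"]])
```

Sweeney later estimated that a large majority of the U.S. population can be uniquely identified by ZIP code, birth date, and sex alone, which is why removing names by itself provides so little protection.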


The breakthrough that turned the field of data privacy into a rigorous science occurred in 2006, when a team of mathematical computer scientists introduced the concept of differential privacy. What distinguishes differential privacy from previous approaches is that it specifies a precise yet extremely general definition of the term “privacy”: specifically, that no outside observer (regardless of what extra information they might have) should be able to determine better than random guessing whether any particular individual’s data was used to construct a data release.
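For reference, the standard formal statement from the differential-privacy literature (not quoted from this article) makes the “no better than random guessing” intuition quantitative: a randomized algorithm M is ε-differentially private if, for every pair of datasets D and D′ that differ in a single individual’s record, and for every set S of possible outputs,

```latex
\Pr[\,M(D) \in S\,] \;\le\; e^{\varepsilon}\, \Pr[\,M(D') \in S\,].
```

The smaller ε is, the closer the two output distributions must be, and hence the less any observer can infer about whether a particular person’s record was included. Crucially, the guarantee holds regardless of what side information the observer brings, which is exactly what defeats linkage attacks of the kind described above.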