Some have said that we are living in the age of algorithms. Netflix uses an algorithm to recommend videos. Facebook has an algorithm that displays the posts and advertisements you’re most likely to interact with. Google’s algorithm serves different search results to different people, based on prior Web traffic. Amazon’s algorithm makes recommendations for things you might want to buy. Match’s algorithm identifies people with whom you are likely to be romantically compatible. We have smart thermostats that use algorithms to learn users’ climate-control preferences. My 11-year-old son uses an algorithm to solve Rubik’s cubes in under a minute.
An algorithm is really nothing more than a mathematical model or formula that accepts inputs, applies calculations, and provides output. Cathy O’Neil, the author of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, introduces the idea of an algorithm as being similar to making a family dinner, taking into account the various likes, dislikes, and quantities her family needs. Algorithms can be extremely useful in automating and understanding large, complex sets of information—for example, searching for a document on your hard disk. But they can also be harmful, as several articles about YouTube have noted, describing how its algorithm tends to lead viewers down rabbit holes of conspiracy theories, propaganda, and salacious content.
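The inputs-calculations-output framing above can be made concrete with a minimal sketch. The feature names and weights below are invented for illustration; no real recommender works from just two signals.

```python
# A minimal illustration of an algorithm as O'Neil frames it:
# inputs -> calculation -> output. The two features and their
# weights are hypothetical, not drawn from any real system.

def recommend_score(history_overlap: float, recency: float) -> float:
    """Score how likely a viewer is to enjoy a title (0 to 1)."""
    # A weighted sum of two input signals; the weights encode the
    # model-maker's assumptions about what matters.
    return 0.7 * history_overlap + 0.3 * recency

titles = {"Title A": (0.9, 0.2), "Title B": (0.4, 0.8)}
ranked = sorted(titles, key=lambda t: recommend_score(*titles[t]), reverse=True)
print(ranked)  # ['Title A', 'Title B']
```

Even in this toy form, the essential point is visible: the output is only as sound as the inputs chosen and the weights assigned to them.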
O’Neil is a mathematician turned data scientist. Throughout her career, she has seen the potential for applying data-analysis methods to understand both financial markets and the justice system. However, she has also seen how these methods can lead to poor outcomes. While the media have recently discussed how the use of algorithms in social media and search can create information bubbles, the use of algorithms may have even broader and more costly implications. O’Neil describes such harmful algorithms as weapons of math destruction, or WMDs. While mathematical models can be wonderfully descriptive and predictive tools for understanding the world around us, their misuse can distort the truth in ways that can be brutal—and often impossible for individuals to overcome. Thus, they become WMDs.
A key factor in the usefulness of algorithms and their use in forecasting outcomes is that they continually improve. Data scientists monitor inputs to and responses from their models, learning from them as they work. An algorithm itself is not intelligent. Algorithms require human intervention to correct their fundamental errors. However, a key attribute of WMDs is that humans do not, in fact, correct them. So rather than their informing decisions or leading to new discoveries, they tend to create feedback loops that have terrible consequences. A good illustration of this is a flash crash, in which automated trading platforms that use algorithms take in flawed information and discount the value of certain equities, leading other algorithms to sell off those equities and creating mutually reinforcing cycles of selling that crash the system. In April of 2013, both the S&P 500 and Dow Jones experienced a flash crash that wiped out well over $100 billion of value. Because these errors involve money and the results are visible to the public, markets tend to identify and correct such problems quickly.
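The mutually reinforcing cycle described above can be sketched in a few lines. This is a toy simulation, not market data: the number of traders, the 2% discount, and the starting price are all invented to show the loop's shape.

```python
# A toy sketch of a flash-crash feedback loop: each automated
# trader reacts to the price drops the others cause, and no human
# intervenes to question the inputs. All numbers are illustrative.

def simulate_crash(price: float, traders: int, rounds: int) -> list:
    history = [price]
    for _ in range(rounds):
        # Each trader sees the falling price and sells, pushing the
        # price down a further 2% and triggering the next trader.
        for _ in range(traders):
            price *= 0.98
        history.append(round(price, 2))
    return history

print(simulate_crash(100.0, traders=3, rounds=4))
```

The point of the sketch is that each algorithm behaves "rationally" given its inputs, yet the aggregate behavior is a runaway decline, exactly the kind of uncorrected loop O'Neil describes.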
Title: Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy
Author: Cathy O’Neil
Formats: Paperback, Kindle, and Audible
Publisher: Broadway Books
Published: September 5, 2017
Why Do We Build Weapons of Math Destruction?
Numbers don’t lie. Or at least that’s what we have been told to believe from an early age. Writing, speaking, and more qualitative ways of describing the world can contain biases. Numbers are not affected by emotions, opinions, prejudices, or all the other stuff that creates ambiguity and confusion in our world. We take for granted that two plus two will always equal four, no matter what the education level, race, or relative wealth of the person performing the calculation.
However, it is not possible to relate many behaviors to a single variable or even a small number of variables that are specific to a person. As a result, the makers of WMDs rely on proxies as indicators of variables they cannot measure directly. As O’Neil describes in her book, it is the use of proxies that may be at the root of WMDs. These proxies are ultimately a misuse of—or more likely a misattribution of—factors that influence outcomes. For example, using a person’s credit score as a proxy for the work ethic or trustworthiness of a prospective employee introduces a fundamental error into the decision to hire someone.
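The credit-score example can be sketched as code. The applicants, scores, and threshold below are invented; the point is only that the model observes the proxy, never the trait it supposedly stands for.

```python
# A toy sketch of proxy misuse in hiring: a credit score stands in
# for "trustworthiness," a trait the model never observes directly.
# All applicant data and the threshold are invented for illustration.

applicants = [
    {"name": "A", "credit_score": 720, "score_fell_from_medical_debt": False},
    {"name": "B", "credit_score": 540, "score_fell_from_medical_debt": True},
]

def passes_screen(applicant: dict, threshold: int = 600) -> bool:
    # The model sees only the proxy, so an applicant whose score
    # fell because of medical debt is rejected all the same.
    return applicant["credit_score"] >= threshold

hired = [a["name"] for a in applicants if passes_screen(a)]
print(hired)  # ['A']: B is screened out regardless of the reason
```

Note that the "why" behind the low score is present in the data yet plays no role in the decision, which is precisely the misattribution O'Neil warns about.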
In an early example, O’Neil describes the IMPACT system that the Washington DC school district used. The intention of this system was to weed out underperforming teachers. This scoring system was the primary determinant of whether the district terminated or retained a teacher. In the example O’Neil provides, a new fifth-grade teacher named Sarah Wysocki received excellent evaluations from administrators, students, and parents. However, the IMPACT evaluation identified her as an ineffective teacher, leading to her termination, along with 205 other teachers.
A key input to the IMPACT evaluation appeared to be students’ performance on standardized tests and whether that performance was continuously improving. The hypothesis seems to have been this: if a cohort of students ends one school year with relatively high average grades and standardized-test scores, lower scores and grades the following year would indicate that their current teacher was performing at a lower level.
However, this hypothesis ignores several factors that really impact student performance. A student may have an off year, during which parents lose their employment or get a divorce. Perhaps an illness is affecting a loved one, distracting the child’s attention from schoolwork. The potential for bad actors also exists, as seems to have been the case in Wysocki’s dismissal.
Subsequent investigations revealed that there were suggestive erasures on students’ standardized-test submissions, which seemed to indicate that some more experienced teachers—recognizing the incentives of the IMPACT evaluations and the stakes for their jobs—had modified students’ answers to improve the students’ scores and, with them, their own performance ratings and bonuses. Unfortunately, there was no way to account for this in the IMPACT evaluation, and there was no recourse for teachers to appeal their dismissal.
Credit Scores as Proxies
WMDs ultimately create feedback loops that punish individuals for their circumstances, not their behavior. But in attempting to predict behavior, they often apply proxies in the absence of real information. The use of credit scores to predict a borrower’s inclination to repay debts makes sense. However, ever-increasing numbers of employers are using credit scores as an indicator of an employee’s trustworthiness or level of personal responsibility. Similarly, many automotive insurers use credit scores in determining whether to offer drivers coverage and what premiums they’ll pay.
However, people’s repayment histories may have very little to do with their ability to perform a job or whether they’re safe drivers. This approach is particularly troubling because, even within the domain of loan repayment, a borrower’s credit history does not necessarily reveal trustworthiness or responsibility. Two-thirds of all personal bankruptcies in the US are not the result of credit-card debt, mortgage debt, or even student-loan debt. The leading cause of personal bankruptcy is actually healthcare debt. The high cost of medical treatment in the US causes many to reduce their debt repayments, exhaust their savings, and ultimately, declare bankruptcy. This can destroy their credit score and reduce their ability to obtain or keep gainful employment once they’re physically able to work, which perpetuates their inability to repay their debt.
Criminal Justice and Recidivism
Another example of the use of proxies in making decisions is the theory of the broken-windows method of neighborhood policing that its proponents apply to criminal-justice issues. O’Neil asserts that this is, in fact, a misuse of Kelling and Wilson’s original research findings from 1982. Law-enforcement officials—and, more likely, politicians—apply the broken-windows method, assuming that property upkeep is a proxy for a neighborhood’s level of crime. This thinking has led to a zero-tolerance mentality, in which law-enforcement officials assume that minor crimes lead to serious crimes. Based on the assumption that the disrepair of buildings in poorer neighborhoods indicates an atmosphere of lawlessness, law enforcement waged a crusade against nuisance crimes in the name of societal order. As a result, they arrested greater numbers of minors, prosecuting them harshly for nonviolent offenses such as jumping subway turnstiles.
The impacts of applying these assumptions across the nation are tragic. O’Neil’s assessment of crime-prediction models such as PredPol is detailed and thoughtful. In brief, these predictive models attempt to deploy law enforcement more efficiently, by having officers patrol neighborhoods where crimes happen. The trouble with this approach is that there are generally two types of crimes: violent crimes such as homicide or arson, which occur rarely, and nuisance crimes such as minor drug possession or open alcohol containers, which occur frequently. The problem arises when systems include both of these types of crimes in their predictive model. Even though it might be true that nuisance crimes occur more frequently in poorer neighborhoods, they do comparatively little harm. However, their sheer frequency can skew these models.
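The skew described above can be sketched as a comparison between two ways of ranking neighborhoods for patrols. The neighborhood names and report counts are invented; the sketch shows only how high-frequency, low-harm offenses dominate a volume-based model.

```python
# A toy sketch of how mixing nuisance and violent crimes skews a
# patrol-allocation model: frequent, low-harm offenses dominate the
# counts. Neighborhood names and numbers are invented.

reports = {
    "Northside": {"violent": 2, "nuisance": 90},
    "Southside": {"violent": 3, "nuisance": 5},
}

# Model A: rank by raw report volume, as a model fed both crime
# types effectively does.
by_volume = max(reports, key=lambda n: sum(reports[n].values()))

# Model B: rank by violent crime alone.
by_violence = max(reports, key=lambda n: reports[n]["violent"])

print(by_volume, by_violence)  # Northside Southside
```

The two models send patrols to different neighborhoods, and the volume-based model then generates more nuisance arrests where it patrols, feeding its own inputs, which is the feedback loop the next paragraph describes.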
In an earlier time—for example, that of Otis Campbell on “The Andy Griffith Show”—public drunkenness was simply considered antisocial behavior, and people treated miscreants with sympathy and a warning. But predictive models and zero-tolerance policies have led to higher conviction rates, and individuals have been caught in downward spirals. Younger offenders are put into the prison system, creating a feedback loop in which they learn from hardened criminals. Then, when they get out, they have fewer prospects because of their criminal record, so eventually they recommit. Meanwhile, arrest rates in these neighborhoods go up and up, reinforcing the model. Plus, access to appropriate legal representation also affects the outcomes of these cases: it may be readily available to more affluent families, while those in poorer neighborhoods might have to make do with an overworked public defender.
The use of such proxies is indicative of another truth: people usually make decisions based on their biases and feelings, not facts. The use of credit scores, broken-windows policing, and similar proxies gives people a sense of moral superiority. They let middle-class voters feel better about themselves. Some think petty criminals reside in poorer areas because they deserve to. Of course, this bias ignores the reality that the most heinous crimes—such as murder and rape, as well as white-collar crimes that often involve theft at massive scales—have very little correlation to geography. But there is one factor that does have a high correlation to geography: race.
Marketing of For-Profit Colleges
Internet marketing is big business and, with a large sample set to work with, consumer-facing industries are now deploying data science with speed, scale, and accuracy. Among O’Neil’s examples, some for-profit colleges and universities have developed significant math models, which let them target and enroll the most vulnerable prospects. Rather than pursuing high-performing students for competitive academic programs, some for-profit colleges specifically target people because of their lower socioeconomic status, poor credit score, and other deficits. In fact, some recruiting manuals advise recruiters to employ such tactics in targeting prospective students:
“Welfare Mom / Kids. Pregnant Ladies. Recent Divorce. Low Self-Esteem. Low Income Jobs. Experienced a Recent Death. Physically / Mentally Abused. Recent Incarceration. Drug Rehabilitation. Dead-End Jobs—No Future.”
They often find these prospects through queries in search engines, which are complicit in serving up advertising that steers people to exploitative institutions. For example, a person might search for information on payday loans, and those keywords might also direct the person to improve their life by enrolling in a for-profit college.
But it gets worse! Assuming these prospects do enroll and graduate, the degrees from these unaccredited colleges are generally worth about as much as a high-school diploma, in terms of providing qualifications for employment. But it keeps getting even worse! In one case from O’Neil’s book, a for-profit university charged its students $68,000 for an online paralegal degree. However, many traditional colleges charge less than $10,000 for similar, accredited programs.
But the point of these colleges isn’t to provide a quality education. It’s to deceive people who are short on options and saddle taxpayers with government-backed debt. For many, the degree they earn won’t improve their employment prospects, but the debt they incur does hurt their credit, and they’ll never be able to discharge that debt because of the bankruptcy laws in the US. However, for the executives at for-profit colleges, this is not a problem. They typically get paid a multiple of what their counterparts at public universities earn.
Building Weapons of Math Destruction
O’Neil presents three essential elements of a WMD, as follows:
Opacity—To illustrate the principle of opacity, O’Neil describes baseball statistics as a contrasting example of a transparent system. Anyone who is familiar with the book or movie Moneyball can recognize that baseball is a game with sufficient data to make predictions of performance. The key thing here is that everyone—management, players, and fans—knows what is being measured and how, and they all know the goal. A baseball player and his manager may quibble over the relative importance of some measurements—for example, bases stolen as opposed to strikeouts—and there are some scouts who rely on their established biases—“He looks like he can hit.” However, ultimately, everyone can see the same data and agree on the outcomes. WMDs offer no such visibility into their inputs or scoring.
Scale—In the book’s example describing teacher evaluations, a key problem was the sheer number of people who were affected. In the past, even though a teacher might have a bad boss, the damage that one person’s poor management could do was limited. Now, school districts can scale up bad management practices that measure the wrong data and apply them across an entire organization. Even worse, the consulting firms that create these models sell them to other organizations. Similarly, the impact of an overzealous police officer patrolling one block is limited. But apply overzealous law enforcement to a nation and it’s oppressive. Ironically, while the impact of WMDs scales up dramatically, the people whom these practices affect are isolated and left to suffer alone, because of the opacity and rigidity of these models and the misplaced trust that society puts in them.
Damage—In virtually every case, we see a destructive, self-reinforcing feedback loop. Once people get caught in a WMD, it becomes harder for them to exit the downward spiral. People who find themselves caught up in these systems are likely to end up poorer and less healthy than before.
Weapons of Math Destruction describes how organizations design algorithms without giving any consideration to their impacts on people. One effect of such algorithms is that they increase the wealth disparity in our societies. Nobody should mistake this comment as a call for socialism. There will always be haves and have-nots. However, the broken models that attempt to predict recidivism or assess employability, or that misdirect people into a low-quality education, have the effect of trapping people in ever-worse circumstances, while the makers of these models accrue more and more capital by scaling their deployment.
When organizations apply these models to individuals and society, these WMDs do not predict outcomes—they condemn people to them. What is certain is that Weapons of Math Destruction destroy lives and ultimately reduce society’s capacity because they prevent people from fulfilling their potential.
It is difficult to do justice to O’Neil’s book in a review. Anyone who is concerned about why things are the way they are and why people believe what they do should read this book.
Ben began his career in 1999, when businesses were just beginning to recognize the World Wide Web as a valuable tool. Prior to his appointment at Kent State, he held positions as a UX designer and UX manager. He has worked with global teams and a variety of consulting firms to deliver research and design that improved digital experiences for customers. He has also developed his organizations’ analytics discipline to track the performance of digital properties and identify opportunities for improvement. Ben’s company TheoremCX is an innovation firm that provides customer-focused solutions. He has developed solutions and corporate workshops for a variety of organizations around the world, including Eaton, General Electric, Knoch Corporation, and Orange S.A. Ben is the chairperson of UX Akron, a nonprofit professional network serving Summit and Portage Counties, as well as all of Northeast Ohio.