Facebook is not responding to the alarms its employees have raised regarding the treatment of minorities and vulnerable populations in the developing world. None of these revelations should come as a surprise. Sean Parker, the founding president of Facebook, warned us of this very thing in 2017. He admitted that Facebook’s goal is to “consume as much of your time and conscious attention as possible.” The social-media giant is guilty of “exploiting a vulnerability in human psychology.” Parker said that he, Mark Zuckerberg, and Kevin Systrom, co-founder of Instagram, “understood this consciously. And we did it anyway.”
The Ethics of AI and the Big-Five Tech Giants
How could this happen? In the absence of strict government regulations, the men behind the algorithms—“billionaire overlords,” as Parker refers to them and himself—at Facebook, Google, Apple, Amazon, and Microsoft have been left to police themselves. With growth as their only goal, these tech giants have no motivation to slow their roll, take stock, or ask whether they are doing the right thing. If they had wanted to do the right thing, Facebook would have scrapped Instagram for Kids, its product targeting children under thirteen, as soon as executives saw the results of their own research on what Instagram does to young teenagers. The company would have chucked its Ray-Ban Stories smart glasses in the bin when journalists pointed out how these glasses could result in the exploitation of women, children, and other vulnerable populations. (If you’re unconvinced regarding Zuckerberg’s growth goals, check out Ben Grosser’s film Order of Magnitude. In it, Grosser compiles every public instance of Zuckerberg talking about becoming more and bigger.)
Are these tech giants considering the ethical impacts of their products? In 2016, Facebook, Google and its DeepMind subsidiary, Amazon, Microsoft, and IBM teamed up to create the nonprofit Partnership on AI. Apple joined the group in 2017. According to The Verge, the group’s two key goals are to educate the public about AI and collaborate on AI best practices, with input from researchers, philosophers, and ethicists. If we overlook the fact that the group should address ethical considerations before selling the benefits of AI to the general public, they have nevertheless been conducting solid research and asking the right questions. Mustafa Suleyman, co-founder of DeepMind and co-chair of the Partnership on AI, has said that the Partnership is committed to an open dialogue about the ethics of AI, while also admitting that the group is not a regulatory body and cannot enforce rules on its member organizations. Even so, they’ve published a set of tenets to guide the organization. Sadly, the only way to find these tenets today is to dive into the Internet Archive’s Wayback Machine. Tenet 3 states, “We are committed to open research and dialogue on the ethical, social, economic, and legal implications of AI.” However, it’s impossible to engage in an open dialogue with the guiding principles of these organizations hidden from public view.
The Health Ethics & Policy Lab published a report in 2018 that mapped and analyzed the state of ethical AI in private companies, research institutions, and governments around the globe. This report describes convergence around five of eleven key principles: transparency; justice and fairness; nonmaleficence (a technocratic do-no-harm policy); responsibility; and privacy. Of the 84 ethical principles and AI guidelines the report identified, organizations in the United States published 20. For-profit companies, including Google, IBM, Intel, and Microsoft, have written many of these guidelines, while Amazon, Apple, and Facebook were conspicuously absent from the list. Amazon executives say they are doing work in this area but are uninterested in discussing it with anyone outside of Amazon, which is too bad, because the company does not have a great track record for developing its own AI solutions. For example, just look at its sexist AI recruiting tool. Facebook claims to have an in-house ethics team, but it, too, is keeping mum about its work.