Nonetheless, in March 2019, Brenton Tarrant, a 28-year-old Australian, was able to activate Facebook Live and use it to broadcast his killing of more than 50 people over the course of 17 minutes. A user first flagged the video 12 minutes after the attack, yet it took nearly an hour, and a prompt from law enforcement, for the company to remove it. By that time, the content had reached, and inspired, countless others.
Similarly, in an echo of what happened in the build-up to violence in Myanmar, the Cambodian monk and human rights activist Luon Sovath was forced in August 2020 to flee government prosecution. A significant contributor to his exile, according to recent reports, was misinformation and disinformation shared on Facebook. The company took almost a month to remove the page and the four doctored videos that endangered and defamed him.
“As a company, you would think they would want to be more vigilant and not allow their platform to be misused,” said Naly Pilorge, the director of the Cambodian League for the Promotion and Defense of Human Rights, in a recent New York Times interview. “Facebook’s reaction has been like little drops from a sink, so late and so little.”
“The obvious solution is to slow down [user generated content],” says Sarah T. Roberts, a co-founder of UCLA’s Center for Critical Internet Inquiry. Roberts has repeatedly proposed that tech companies build tiered access to features such as livestreaming, so that users are vetted before they can use a particular product. So far, however, companies have resisted such reforms. “The idea,” she says, “is never seriously discussed.”
“Safety and security are not compromised in the name of profits,” says a Facebook spokesperson. “Mark has made that clear on previous occasions — noting our investment in security is so much that it will impact our profitability, but that protecting our community is more important than maximizing our profits.”
According to Sabrina Hersi Issa, integrated, “risk-resilient” technology and systems are critical. An activist and angel investor who focuses on the intersection of technology and human rights, Hersi Issa advises companies on how to create, staff, and fund inclusive systems that center human values alongside profits. “It’s often the case, when looking at risk, that tech companies see things as products and platforms,” she says. “When I look at a piece of tech, I ask myself, How is this piece of technology facilitating participation in democratic life? That reframing adds layers of complicated considerations that most technologists don’t consider.”
In the words of one recent security report, adding more products and practices “on top of an existing infrastructure” is no longer enough.
What might a more integrated approach to risk look like? Gathering input from various departments and a diverse set of stakeholders is important, but not sufficient on its own. Individuals who are tasked with assessing risk also need the agency and authority to be part of the final decision-making, experts say.
“Even if you have cross-functional teams, the voices that bring these concerns up are sometimes just never heard or heeded, or not given the same gravity,” says Leslie Miley, a former engineering executive at Slack, Twitter, Google, and Apple, and former chief technology officer at the Obama Foundation. “Because people don’t have that lived experience or they just don’t think it’s that big of a deal. This is something that I see regularly.”
In every company we investigated, engineers and product managers — groups overwhelmingly male — hold power. Meanwhile, the leadership of legal, policy, and trust and safety teams — sometimes referred to as “cleanup crews” — often skews female and is more diverse, as Sarah Emerson recently observed for OneZero.
“If I had a meeting with Trust and Safety, especially if it was a senior one, I’d be the only man in the room,” says Stamos of his time at Facebook. “Then if I had a meeting on the Infosec side, it would be all guys, or maybe one woman.”
The problem of occupational gender segregation is endemic in tech. “You’d be hard-pressed at Google not to work with a woman. Although I do know groups on Google that are 20, 30, 40, 50 people that have no women,” says a source at Google who requested anonymity. When asked for comment on this description, and after a series of conversations that included requests for detailed information about its approach to risk assessment and mitigation, a Google spokesperson replied via email: “As digital threats evolve, the lines that distinguish traditional security threats from platform and product abuse have become increasingly blurry. The combined expertise of our security and Trust & Safety teams, along with their years-long partnership, have enabled us to develop strong protections for our users.”
Disparities like these are also racialized. A study released this summer, based on 2016 data, found that 10 major tech companies in Silicon Valley had no Black women on staff at all. Three large tech companies had no Black employees in any position, the study found. During the past four years, industry analysts have noted the slow pace of change. In 2019, 64.9% of Facebook’s technical teams were composed of white and Asian employees, 77% of whom were male.
It’s not hard to find pernicious examples of how this homogeneity impacts risk perception and product development.
In 2017, Snapchat released a feature, Snap Maps, that displayed a user’s geolocation and then, based on settings, shared their whereabouts with others. That sparked outrage from advocates who recognized the risks the feature posed to children and targets of stalking and intimate partner abuse.
The following year, Lime, a scooter-share startup, faced a backlash over a security feature designed to protect its scooters. According to news reports, when people in Oakland, California, attempted to handle a scooter without first downloading the app and paying to use it, the scooter would announce, “Unlock me to ride me, or I’ll call the police.” Local activists and a politician — sensitive to issues of overpolicing and discrimination against Black individuals in the law enforcement system — protested, arguing that the announcement endangered Black citizens.
And in 2019, the developers of DeepNude, an app that used A.I. to virtually strip women (it did not work on photos of men), withdrew the product roughly 24 hours after releasing it, amid a widespread outcry. The development team tweeted that “the probability that people will misuse it is too high.”
Each of these failures, says Miley, underscores how lived experience shapes the way a person appreciates risk. Or doesn’t. Slovic is among the many researchers who have long documented the risks of having only white men in the room where decisions about safety and harm happen. “Most striking,” reads one finding from 1994, “white males tended to differ from everyone else in their attitudes and perceptions — on average, they perceived risks as much smaller and much more acceptable than did other people. These results suggest that sociopolitical factors such as power, status, alienation, and trust are strong determiners of people’s perception and acceptance of risks.”
To offset cognitive and structural problems, risk assessment requires “a total shift in intentions,” says Ellen Pao, a former VC, Reddit CEO, and now CEO of Project Include. “Security and privacy in tech at least have been male-dominated areas for as long as they’ve been around.”
Like Slovic, Pao believes the tech industry needs to embrace inclusivity and interdisciplinarity as central practices and give more power, status, and compensation to those tasked with traditionally feminized “soft” skills tied to safety and care.
Such a shift is toothless without accountability and leadership, however.
“There’s a lot of public criticism of Facebook that’s really accurate, but there are more people working on the safety of social media at Facebook than probably in the rest of the world combined,” says Stamos. “But then you have these problems where an executive decision is made that just ignores those people, and then it completely blows away all the good work they’re doing.”
When asked about his work with the tech sector, Slovic’s Decision Research co-founder Baruch Fischhoff, an academic who has served as an advisor on risk to a wide array of federal regulatory agencies, including the FDA, DHS, and the EPA, says, “It’s difficult to distinguish malice from ineptitude from cluelessness. I think risk assessment is doomed to fail unless the CEO is deeply invested in it.”
A Facebook spokesperson stressed that no matter how proactive the company’s risk assessment efforts might be, there would always be more work to do.
“In a perfect world, we’d have perfect information and be able to act on it to prevent and mitigate risk and harm,” they said. “We’ve tried to put in place robust risk assessment mechanisms and are always working to anticipate risks, and learn lessons along the way, but we are all operating with less than perfect information.”
More information, better algorithms, and enhanced technology do not, on their own, amount to “a total shift of intentions,” however. Arguably, that approach simply doubles down on the tech fixes already in place. Data and information gathered after the fact, however necessary as predictive inputs, are not sufficient. “Just imagine if these companies had said, ‘We’re going to hold off on launching this new feature or capability. We need another one and a half years,’” says designer, technologist, and professor Batya Friedman, co-author of the book Value Sensitive Design: Shaping Technology with Moral Imagination. “These systems are being deployed very, very fast and at scale. These are really, really hard problems.”
Moreover, critics say that major social media companies have kept outside researchers at arm’s length, resisting efforts to learn more about harmful content and how to prevent it. In 2018, for example, Twitter launched a study designed to promote civility and improve behavior on the platform, collaborating with Susan Benesch and Cornell’s J. Nathan Matias, founder of the Citizens and Technology Lab. The company ended up abandoning the project, citing coding errors. A follow-up study, which began last summer, ran for only a few days before, Benesch says, it was shut down internally without any explanation.
“They squandered a really good opportunity to see what could diminish hate and harassment online,” Benesch says. “What a foolish thing to just throw out the window.”
In a statement, Twitter acknowledged that staff turnover and shifting priorities had stymied some research projects, but said it remained committed to working with academics. “We strongly believe in the power of research to advance understanding of the public conversation happening online,” a Twitter spokesperson said.
In the meantime, companies and the public remain exposed.
In July 2020, authorities charged 17-year-old Graham Ivan Clark, from Tampa, Florida, with hacking the Twitter accounts of a number of prominent individuals, including Bill Gates, Elon Musk, and Barack Obama. It was an embarrassing failure for Twitter, whose security team hadn’t identified that employee accounts were vulnerable to what’s well known in security circles as social engineering: manipulating people into handing over credentials or access.
To Sarah T. Roberts, such problems are a direct consequence of the tech industry’s resistance to outside opinions and expertise, and highlight the need to embrace a more collaborative, transparent, and structural approach to risk assessment.
“We can’t afford the continual use of the public as unwitting beta testers,” Roberts says. “We’re here today because of 40 years of denigration of anything that doesn’t have an immediate industrial application.”
Safiya Noble put it like this: “Paradigm shifts have to be imagined in order to organize our economies and our societies differently. I look at Big Tech in a similar vein to Big Tobacco or Big Cotton and ask, what is the paradigm shift? Is it legitimate to have deep human and civil rights violations that cannot be separated from these technologies? Is that legitimate? To justify their existences? We know better. The challenge here is that the technologies are often rendered in such opaque ways that people can’t see the same level of exploitation that happened in other historical moments. Our job as researchers is to make visible the harms.”
The 17-year-old alleged Twitter hacker’s July arrest prompted many to ask how one of Silicon Valley’s most prominent companies could be so vulnerable. While Clark contemplated his $725,000 bail, Twitter was crafting yet another apology, rinsing and repeating the tech sector’s almost 20 years of after-the-fact mea culpas.
“Tough day for us at Twitter,” Jack Dorsey tweeted after the breach. “We all feel terrible this happened. We’re diagnosing and will share everything we can when we have a more complete understanding of exactly what happened. 💙 to our teammates working hard to make this right.”
There are reasons to doubt that tech leaders will slow down to adopt the kind of paradigm shift Noble describes on their own. Some 16 years after Facebook’s launch, calls are growing for government regulation of the tech industry and a renunciation of a business model that profits from the idea that content is “neutral” and platforms are objective, a model that, critics point out, cashes in on engagement and extremism.
During a congressional hearing in late July 2020, following a 13-month investigation by the House Judiciary antitrust subcommittee into the business practices of Apple, Facebook, Google, and Amazon, Rep. Hank Johnson, a Democrat from Georgia, questioned Zuckerberg about predatory market behavior. “You tried one thing and then you got caught, made some apologies, then you did it all over again, isn’t that true?” he said.
“Congressman,” replied Zuckerberg, “I respectfully disagree with that characterization.”
Less than two months later, Facebook apologized after it was revealed that the company had failed to remove incendiary and violent posts from the platform in relation to counter-protests in Kenosha, Wisconsin. In what Zuckerberg characterized as an “operational mistake,” contracted moderators unfamiliar with the “militia page” where the comments were made had ignored more than 450 user reports about the event. During one chaotic evening of protests soon after, two protesters were shot and killed.
The July hearings have been described as Big Tech’s “Big Tobacco Moment,” and it is clear that some form of regulatory control is not far down the road. The form it will take — an emphasis on consumer protections, market restrictions, or civil and human rights — remains to be seen.
“We may decide we’re not wise enough for certain kinds of tools, or certain kinds of companies,” says Friedman. Something like that happened last April, when the European Union’s High-Level Expert Group on A.I. published ethical guidelines for the development of artificial intelligence and suggested that certain technologies, such as facial recognition, “must be clearly warranted.” Some 20 U.K. councils recently drew the same conclusion, stopping their use of algorithms to make decisions about everything from welfare to immigration to child protection and acknowledging that the risk of harm was too high to justify their application. In September, the Portland City Council became the first in the U.S. to restrict the use of facial recognition not only by public agencies, but also by businesses that might seek to use the technology in public settings such as parks, malls, or grocery stores.
To date, harm mitigation, self-regulation, and even the withdrawal of a potentially harmful technology are voluntary (consider Microsoft’s recent commitments to “integrated security” and its refusal to sell facial recognition technology to U.S. police departments until Congress acts to establish limits) and, critics say, don’t go far enough.
“Look at Big Tobacco,” says Noble. “Look at fossil fuels. Where is the evidence of that working except in the interest of these companies?”
Big Tech’s lobbying investment is already equal to or higher than spending by big banks, pharmaceutical manufacturers, and the oil industry. The largest tech companies all have well-staffed offices in Washington, and yet they are not subject to formal federal risk regimes of the scope that governs those other industries. Google and Amazon are among those financing an institute dedicated to “continuing education” for regulators, teaching “a hands-off approach to antitrust.”
Will Silicon Valley be more risk-aware in the future? Only those in power can say. While calls for a more activist public are evergreen, reliance on the demonstrably diminishing power of the people is naive. “The government should be passing laws to discipline profit-maximization behavior,” said Marianne Bertrand, an economics professor at the University of Chicago’s Booth School of Business. “But too many lawmakers have themselves become the employees of the shareholders — their electoral success tied to campaign contributions and other forms of deep-pocketed support.”
Friedman cautioned against oversimplification and polemic. Not all risks can be known. And even the most robust risk assessments and content moderation protocols won’t prevent every instance of harm. “Remember, tool builders aren’t all-powerful, and better tools in and of themselves won’t change the reality of genocide, rape, suicide, and on and on and on,” she said. The goal, she argues, should be improvement, not perfection. “Design is about envisioning an alternative that’s better and moving toward that alternative. That often means breaking and restructuring current conditions.”
“I think we can say we haven’t worked hard enough to develop professional best practice in Big Tech,” Friedman says. Ignoring human values “is not a responsible option.”
There is a growing sense of urgency around these concerns in the lead-up to the U.S. presidential election. Twitter and Facebook have both implemented measures to address political disinformation on their platforms, such as flagging disinformation and blocking political ads immediately before the election, but these solutions may not go far enough, and the stakes could not be higher.
“Social media companies need to step up and protect our civil rights, our human rights, and our human lives, not to sit on the sideline as the nation drowns in a sea of disinformation,” said Rep. Mike Doyle, D-PA, during a House Energy and Commerce subcommittee hearing on disinformation in June. “Make no mistake, the future of our democracy is at stake, and the status quo is unacceptable.”
Meanwhile, on the other side of the world, Facebook will also have to face its past failures. Myanmar is scheduled to hold a general election this year, on November 8, only its fourth in six decades. It stands to be another major test for Facebook, and the company is working to make sure bad actors don’t use its platform to spread disinformation or otherwise meddle in the democratic process.
One of the Facebook employees we spoke with says the company has been monitoring what is happening on the ground in Myanmar and holding meetings with different groups across the country in order to better understand risks in context. Facebook told us that it is “standing up a multi-disciplinary team,” including engineers and product managers, focused on understanding the platform’s impact in countries in conflict and on developing and scaling language capacity and the company’s ability to review potentially harmful content.
That team, however, may not include representatives from the advertising department. Teams responsible for reviewing ads are separate from those who review user-generated content, said a Facebook spokesperson. “People in Facebook’s ad sales department are working to increase ad content and business ads in Myanmar. This is about trying to get more people on the platform,” a knowledgeable Facebook executive told us this spring, later adding, “There was even a time where people internally were proposing to turn off ads in Myanmar because of the upcoming election. Ultimately, that was not chosen, but it was discussed.”
They paused. “This seems really difficult and tone-deaf to [those of us] thinking about risk, because all of those things come down to our human reviewers, and we already have such little capacity. We have confirmed that we won’t be able to get a lot more capacity, we have a very high-stakes election coming up, and a history of real violence in this place. So what are we setting ourselves up for? Some kind of disaster, right?”