The Risk Makers. The nuclear, auto, and food industries… | by Catherine Buni and Soraya Chemaly | Sep, 2020

The violence in Myanmar, and Facebook’s apparent role in fueling it, was a disaster of unprecedented scale for the company, which had reportedly been warned for years about the platform’s part in expanding ethnic violence. The problems were routinely exacerbated, critics say, by insufficient planning, translation services, and content moderation in the country.

A source who worked with Facebook on Myanmar policy and safety, had knowledge of the issues as they developed, and requested anonymity due to professional concerns told OneZero and Type Investigations that the decision to add Burmese to Facebook was initiated by engineers on the fly, at a time when Myanmar was liberalizing its telecommunications systems after the 2010 elections. A project like this, this individual said, is a “feel-good thing that sounds like it could only be positive… ‘Hey, well, let’s just support as many languages as possible.’ That turns out to be a really negative thing, in this case.”

An enduring concern in Myanmar was that posts inciting violence against the Rohingya often went undetected — a problem highlighted in a 2018 Reuters investigation. For example, one Facebook post reading “Kill all the kalars [an anti-Rohingya slur] that you see in Myanmar; none of them should be left alive,” was mistranslated by Facebook’s algorithms into benign-sounding gibberish: “I shouldn’t have a rainbow in Myanmar.”

Not only was Facebook limited in its ability to monitor what was being said in Myanmar, but because the Zawgyi font encoding, rather than Unicode, predominated in the country, users could not easily type comments or posts in Burmese script, leaving citizens and activists to rely on memes, photography, and images to communicate. In addition, Benesch told us, at the time that she met with Facebook in 2014, the company was relying on an executive translation company in Dublin for its Burmese language needs. At that point, she says, “Facebook did not have a single person who could read Burmese on staff.” (A Facebook spokesperson denied that the company relied on an executive translation firm. According to a Reuters report, the company did not add any Burmese-speaking staff until 2015.)

At one point, prominent activists worked with Facebook, through Benesch, to develop a sticker pack, similar to today’s emoji options, to discourage online hate speech. “On the one hand, this was a tiny little thing scratching on the surface — we weren’t going to forestall genocide with a sticker pack, obviously, but it was a tiny thing that seemed better than nothing,” Benesch says.

It was not until October 2019 that Facebook integrated Unicode font converters into Facebook and Messenger. “These tools will make a big difference for the millions of people in Myanmar who are using our apps to communicate with friends and family,” the company stated.
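The mechanics of that fix are worth a brief illustration. The sketch below is a minimal example of Zawgyi-to-Unicode conversion, not Facebook’s implementation; it assumes the open-source myanmar-tools package for detecting Zawgyi-encoded text and PyICU with ICU’s built-in “Zawgyi-my” transform.

    # Minimal sketch of Zawgyi-to-Unicode conversion; not Facebook's implementation.
    # Assumes `pip install myanmar-tools PyICU` and an ICU build (58+) that ships
    # the "Zawgyi-my" transliterator.
    from myanmartools import ZawgyiDetector
    from icu import Transliterator

    detector = ZawgyiDetector()
    to_unicode = Transliterator.createInstance("Zawgyi-my")  # Zawgyi -> Unicode Burmese

    def normalize_burmese(text: str, threshold: float = 0.95) -> str:
        """Convert text to Unicode if it is very likely Zawgyi-encoded."""
        score = detector.get_zawgyi_probability(text)  # near 1.0 = almost certainly Zawgyi
        return to_unicode.transliterate(text) if score > threshold else text

A converter like this addresses only typing and display; it does nothing to help a platform understand, or moderate, what the converted text actually says.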

A Facebook spokesperson acknowledged the company’s shortfalls, but emphasized the particularities of the situation and steps that the company has taken since then. Myanmar is the only country in the world with a significant online presence that hasn’t fully adopted and standardized Unicode, they said. More than 100 native Burmese speakers now review content for Facebook, and the company has the ability to review content in local ethnic languages. The company uses automated systems to identify hate speech in 45 languages, including Burmese, and since 2018 it has identified and disrupted six networks engaging in misinformation campaigns in Myanmar.

“As we’ve said before, we were too slow to act on the abuse on our platform at the time, largely due to challenges around reporting flows, dual font issues, and language capability,” the Facebook spokesperson told OneZero and Type Investigations. “We know that there continue to be challenges in Myanmar, but today we are in a much better position to address those challenges than we were in 2013.”

Similar language issues will continue to be a challenge for Facebook and other platforms as they expand globally. “The scale is just something that we have to keep in mind,” Necip Fazil Ayan, Facebook’s director of A.I., told us in 2018, pointing out that Facebook worked in 70 languages and served 6 billion translations a day that year. “Our goal is to keep improving quality. And keep adding languages.”

As of 2019, Facebook officially supported 111 languages, with content moderation teams working to identify needs in hundreds more. It’s “a heavy lift to translate into all those different languages,” Monika Bickert, Facebook’s vice president of global policy and management, told Reuters in 2019.

Facebook declined to provide specific details about how the company decides to onboard and support new languages, but a Facebook spokesperson said the company considers “insights from a variety of sources, including policy input or regions where there is an increased potential for harm.”

Launching a product that could affect a community, a region, or even a whole country — particularly where history and political context are unfamiliar — without sufficient language resources can be dangerous. Critics argue that the problem is bigger than just a language barrier, and the solution isn’t simply better translations and machine learning. Instead, they say companies should take a more deliberate and reasoned approach when deciding to expand into parts of the world where they don’t fully understand the political and cultural dynamics.

“Adding a bunch of languages is a separate process from, ‘We’re going to move into a country and we’re going to specifically think about the structure of who is in that country,’” a former Facebook executive, who requested anonymity in order to speak candidly, tells OneZero and Type Investigations. “The disconnect between these processes is problematic.”

The move into Myanmar echoed other hasty product developments at Facebook. Facebook Live, which has been used to record and distribute suicides, rapes, child endangerment, murder, and hate crimes, was also reportedly rushed to market. Zuckerberg cheerfully introduced the service in April 2016. The new feature, he wrote in a Facebook post, would be “like having a TV camera in your pocket.”

At the time, the company had amassed nearly 2 billion monthly users and was managing a 24/7 stream of complaints and problems, including early warnings in 2015 that Cambridge Analytica was helping Ted Cruz’s presidential campaign by forming psychological profiles of potential voters, using data that had been mined from tens of millions of Facebook users.

It’s unclear from OneZero and Type Investigations’ reporting how much prelaunch risk assessment was done around Facebook Live. Stamos, the company’s chief security officer at the time, was told about the launch “a couple months” before the product was pushed to the Facebook app, reportedly in response to growing competitive pressure from Snapchat in particular.

It was not only the chief security officer who was reportedly kept in the dark until the release of the product was imminent; the policy and the trust and safety teams were similarly left out of the loop until a relatively late stage. According to a source familiar with the situation, the teams recommended that the product launch be delayed. But the suggestion was ignored.

A Facebook spokesperson denied the suggestion that the product rollout was done in haste. “When we build a product we always think both about the ways the product can be used for good in the world (the vision of the product) and the types of bad things that can happen through the product,” a Facebook spokesperson told OneZero and Type Investigations. “With Facebook Live we did just that.”

But the scene at Facebook headquarters in the days following the launch of Facebook Live was, in the words of a consultant familiar with the incidents and who requested anonymity for professional reasons, a “shitshow.” According to two sources with knowledge of the episode, the Facebook Live team worked around the clock to remove videos of suicides, rapes, and other acts of violence. By 2017, Facebook was struggling daily to contain the damage, as stories of live-broadcasted violence filled the news.

Relative to other companies, Facebook has been open to speaking publicly about how operations are evolving in response to increased awareness and understanding of the risks their products introduce to individuals and to society. When we met in 2018, Guy Rosen, Facebook’s VP of product management at the time, acknowledged the problems associated with the launch of Facebook Live. After “a string of bad things,” he said, “we realized we had to go faster” to address issues with the service. The company pulled together members of various teams, who dropped other priorities and spent two to three months in a lockdown, focused solely on resolving the issues with the new service.

Stamos says there were legitimate arguments over the best way to identify and prioritize problems like those faced by the Facebook Live team at the time. He sketched a simple X-Y coordinate grid, with circles of various sizes representing the prevalence of certain risks, the probability of them occurring, and their potential impact.

He and his team used such grids to evaluate the likelihood of certain harms — child sexual exploitation, terrorism, hate speech, and other abuses of Facebook products — and target their efforts accordingly. “We have a finite amount of resources. How are we going to apply those finite resources?” Stamos says. If safety teams had been looped into Facebook Live’s planning earlier on, his team might have been able to help prevent the problems that occurred.
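The grid Stamos describes is essentially a standard risk matrix. The sketch below is purely illustrative; the harm categories, scores, and scoring formula are hypothetical, not Facebook’s actual model or data. It shows only how prevalence, likelihood, and impact might be combined to rank where finite safety resources go.

    # Illustrative risk-matrix sketch in the spirit of the grid Stamos describes.
    # All categories and numbers are hypothetical, not Facebook data.
    risks = [
        # (name, prevalence 0-1, likelihood 0-1, impact 1-10)
        ("child_sexual_exploitation", 0.02, 0.90, 10),
        ("terrorist_content",         0.01, 0.70, 10),
        ("hate_speech",               0.15, 0.80,  7),
        ("spam",                      0.40, 0.90,  2),
    ]

    def priority(prevalence: float, likelihood: float, impact: float) -> float:
        """Simple expected-harm score: how common, how likely, how severe."""
        return prevalence * likelihood * impact

    # Rank harms so finite resources go to the largest expected harms first.
    for name, prev, like, imp in sorted(risks, key=lambda r: priority(*r[1:]), reverse=True):
        print(f"{name}: {priority(prev, like, imp):.2f}")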

In the two years following Facebook Live’s launch, Facebook would reevaluate its approach to risk. In May 2018, Greg Marra, then a Facebook director of product management overseeing News, spoke publicly for the first time about the company’s turn to an integrated approach to risk, intended to prevent harm, and about the creation of coordinated, cross-functional teams. “There is a lot that we need to do to coordinate internally, from Facebook, Instagram, Messenger, WhatsApp, and Oculus, and we need a standard approach to this,” he said.

Six months later, in November 2018, Zuckerberg also announced a significant shift in how Facebook planned to handle risk. “Moving from reactive to proactive handling of content at scale has only started to become possible recently because of advances in artificial intelligence — and because of the multi-billion dollar annual investments we can now fund,” he said. “For most of our history, the content review process has been very reactive.”

Integrating the work of teams across functions, to move from reactive to proactive, had become the company’s number-one focus. When Stamos left, under fractious circumstances involving disagreements over the handling of Russian interference in the 2016 U.S. election, his position as chief security officer was not filled. Instead, a Facebook spokesperson told The Verge, the company “embedded […] security engineers, analysts, investigators, and other specialists in our product and engineering teams to better address the emerging security threats we face.”

In 2018, Guy Rosen told us that Zuckerberg had come to him ready to invest substantially. “How much do you need?” he remembered Zuckerberg asking him. “This is the most important thing. We have got to staff this up.”

Over the past two years, the company has more than doubled the number of people working on safety and security, to roughly 35,000. It has formalized the use of cross-functional teams to address siloing and blind spots, and it has placed those teams within its Integrity organization under Rosen, who has taken on the title of VP of integrity. Additionally, in July 2019, Facebook created a new role, naming human rights activist Miranda Sissons as its director of human rights, an acknowledgment of its influence on conflicts and humanitarian crises. Her first official trip was to Myanmar.

In multiple interviews and email exchanges, Facebook executives and spokespeople described the workings of the cross-functional teams, organized into two primary categories: those who identify and mitigate risk, and those who focus on risks related to specific events, such as elections or crises that might prompt spikes in online activity.

Both types of teams conduct proactive and postmortem investigations of risks. They then work with policy and integrity staff. The integrity team — reportedly comprising some 200 employees and overseen by Naomi Gleit, vice president of product and social impact — is made up of cross-functional groups responsible for understanding the political and cultural dynamics and conditions of regions that Facebook operates in. These are the teams that Sophie Zhang often collaborated with.

“I think the public attention has helped motivate engineers to work on this,” Rosen says. “These are really hard problems. People haven’t done this yet.”

Internal sources described to us stubborn difficulties with the process, including a lack of transparency between functional areas and poor internal communication that led to duplicated effort. Facebook, however, pointed to products and features it said had improved as a result of the changes meant to make the company more proactive: the use of photo-matching technologies to fight child exploitation, sexual abuse, and the spread of terrorist content; the development of safety measures in Messenger Kids; improved safety, privacy, and authentication processes for Facebook Dating; and heightened election-related security. In 2019, the company said, its teams removed more than 50 operations engaged in “coordinated inauthentic behavior,” compared to one network that it identified as manipulative in 2017.
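For context, photo-matching systems of the kind referenced above generally compare perceptual hashes of new uploads against hashes of known abusive material; PhotoDNA and Facebook’s open-sourced PDQ work on this principle. The sketch below is a generic illustration using the open-source imagehash library, not Facebook’s system, and the file names are hypothetical.

    # Generic perceptual-hash matching sketch; not Facebook's PDQ or Microsoft's PhotoDNA.
    # Assumes `pip install ImageHash Pillow`; the file names are hypothetical.
    from PIL import Image
    import imagehash

    # In production, these hashes would come from a curated database of known material.
    known_hashes = {imagehash.phash(Image.open("known_banned_image.jpg"))}

    def matches_known_content(path: str, max_distance: int = 8) -> bool:
        """Flag an upload whose perceptual hash is close to any known bad hash."""
        candidate = imagehash.phash(Image.open(path))
        return any(candidate - known <= max_distance for known in known_hashes)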

Nonetheless, in March 2019, Brenton Tarrant, a 28-year-old Australian, was able to activate Facebook Live and use it to broadcast his killing of more than 50 people over the course of 17 minutes. A user first flagged the video 12 minutes after the live broadcast ended, and it took nearly an hour, after law enforcement contacted Facebook, for the company to remove it. But by that time, the content had reached, and inspired, countless others.

In an echo of what happened in the build-up to violence in Myanmar, Cambodian monk and human rights activist Luon Sovath was forced in August 2020 to flee government prosecution. A significant contributor to his exile, according to recent reports, was misinformation and disinformation shared on Facebook. The company took almost a month to remove the page and the four doctored videos that endangered and defamed him.

“As a company, you would think they would want to be more vigilant and not allow their platform to be misused,” Naly Pilorge, the director of the Cambodian League for the Promotion and Defense of Human Rights, said in a recent New York Times interview. “Facebook’s reaction has been like little drops from a sink, so late and so little.”

“The obvious solution is to slow down [user generated content],” says Sarah T. Roberts, a co-founder of UCLA’s Center for Critical Internet Inquiry. Roberts has repeatedly proposed that tech companies build in access tiers to functionalities, such as livestreams, so that users are vetted before they can use a particular product. So far, however, companies have resisted such reforms. “The idea,” she says, “is never seriously discussed.”

“Safety and security are not compromised in the name of profits,” says a Facebook spokesperson. “Mark has made that clear on previous occasions — noting our investment in security is so much that it will impact our profitability, but that protecting our community is more important than maximizing our profits.”

According to Sabrina Hersi Issa, integrated “risk resilient” technology and systems are critical. An activist and angel investor who focuses on the intersection of technology and human rights, Hersi Issa advises companies on how to create, staff, and fund inclusive systems that center human values alongside profits. “It’s often the case when looking at risk, that tech companies see things as products and platforms,” she says. “When I look at a piece of tech, I ask myself, How is this piece of technology facilitating participation in democratic life? That reframing adds layers of complicated considerations that most technologists don’t consider.”

In the words of one recent security report, adding more products and practices “on top of an existing infrastructure” is no longer enough.

