How the Coronavirus Is Changing Facebook Moderation


The early promise of the web—that it would be a place for ingenuity and shared knowledge—has come glimmering back into view. Though just months ago we were a couple of solid years into a big-tech backlash, each day bringing new questions about the outsize powers of companies such as Facebook and Google and Apple, today we feel grateful to have them, and blessed to use their products for most of our waking hours.

“The coronavirus crisis is showing us how to live online,” The New York Times’ Kevin Roose argued, as states directed residents not to leave their homes. “After spending years using technologies that mostly seemed to push us apart, the coronavirus crisis is showing us that the internet is still capable of pulling us together,” he wrote. “Has coronavirus made the internet better?” the Times’ Jenna Wortham asked a couple of weeks later, concluding that it had.

It’s a tempting thought, but a premature one. Major platforms are struggling to adapt to enormous amounts of additional activity and strange new use cases. Moderation decisions that were difficult under the best of circumstances, with people responsible for them, are now being made by artificial intelligence. Platforms that had big user bases now have huge user bases, making the exploitation of security flaws far more worthwhile. Companies that were hoovering up our personal data when we spent eight hours a day on our phones are now in touch with our most intimate anxieties and desires around the clock. The internet feels better only because it’s all we have—and all the pressure we’re putting on it may, ultimately, make things worse.


As stay-at-home orders rolled out across the country, Facebook announced that it would send workers home, including content moderators, explaining that many of them would be unable to do their jobs at home for various reasons: the data they look at are sensitive and shouldn’t be pulled up on a home network; the jobs they perform are emotionally taxing and require on-site resources; and so on. Some human moderators are still working, but Facebook, along with other major internet platforms such as YouTube and Twitter, announced that it would be relying far more on artificial intelligence than before, which it acknowledged would lead to mistakes.

AI content moderation has a lot of limitations. It’s a blunt instrument solving a problem that has endless permutations, and it can produce both false negatives and false positives. A computer can deduce a lot about a video or a sentence: how many people have seen it, which IP addresses are sharing it, what it’s been tagged as, how many times it’s been reported, whether it matches with already-known illegal content. “What it’s not doing is looking, and itself making a decision,” says Sarah T. Roberts, an internet-governance researcher at UCLA. “That’s what a human can do.” As a result, moderation algorithms are likely to “over-police in some contexts, over-moderate in some contexts, and leave some other areas virtually uncovered,” Roberts told me. “All of the benefit of having the human moderation team, their cognitive ability, their sense-making ability, their ability to deal with a whole host of types of content, not just the ones for which they were expressly designed, and so on, get lost.”
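To make Roberts’s distinction concrete, here is a minimal sketch, in Python, of the kind of signal-based screening she describes: a system that decides based on metadata such as report counts, tags, and a hash match against a database of already-known illegal content, without ever looking at the material itself. The function names, thresholds, and data structures below are illustrative assumptions, not any platform’s actual pipeline.

# Hypothetical sketch of metadata-only automated moderation; not any
# platform's real system. Decisions rest on signals about the content
# (hash matches, report counts, tags), never on what it depicts or says.
import hashlib
from dataclasses import dataclass, field

# Assumed database of hashes of already-known illegal content.
KNOWN_BAD_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

REPORT_THRESHOLD = 25  # Arbitrary cutoff chosen for this sketch.

@dataclass
class ContentItem:
    raw_bytes: bytes
    report_count: int = 0
    tags: list = field(default_factory=list)

def auto_moderate(item: ContentItem) -> str:
    """Return 'remove', 'review', or 'allow' using metadata signals only."""
    digest = hashlib.sha256(item.raw_bytes).hexdigest()
    if digest in KNOWN_BAD_HASHES:
        return "remove"   # Exact match to previously identified content.
    if item.report_count >= REPORT_THRESHOLD or "graphic" in item.tags:
        return "review"   # Suspicious signals, but no understanding of the content.
    return "allow"        # Everything else passes, including novel harmful material.

# A brand-new harmful post with no reports and no known hash sails through
# (the false negative); a heavily reported but benign post gets flagged
# (the false positive).
print(auto_moderate(ContentItem(b"never-before-seen video bytes")))    # -> allow
print(auto_moderate(ContentItem(b"harmless meme", report_count=40)))   # -> review

Whether the inputs are hashes or learned classifier scores, the shape of the failure is the same: the system sees only the signals it was built to measure, which is why it over-polices some contexts and leaves others virtually uncovered.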




