What COVID-19 Revealed About the Internet


All these developments have taken place under pressure from Washington and Brussels. In hearings over the past few years, Congress has criticized the companies—not always in consistent ways—for allowing harmful speech. In 2018, Congress amended the previously untouchable Section 230 of the Communications Decency Act to subject the platforms to the same liability that nondigital outlets face for enabling illegal sex trafficking. Additional amendments to Section 230 are now in the offing, as are various other threats to regulate digital speech. In March 2019, Zuckerberg invited the government to regulate “harmful content” on his platform. In a speech seven months later defending America’s First Amendment values, he boasted about his “team of thousands of people and [artificial-intelligence] systems” that monitors for fake accounts. Even Zuckerberg’s defiant ideal of free expression is an extensively policed space.

Against this background, the tech firms’ downgrading and outright censorship of speech related to COVID-19 are not large steps. Facebook is using computer algorithms more aggressively, mainly because concerns about the privacy of users prevent human censors from working on these issues from home during forced isolation. As it has done with Russian misinformation, Facebook will notify users when articles that they have “liked” are later deemed to have included health-related misinformation.

But the basic approach to identifying and redressing speech judged to be misinformation or to present an imminent risk of physical harm “hasn’t changed,” according to Monika Bickert, Facebook’s head of global policy management. As in other contexts, Facebook relies on fact-checking organizations and “authorities” (from the World Health Organization to the governments of U.S. states) to ascertain which content to downgrade or remove.

What is different about speech regulation related to COVID-19 is the context: The problem is huge and the stakes are very high. But when the crisis is gone, there is no unregulated “normal” to return to. We live—and for several years, we have been living—in a world of serious and growing harms resulting from digital speech. Governments will not stop worrying about these harms. And private platforms will continue to expand their definition of offensive content, and will use algorithms to regulate it ever more closely. The general trend toward more speech control will not abate.

Over the past decade, network surveillance has grown in roughly the same proportion as speech control. Indeed, on many platforms, ubiquitous surveillance is a prerequisite to speech control.

The public has been told over and over that the hundreds of computers we interact with daily—smartphones, laptops, desktops, automobiles, cameras, audio recorders, payment mechanisms, and more—collect, emit, and analyze data about us that are, in turn, packaged and exploited in various ways to influence and control our lives. We have also learned a lot—but surely not the whole picture—about the extent to which governments exploit this gargantuan pool of data.


