Bay Area! I’ll be talking with Anna Wiener about Uncanny Valley, her brilliant new memoir of a life in tech, on February 4th at Manny’s in San Francisco. It’s our second-ever Interface Live event, and it would mean the world to me if you came to say hello and talk tech and democracy with us. Get your tickets here!
Last June, after a series of developments related to facial recognition and customer tracking, I warned that a Chinese-style social credit system was beginning to take shape in the United States. Among other things, a school district in western New York announced plans to deploy a facial-recognition system to track students and faculty; the Washington Post reported that airports had accelerated their use of facial-recognition tools; and the United States began requiring visa applicants to submit social media profiles along with their applications.
That column left open the question of what role American law enforcement might play in building a system that feels increasingly dystopian. But now, thanks to a superb investigation by Kashmir Hill, we know much more. Hill tells the story of Clearview AI, a small and mostly unknown company that has been scraping publicly available images — including billions from Facebook, YouTube, and Venmo profiles — and selling access to the police. She writes:
Until now, technology that readily identifies everyone based on his or her face has been taboo because of its radical erosion of privacy. Tech companies capable of releasing such a tool have refrained from doing so; in 2011, Google’s chairman at the time said it was the one technology the company had held back because it could be used “in a very bad way.” Some large cities, including San Francisco, have barred police from using facial recognition technology.
But without public scrutiny, more than 600 law enforcement agencies have started using Clearview in the past year, according to the company, which declined to provide a list. The computer code underlying its app, analyzed by The New York Times, includes programming language to pair it with augmented-reality glasses; users would potentially be able to identify every person they saw. The tool could identify activists at a protest or an attractive stranger on the subway, revealing not just their names but where they lived, what they did and whom they knew.
Hill’s report is chockablock with surprising details, and you should read it in full if you haven’t already. When it landed online Saturday, it galvanized discussion about how quickly tech companies are eroding privacy protections — and how Congress, despite years of talk about a national privacy law, has so far remained idle.
Some threads to pull on.
Is this legal? As Ben Thompson explains today in a paywalled post, LinkedIn sued a company that had scraped its public profiles in a fashion similar to Clearview. But LinkedIn lost the lawsuit, seemingly giving a green light to other companies seeking to do the same thing. Last year, Facebook told Congress that it gathers information about logged-out users to prevent this sort of scraping. But former Facebook chief security officer Alex Stamos explained to me that actually preventing that scraping is much easier said than done.
Is this the end of privacy? No, because laws protecting individual privacy can still be effective — even at the state level. On Tuesday, the Supreme Court declined to hear an appeal from Facebook on a case involving the company’s use of facial-recognition technology. Facebook used the tech to tag photos with user names, running afoul of an Illinois law requiring companies to get users’ consent first. As a result, Facebook will likely have to face a multi-billion-dollar class action lawsuit. A strong federal privacy law could make products like Clearview’s illegal, or regulate them to guard against some of the more obvious ways the technology will be misused.
Is our current freak-out about facial recognition ignoring the larger point? Surveying recent municipal efforts to ban use of the technology by law enforcement, Bruce Schneier argues persuasively that we need to take a broader view of the issue. We can be (and increasingly are) tracked in all manner of ways: by heart rate, gait, fingerprints, iris patterns, license plates, health records, and (of course) activity on social networks. The forces working to end individual privacy are a hydra, Schneier argues, and need to be dealt with collectively. He writes:
The point is that it doesn’t matter which technology is used to identify people. That there currently is no comprehensive database of heart beats or gaits doesn’t make the technologies that gather them any less effective. And most of the time, it doesn’t matter if identification isn’t tied to a real name. What’s important is that we can be consistently identified over time. We might be completely anonymous in a system that uses unique cookies to track us as we browse the internet, but the same process of correlation and discrimination still occurs. It’s the same with faces; we can be tracked as we move around a store or shopping mall, even if that tracking isn’t tied to a specific name. And that anonymity is fragile: If we ever order something online with a credit card, or purchase something with a credit card in a store, then suddenly our real names are attached to what was anonymous tracking information.
Regulating this system means addressing all three steps of the process. A ban on facial recognition won’t make any difference if, in response, surveillance systems switch to identifying people by smartphone MAC addresses. The problem is that we are being identified without our knowledge or consent, and society needs rules about when that is permissible.
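Schneier’s core mechanism — that anonymous tracking plus a single identified event de-anonymizes an entire history — is worth making concrete. Here is a toy sketch in Python with invented identifiers and data; it is not any real tracking system, just an illustration of why the anonymity he describes is fragile:

```python
from collections import defaultdict

# Toy tracker: events are keyed only by an anonymous identifier
# (a cookie, a MAC address, a hashed face embedding -- which
# technology it is doesn't matter, which is Schneier's point).
sightings = defaultdict(list)

def observe(anon_id, place):
    """Record that an anonymous identifier was seen somewhere."""
    sightings[anon_id].append(place)

# Weeks of "anonymous" tracking accumulate under one identifier...
observe("device-7f3a", "mall entrance")
observe("device-7f3a", "electronics store")
observe("device-7f3a", "pharmacy")

# ...until a single credit-card purchase links that identifier
# to a real name, retroactively de-anonymizing the whole history.
identities = {}

def link_purchase(anon_id, cardholder_name):
    identities[anon_id] = cardholder_name

link_purchase("device-7f3a", "Jane Doe")

name = identities["device-7f3a"]
print(f"{name} visited: {sightings['device-7f3a']}")
```

One purchase is enough: nothing in the movement log itself changes, but every past and future sighting under that identifier now belongs to a named person.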
Are privacy experts being needlessly alarmist? I try to ration my alarmism judiciously in this newsletter. But once you start looking for examples of companies using their data to build social-credit systems, you find them everywhere. Here, from earlier this month, is a tool Airbnb is developing to evaluate the risks posed by individual guests:
According to the patent, Airbnb could deploy its software to scan sites including social media for traits such as “conscientiousness and openness” against the usual credit and identity checks and what it describes as “secure third-party databases”. Traits such as “neuroticism and involvement in crimes” and “narcissism, Machiavellianism, or psychopathy” are “perceived as untrustworthy”.
Who will this tool discriminate against? And what recourse will those discriminated against have? These are two questions we should take into any discussion of technology like this.
Finally, is there a good Marxist gloss on all this? Sure. Here’s Ben Tarnoff with a provocative piece in Logic calling for a revival of Luddism to counter oppressive technology of the sort Clearview manufactures. (His piece predates Hill’s by a couple days, but the point stands.)
One can see a similar approach in the emerging movement against facial recognition, as some city governments ban public agencies from using the software. Such campaigns are guided by the belief that certain technologies are too dangerous to exist. They suggest that one solution to what Gandy called the “panoptic sort” is to smash the tools that enable such sorting to take place.
We might call this the Luddite option, and it’s an essential component of any democratic future. The historian David F. Noble once wrote about the importance of perceiving technology “in the present tense.” He praised the Luddites for this reason: the Luddites destroyed textile machinery in nineteenth-century England because they recognized the threat that it posed to their livelihood. They didn’t buy into the gospel of technological progress that instructed them to patiently await a better future; rather, they saw what certain technologies were doing to them in the present tense, and took action to stop them. They weren’t against technology in the abstract. They were against the relationships of domination that particular technologies enacted. By dismantling those technologies, they also dismantled those relationships — and forced the creation of new ones, from below.
Last June, writing about the rise of American social credit systems, I noted that they were developing with very little public conversation about them. The good news is that the public conversation has now begun. The question is whether advocates for civil liberties will be able to sustain that conversation — or to turn it into action.
Today in news that could affect public perception of the big tech platforms.
Trending up: European businesses say that using Facebook apps helped them generate an estimated €208 billion in sales last year, supporting about 3.1 million jobs. The news comes from a study Facebook commissioned with Copenhagen Economics.
Trending down: A new analysis of coordinated inauthentic behavior on Facebook shows the social network is still failing to keep up with the spread of disinformation and media manipulation on the platform. Analysts are still calling for Facebook to release more information about the coordinated campaigns to increase transparency in the process.
First off today, a call for help from our friends at Vox.com. The California Consumer Privacy Act gives Californians certain rights over the data businesses collect about them. Have you taken advantage of this new law? Fill out this form to help Vox’s reporting on what happens when you do: http://bit.ly/2NMn19o
Apple, Amazon, Facebook and Google took a public lashing at a congressional hearing on Friday. Some of their smaller rivals, including Sonos and Tile, pleaded with federal lawmakers to take swift action against Big Tech. Tony Romm at The Washington Post has the story:
The pleas for regulatory relief resonated with lawmakers, led by Rep. David N. Cicilline (D-R.I.), the chairman of the House’s top antitrust committee. “It has become clear these firms have tremendous power as gatekeepers to shape and control commerce online,” Cicilline said to open the session.
The hearing at the University of Colorado at Boulder put public faces on the pain caused by some of the largest tech companies in the United States. Cicilline and other lawmakers have sought to determine if federal antitrust law is sufficient to hold Silicon Valley leaders accountable — and whether changes to federal law are necessary to address anti-competitive concerns in search, smartphones, e-commerce and social networking.
“I think it’s clear there’s abuse in the marketplace and a need for action,” said Rep. Ken Buck (R-Colo.).
Four Facebook competitors are suing the social network for allegedly anticompetitive behavior. They’ve asked a judge to order Mark Zuckerberg to give up control of the company and force him to sell off Instagram and WhatsApp. (Robert Burnson / Bloomberg)
As seven University of Puerto Rico students prepare to go on trial in February for participating in a nonviolent protest more than two years ago, documents released to their defense attorneys reveal that Facebook granted the island’s Justice Department access to a trove of private information from student news publications. (Alleen Brown and Alice Speri / The Intercept)
Democratic candidates’ spending on Facebook ads shows how campaigns are plotting their way through the primary states. Since October, Pete Buttigieg has spent about a fifth of his overall Facebook budget on ads targeting voters in Iowa. Andrew Yang has spent more than 85 percent of his Facebook budget in Iowa and New Hampshire. (Nick Corasaniti and Quoctrung Bui / The New York Times)
Facebook took down a network of pages that were coordinating posts defending Robert F. Hyde, a figure who has become embroiled in the impeachment investigation. The pages described themselves as representing groups of supporters of President Trump. (Rebecca Ballhaus / The Wall Street Journal)
A Massachusetts judge ordered Facebook to turn over data about thousands of apps that may have mishandled its users’ personal information. The move was a clear rejection of the tech giant’s earlier attempts to withhold the key details from state investigators. (Tony Romm / The Washington Post)
Nationalist propaganda has been spreading on WhatsApp ahead of an upcoming election in Delhi. The propagandists appear to be targeting university students who oppose India’s new Citizenship Amendment Act, which is widely perceived to be anti-Muslim. (Anisha Sircar / Quartz)
A viral video titled “Truth From an Iranian,” which has amassed more than 10 million views across Facebook, Twitter, and YouTube, was created by a registered lobbyist who previously worked for a militia group fighting in a bitter civil war in Libya. The video praised the US drone strike that killed Iranian Gen. Qassem Soleimani. (Ryan Broderick and Jane Lytvynenko / BuzzFeed)
Joe Biden said in an interview last week that he wants to revoke one of the core protections of the internet: Section 230 of the Communications Decency Act. He appears to have deeply misunderstood what the law actually does. (Makena Kelly / The Verge)
Attorney General William Barr has intensified a long-running fight between law enforcement and technology companies over encrypted communications. Some FBI agents worry his forceful approach could sour valuable relationships they have fostered with tech companies. (Sadie Gurman, Dustin Volz and Tripp Mickle / The Wall Street Journal)
French President Emmanuel Macron and Donald Trump agreed to a truce in an ongoing digital tax dispute that impacts big tech companies. Paris offered to suspend down payments for this year’s digital tax and Washington promised to keep negotiating toward a solution rather than acting on a tariff threat. (Reuters)
Peter Thiel’s guiding philosophy is libertarianism with an abstract commitment to personal freedom but no particular affection for democracy, says Max Read. The PayPal co-founder and Facebook board member (and Clearview AI investor!) has wed himself to state power, but not because he wants to actually participate in the political process. (Max Read / Intelligencer)
The New York Times created a game to demonstrate how easy it is to give up personal information online. The only way to win the game is to hand over personal data. Relatable!
MediaReview wants to turn the vocabulary around manipulated photos and video into something structured. The proposed definitions allow images or videos to be “Authentic,” “MissingContext,” “Cropped,” “Transformed,” “Edited,” or “ImageMacro.” Sure, why not! (Joshua Benton / NiemanLab)
If we wanted media that was good for democratic societies, we’d need to build tools expressly designed for those goals, says Ethan Zuckerman, Director of the Center for Civic Media at MIT. Those tools probably won’t make money, and won’t challenge Facebook’s dominance—and that’s okay. (Ethan Zuckerman / Medium)
Researchers are challenging the widespread belief that screens are responsible for broad societal problems like the rising rates of anxiety and sleep deprivation among teenagers. In most cases, they say, the phone is just a mirror that reveals the problems a child would have even without the phone. Nathaniel Popper at The New York Times explains the findings:
The researchers worry that the focus on keeping children away from screens is making it hard to have more productive conversations about topics like how to make phones more useful for low-income people, who tend to use them more, or how to protect the privacy of teenagers who share their lives online.
“Many of the people who are terrifying kids about screens, they have hit a vein of attention from society and they are going to ride that. But that is super bad for society,” said Andrew Przybylski, the director of research at the Oxford Internet Institute, who has published several studies on the topic.
Facebook plans to hire 1,000 people in London this year, in roles like product development and safety. The company is continuing to grow its biggest engineering center outside the US despite fears about Brexit. (Paul Sandle and Elizabeth Howcroft / Reuters)
Facebook gave Oculus Go a permanent $50 price cut. (Sam Byford / The Verge)
Adam Mosseri, the head of Instagram, is the person in charge of Project Daisy — the photo sharing app’s initiative to take away likes on the platform. This profile reveals a current tension of Mosseri’s reign at Instagram: the man who is working to mostly eliminate likes really wants to be liked. (Amy Chozick / The New York Times)
Countless purveyors of bootleg THC vape cartridges are hawking their wares in plain sight on Instagram and Facebook. These illegal operators appear to be doing so with impunity, using the ease and anonymity of Instagram to reach a massive audience of young people who vape. (Conor Ferguson, Cynthia McFadden and Rich Schapiro / NBC)
Jack Dorsey asked Elon Musk how to fix Twitter during a video call last week. Musk said Twitter should start by identifying and labeling bots. (Kurt Wagner / Bloomberg)
Instagram is removing the orange IGTV button from its home page. Only 1 percent of Instagram users have downloaded the standalone IGTV app in the 18 months since it launched. (Josh Constine / TechCrunch)
Instagram is democratizing who can succeed in the dance industry, allowing nontraditional talent to break in. It’s no longer just about having the right look or connections. (Makeda Easter / Los Angeles Times)
Instagram has also revolutionized the way tattoo artists grow their businesses. Many artists estimate that more than 70 percent of their clients now come from the photo-sharing app. (Salvador Rodriguez / CNBC)
Snap CEO Evan Spiegel says TikTok could become bigger than Instagram. App intelligence company App Annie ranked TikTok just behind Instagram in terms of monthly active users in 2019. (Hailey Waller / Bloomberg)
TikTok’s parent company, ByteDance, is preparing a major push into games, the mobile arena’s most lucrative market. It’s a realm Tencent has dominated for over a decade. (Zheping Huang / Bloomberg)
More than 70,000 photos of Tinder users are being shared by members of an online cyber-crime forum, raising concerns about the potential for abusive use of the photos. Ominously, only women appear to have been targeted. (Dell Cameron and Shoshana Wodinsky / Gizmodo)
A new report suggests Bumble, the “by women, for women” dating app that is trying to keep women safer online, has little strategy for how to achieve its lofty goals. It also struggles with a cliquey internal culture, according to some employees.
Facebook apologized after its platform translated Xi Jinping, the name of the Chinese leader, as “Mr. Shithole” in English. The mistranslation caught the company’s attention when Daw Aung San Suu Kyi, the de facto civilian leader of Myanmar, wrote on her official Facebook page about Mr. Xi’s two-day visit to her country.
Xi is a brutal dictator who runs concentration camps that reportedly house more than 1 million people whose only crime is being Muslim. So I’d say “Mr. Shithole” suits him just fine.
Talk to us
Send us tips, comments, questions, and your Clearview results: email@example.com and firstname.lastname@example.org.