Stop Saying Ukraine Is Winning the Information War


More than a month on from Russia’s invasion of Ukraine, suggesting that the wheels have fallen off Vladimir Putin’s propaganda machine has become commonplace. Russia’s playbook is outdated and has failed to adapt; Moscow has been stunned either by Ukrainian President Volodymyr Zelensky’s great skill as a media operator or by the viral ferocity of Kyiv’s own digital fighters.

As the researcher Sinan Aral wrote in The Washington Post, “Ukraine and its partisans are running circles around Putin and his propagandists in the battle for hearts and minds, both in Ukraine and abroad.” Even Russia’s lurch back into Soviet-style information control seems to be nothing but a retreat from the gleeful, postmodern, fact-defying dance of digital propaganda in which it had been so masterful. My personal social-media feeds stand as testament to how each of these observations might, individually, be true: They feature wall-to-wall Zelensky, Arnold Schwarzenegger, and farmers towing tanks. I know absolutely no one who thinks the invasion is anything but an outrage.

Despite this, it’s far too early to declare information victory. If anything, this apparent consensus—that Ukraine has won the online war—might be obscuring where battles over the invasion are really raging.

My pro-Ukrainian online world was punctured on March 2, when I saw two hashtags trending on Twitter: #IStandWithPutin and #IStandWithRussia. Very quickly, disinformation researchers began to see suspicious patterns associated with the hashtags, arguing that both bots and “engagement farming” were being used. A deep dive on the profile picture used by one account propagating the hashtags led to a Polish Facebook group dedicated to dating scams. At least in part, the early signs indicated that a deliberate, if hidden, effort was under way to make these hashtags trend.

The pro-invasion hashtags were enough to make my colleagues and me take notice. By March 9, just under 10,000 Twitter accounts had shared one of the hashtags at least five times, an especially engaged, active "core." So we decided to do our own research into these accounts: Who was behind them? And what were they doing?

The way we typically do this on Twitter is by placing accounts on a map based on who they follow, retweet, or like—so-called engagement graphs. These allow researchers to determine how genuine a set of accounts might be, and whether they seem to be working in some measure of coordination. But a new generation of powerful models has emerged, allowing us to go further, analyzing how these accounts use language in a much more general sense—turns of phrase, hashtags, and really everything else, too. This opens up new opportunities to understand how accounts interact on social-media platforms.
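For readers who want a concrete picture, the sketch below shows the general idea in Python, using the networkx library on a handful of made-up retweet counts. It is a toy illustration of an engagement graph and community detection, not our actual pipeline.

```python
# A minimal sketch of an "engagement graph": accounts become nodes, and
# retweets/likes/follows become weighted edges. Community detection then
# surfaces groups of accounts that interact far more with one another
# than with everyone else. The input rows here are hypothetical.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# (source_account, target_account, interaction_count) -- placeholder data
engagements = [
    ("account_a", "account_b", 12),
    ("account_a", "account_c", 3),
    ("account_d", "account_b", 9),
    ("account_e", "account_f", 7),
]

G = nx.DiGraph()
for src, dst, weight in engagements:
    G.add_edge(src, dst, weight=weight)

# Treat the graph as undirected for community detection and pull out
# clusters of accounts that tend to engage with the same targets.
communities = greedy_modularity_communities(G.to_undirected(), weight="weight")
for i, community in enumerate(communities):
    print(f"cluster {i}: {sorted(community)}")
```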

We fed the last 200 tweets from each of the 10,000 accounts into these new models to create a linguistic fingerprint of the users, and then plotted the accounts on a graph. This might sound convoluted, and in a sense it is (you can read our 38-page white paper if you’d like), but what this process really does is put Twitter accounts that tend to use similar language close together on a map. The power here is in turning linguistic similarity into something not only measurable, but visible. And language is what Twitter is all about.
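Again as a toy illustration rather than a description of our actual pipeline (the white paper has the details), the sketch below shows the general shape of the approach: embed each account's recent tweets with an off-the-shelf sentence encoder, average the embeddings into a per-account "fingerprint," and project those fingerprints onto a two-dimensional map. The model name and the sample tweets are placeholders.

```python
# Simplified "linguistic fingerprint" sketch: one averaged embedding per
# account, projected to 2D so that accounts using similar language land
# close together on the map. Model choice and data are illustrative.
from sentence_transformers import SentenceTransformer
from sklearn.decomposition import PCA
import numpy as np

# Placeholder: in practice this would hold up to 200 recent tweets per account.
tweets_by_account = {
    "account_a": ["example tweet one", "example tweet two"],
    "account_b": ["another example tweet", "yet another example"],
    "account_c": ["a third account's example tweet"],
}

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed off-the-shelf encoder

accounts, fingerprints = [], []
for account, tweets in tweets_by_account.items():
    vectors = encoder.encode(tweets)                # one embedding per tweet
    fingerprints.append(np.mean(vectors, axis=0))   # average into an account-level fingerprint
    accounts.append(account)

# Project the high-dimensional fingerprints onto a 2D "map".
coords = PCA(n_components=2).fit_transform(np.array(fingerprints))
for account, (x, y) in zip(accounts, coords):
    print(f"{account}: ({x:.2f}, {y:.2f})")
```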

From there, we compiled a roster of randomly selected accounts from across our new map and delved into them to try to draw out what set the different clusters apart. What struck us immediately was how clearly each cluster seemed to relate to geography—to the purported national identities and languages that the accounts used.

There was a dense knot of accounts identified as Indian that largely retweeted a stream of messaging in English and Hindi supporting Prime Minister Narendra Modi and his Hindu nationalist Bharatiya Janata Party. Another group used Urdu, Sindhi, and Farsi, with users primarily identifying as Iranian or Pakistani. One node was ostensibly from South Africa but included Ghanaian, Nigerian, and Kenyan users talking about public health, fuel shortages in Nigeria, and former South African President Jacob Zuma. A final cluster was the only one not characterized by language or geography. Accounts in this grouping sent the fewest tweets and had the fewest followers; many had been created either on the day of Russia’s invasion or on March 2, the day of a key United Nations vote condemning the invasion—and when I saw those hashtags suddenly trend.

Although the clusters were linguistically distinct from one another, they had patterns in common. All saw a small uptick in messages on the day of the invasion, and then a very sharp increase on March 2 and 3. And all but one (the South African cluster) were doing the same thing: frenetic amplification. Seventy to 80 percent of the accounts' activity was retweeting others, and on the day of the UN vote, many published a parade of pro-invasion memes.

The memes pushed vivid anti-colonial and anti-Western imagery mixed with Putin strongman motifs and solidarity among the BRICS: Brazil, Russia, India, China, and South Africa. Some applauded Russia’s great friendship toward India or Putin’s apparent role in African liberation movements, but many were really about the West, its own seeming hypocrisy, and the alleged aggression of NATO expansion.

This research casts a small and admittedly imperfect light on what might be happening. We focused on Twitter, and influence operations can use a number of parallel channels. These are our impressions as researchers; others looking at the same data might have found different things. I can point to suspicious patterns, but little is definite in this world, and nothing in our analysis lets me pin this unusual social-media activity directly on the Russian state.

Still, the early data are revealing, the activity suspicious. These accounts came alive for UN votes on the invasion, propelled in part, I suspect, by one or more “paid to engage” networks—groups of accounts that will shift their Twitter usage en masse to deliver retweets for a fee. But real people (we are unsure precisely how many) are also helping the hashtags trend. That interplay between organic and inauthentic activity is the most important subtlety of this research. It also gives us our most important conclusion.

Insofar as this was a coordinated campaign, we saw little attempt to address (or impersonate) Western social-media users. To the extent that we saw real people using the hashtag, very few were from the West.

Look beyond the West, and the information war feels a lot different. “We’ve seen many suspicious TikTok accounts parroting Russian ideology or valorizing Russian aggression in Southeast Asian languages such as Malay and Indonesian,” Ng Wei Kai, a journalist for Singapore’s The Straits Times newspaper, told me. “Comments sections on news accounts [are] flooded with pro-Russian views. Much of the content made in non-English languages also takes a mocking or warning tone about Singapore’s decision [to sanction Russia], as if to say, Don’t be like them; there will be consequences for the sanctions.” In India, as the journalist Tushar Dhara notes, the level of genuine sympathy for Russia can be striking. “There is genuine warmth for Russia and the Soviet Union, for its diplomatic and military support to India going back decades,” Dhara told me.

Zelensky’s great success in the information war has undeniably been to couch the conflict as one of Russia against not just Ukraine, but the West. That has helped him win an array of fans across Europe and North America, among both politicians and ordinary voters. But that success, the very reason that we in the West think Ukraine is winning the information war, is also the very reason it isn’t.

Disinformation campaigns are far more effective when they have a powerful truth at their core and use that truth to guide discussion. The blunt reality is that in many parts of the world, antipathy for the West is deep and sympathy for Russia is real. It is in these contexts where I’d expect influence operations to be targeted—and to work.

A mistake we in the West too often make is to suppose that our information spaces—English, French, and German Twitter and Facebook, for example—are far more universal than they are. Remainers the day before Britain’s Brexit vote, and Democrats the day before Donald Trump’s 2016 election victory, didn’t simply feel as though they were beating the opposition; they didn’t think there was an opposition.

We’re in danger of making that same mistake over Russia’s invasion of Ukraine. The fact that we don’t see information warfare doesn’t mean it isn’t happening, and it doesn’t mean we’ve won. It might just mean that ours is not the battleground on which it’s being fought.




