Ludas Kanapienis, CEO and co-founder of Ondato
Controversy surrounded the inclusion of an Online Safety Bill in May's Queen's Speech — the traditional introduction to the UK Parliament's legislative program, and probably the Government's final order of business before it must seek re-election in two years' time.
The cause of contention is the Bill’s apparent failure to adequately tackle one of the principal issues of online safety — the ability of users to remain anonymous. As long as this remains possible, argue the Bill’s critics, there is little chance of cutting the chronic rise in online abuse and fraud.
There is no longer any technological reason for further delay in tackling this crisis and, in my opinion, any legal and moral arguments against enforcement are growing increasingly untenable.
The dark side of social media
Social media is deeply entrenched in our lives. It is now used by 4.62 billion people around the world. That’s nearly 60 percent of all the people on the planet. The average time spent using social media is just under 2.5 hours per day, with a total of 12.5 trillion hours spent online annually.
But for all the attraction and undoubted benefits of social media, there is a dark side too. The ability to remain anonymous or to take on a false identity opens up opportunities for hate crime, fraud, and fake news.
Facebook puts the prevalence of hate speech on its platform at 0.11 percent, meaning that roughly one in every 1,000 views of content on the platform involves hateful material. How that hate speech trickles into our real world is a pressing concern, in the light of tragedies such as the 2019 Christchurch mosque shootings in New Zealand, the 2021 spa shootings in Atlanta, USA, the Plymouth shooting in the UK in August 2021, and many other incidents where perpetrators had a history of spreading hate online against the identities of the groups they later attacked in real life.
This correlation between online hate speech and physical crimes is more than conjecture. A 2019 study by Cardiff University’s HateLab found that an increase in hate speech on social media platforms can lead to an increase in the number of physical-world crimes against minorities. The study compared London crime data with Twitter data and found that the number of crimes aggravated by race and religion in a location rose when the number of “hate tweets” made from that location increased.
This research backs up a similar 2018 study in Germany, which found that negative online comments about refugees may have increased the rate of hate crimes. A further study, by New York University (NYU), focused on discriminatory tweets related to race, national origin, and ethnicity, and found similar results.
Fraud and scams are other negative consequences of how we currently choose to allow anonymity on social media. In 2020, according to Action Fraud, romance fraud in the UK alone rose by 20 percent and led to more than £68 million (US$83.8 million) being paid to criminals. Typically starting on Facebook or Instagram, these scams begin with a seemingly harmless friend request from a stranger, followed by pleasant conversation, and, eventually, a request for money. UK victims lost an average of £7,850 each ($9,674). In the U.S., the latest data from the Federal Trade Commission (FTC) shows more than 95,000 people losing money to social media-related fraud in 2021, with losses up more than 18-fold over the last four years.
Fake news is an ongoing concern too. Following Russia’s invasion of Ukraine, a propaganda war also emerged on social media. Several social media platforms, including Facebook, Twitter, and YouTube, announced the removal of fraudulent accounts propagating misinformation. Fabricated or hacked accounts were allegedly posting anti-Ukraine misinformation, with the strong implication they were managed by centralized sources linked to Russia and Belarus.
Calls for an end to anonymity
Faced with these harms, there are growing calls for change. In the UK, the main football organizations have written to Twitter and Facebook calling for an ‘improved verification process’ for all users as a way to combat racist abuse against players and officials. And writing in the influential magazine The Atlantic, the US social psychologist Jonathan Haidt has made similar calls for identity verification as a way to improve the public political sphere.
Politicians, who are often the targets of online hate, are speaking out too. In the UK, the Black MP Diane Abbott, who has experienced extensive online abuse, has argued that ending anonymity is necessary to catch those responsible.
Governments are starting to take action
With politicians themselves experiencing the pain of social media hate, governments are starting to propose legislation. The Australian federal government plans to crack down on “bots and bigots and trolls” by introducing legislation to parliament that will require social media companies to collect the details of all users. Courts will be allowed to force companies to hand over the identities of users to aid defamation cases, bringing order to what the Prime Minister has described as the online “Wild West.”
Australia is not the only country considering taking action. French Senator Alain Cadec has introduced legislation to create a Digital Identity Control Authority. If enacted, the legislation would enforce the transfer of social media users’ IDs to this new authority, which would then send the social network “a non-nominative identifier,” attesting to the user’s identity but not revealing it in order to protect their privacy. The new authority could later reveal Internet users’ identities at the request of a responsible court, and only to punish a statement that violates the law.
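To make the French proposal concrete, here is a minimal sketch of how such a scheme could work in principle. This is purely illustrative — the class and method names are hypothetical, not drawn from any published specification: the authority verifies a user's real identity, hands the platform only an opaque random token, and can reverse the mapping solely when the legal test is met.

```python
import secrets


class DigitalIdentityAuthority:
    """Hypothetical sketch of a 'non-nominative identifier' scheme.

    The authority alone holds the link between the verified real-world
    identity and the opaque token the social network sees.
    """

    def __init__(self):
        # Mapping held only by the authority, never by the platform.
        self._token_to_identity = {}

    def issue_token(self, verified_identity: str) -> str:
        # A random token attests that a verified person stands behind
        # the account, while revealing nothing about who they are.
        token = secrets.token_hex(16)
        self._token_to_identity[token] = verified_identity
        return token

    def reveal(self, token: str, court_order: bool) -> str:
        # De-anonymisation is gated on a legal decision, not on a
        # request from the platform or another user.
        if not court_order:
            raise PermissionError("identity may only be revealed under a court order")
        return self._token_to_identity[token]
```

In this model the platform stores the token against the account; abuse reports can be escalated to the authority, but the identity behind the token stays masked unless a court orders its release.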
At the European Union level, the European Digital Services Act (DSA) is well on its way to adoption. It will apply to social networks with the potential to reach more than 10 percent of the 450 million consumers in Europe. The DSA builds on the e-Commerce Directive (ED) to address new challenges that have evolved since the ED’s adoption 20 years ago. New obligations proposed in the DSA include more detailed procedures aimed at effectively tackling and removing illegal content online, and a know-your-customer (KYC) obligation, with fines of up to 6 percent of global annual turnover levied for breaches.
The UK Government’s proposals seem surprisingly timid
The UK Government has very recently published details of its Online Safety Bill. This includes a clear rejection of calls for a blanket end to online anonymity on the basis that the ability to hide identity underpins people’s fundamental right to express themselves and access information online in a liberal democracy: “The government recognises concerns linked to anonymity online… However, restricting all users’ right to anonymity, by introducing compulsory user verification for social media, could disproportionately impact users who rely on anonymity to protect their identity. These users include young people exploring their gender or sexual identity, whistleblowers, journalists’ sources and victims of abuse.”
These are valid points, but the Government seems to be underplaying the societal and personal damage caused by online trolling and misinformation. It is also ignoring mechanisms being proposed elsewhere — in France, for example — that preserve the benefits of anonymity while protecting individuals and society from abuse.
Instead, the Bill seeks to target harmful online activity. All companies in scope will be required to assess whether children are likely to access their services, and if so, use age assurance or age verification technologies to prevent children from accessing services which pose “the highest risk of harm”.
And it is favoring the opt-in verification model proposed by the MP Siobhan Baillie, a member of the governing Conservative Party, in her Social Media Platforms (Identity Verification) Bill, introduced last year under Parliament’s private members’ bill procedure.
Like her proposed law, the new Bill proposes people are given the chance to verify their accounts, while platforms would be required to offer options to limit or block interaction with unverified users. Those wishing to remain anonymous would be allowed to use a pseudonym for their social media handle and choose whether they want to have their personal details verified or not.
The “Blue Tick” option does not go far enough
Many social media platforms already offer a way for some people to verify their accounts and flag these verified accounts with a symbol. This is the so-called ‘blue tick’ used on both Twitter and Instagram, created in response to fake accounts of notable people and organizations.
It is possible that platforms will choose to use their existing blue tick systems for regular users as a way of complying with the UK legislation, but they may conclude that this is not the right tool and instead run parallel systems.
However, even this does not go far enough, in my opinion. The existence of verified blue tick accounts implies the existence of unverified accounts that are still at liberty to con people. Why would we allow that, when perfectly reasonable ways exist to stop it?
The counter-argument is that there is no proven link between anonymity and abusive content. Twitter itself claims that 99 percent of the accounts it banned for abuse — accounts responsible for more than 1,600 abusive tweets — were not anonymous.
If that data is correct, then the necessary correction is mandatory ID verification plus a stricter legal definition of what is permissible. However, there is no explicit aim in the UK’s Bill to introduce new police powers: “The government is also working with law enforcement to review whether the current powers are sufficient to tackle illegal anonymous abuse online. The outcome of that work will inform the government’s future position in relation to illegal anonymous online abuse.”
In other words, the enforcement powers that are currently failing to stop the rise in online hate are not deemed to be inadequate and will likely form the basis of the new regime.
The only way to solve these issues is to use digital identity verification across the board — with IDs masked unless a legally defined transgression takes place — ensuring that the individual who opens a new account is not impersonating anyone else.
The commercial market is getting the message. Tinder, the most popular dating app, is rolling out voluntary customer authentication, including photo verification and face-to-face video. These capabilities give users confidence that their matches are genuine, helping them avoid unpleasant encounters or even romance scams.
But it is a voluntary program. Including identity verification as an additional, mandatory protection level for all social media users, together with enforceable police powers, would drastically cut criminal offenses on social networks. KYC compliance is a well-understood, mature technology that protects customer confidentiality. It is effective and highly resistant to fraud. It is time it was made a legal requirement.
About the author
Ludas Kanapienis is CEO and co-founder of Ondato.
DISCLAIMER: Biometric Update’s Industry Insights are submitted content. The views expressed in this post are that of the author, and don’t necessarily reflect the views of Biometric Update.
digital identity | fraud prevention | identity verification | KYC | legislation | Ondato | Online Safety Bill | privacy | regulation | social media