Can we create a moral metaverse?


Psychotherapist Nina Jane Patel had been on Facebook’s Horizon Venues for less than a minute when her avatar was mobbed by a group of male avatars. The attackers proceeded to “virtually gang-rape” her character, snapping in-game pictures as mementos. Patel froze in shock before desperately trying to free her virtual self – whom she had styled to resemble her real-life blond hair, freckles and business casual attire.

“Don’t pretend you didn’t love it,” the human voices of the attackers jeered through her headset as she ran away, “go rub yourself off to the photo.”

The metaverse – the blurrily defined term for the next generation of immersive virtual reality technologies – is still in its infancy. But even with crude graphics and sometimes glitchy gameplay, an experience like this can trigger a deeply rooted panic response. “The fidelity is such that it felt very real,” Patel, who is also co-founder of children’s metaverse company Kabuni, tells the Observer. “Physiologically, I responded in that fight or flight or freeze mode.”

Emerging reports depict a metaverse more akin to the lawless chat rooms that dominated the early internet than the moderated and algorithmically pruned digital gardens we mostly occupy today. A recent Channel 4 Dispatches investigation documented metaverses rife with hate speech, sexual harassment, paedophilia, and avatars simulating sex in spaces accessible to children.

Research predating the metaverse hype finds that these experiences are far from uncommon. A 2018 study by virtual reality research agency The Extended Mind found that 36% of men and 49% of women who regularly used VR technologies reported having experienced sexual harassment.

Facebook, which changed its name to Meta last year to signal its investment in this space, publicised its decision to introduce a “personal boundary” feature into its metaverse products shortly after Patel’s experience hit the headlines. This is a virtual social-distance function that users can trigger to keep other avatars at arm’s length, like a forcefield.
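Mechanically, a feature like this can be as simple as a minimum-distance constraint checked on every frame. The sketch below is a toy illustration of the idea, not Meta’s implementation; the class, the function names and the exact radius are all assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class Avatar:
    x: float
    z: float                  # position on the horizontal plane, in metres
    boundary_on: bool = True  # Meta reportedly enables the feature by default

PERSONAL_BOUNDARY_M = 1.2  # roughly the 4ft default Meta described

def enforce_boundary(me: Avatar, other: Avatar) -> None:
    """If `other` intrudes inside the radius, push them back to its edge."""
    if not me.boundary_on:
        return
    dx, dz = other.x - me.x, other.z - me.z
    dist = math.hypot(dx, dz)
    if 0.0 < dist < PERSONAL_BOUNDARY_M:
        scale = PERSONAL_BOUNDARY_M / dist
        other.x = me.x + dx * scale  # reposition the intruder on the boundary
        other.z = me.z + dz * scale
```

Meta’s version reportedly halts an intruding avatar’s forward movement rather than repositioning it, but the constraint being enforced is the same.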

For her Dispatches documentary about the metaverse, Yinka Bokinni posed as a 13-year-old and encountered racial and sexual abuse. Photograph: Channel 4

“We want everyone using our products to have a good experience and easily find the tools that can help in situations like these, so we can investigate and take action,” said Bill Stillwell, product manager, VR integrity at Meta.

The metaverse pitch says that one day we will interact with the internet primarily through a virtual reality headset, where sharply rendered and convincingly 3D environments will blur the boundaries of the physical and virtual worlds. Virtual concerts and fashion shows have already attracted flocks of digital attendees, and brands and celebrities are buying up plots of land in the metaverse, with single sales reaching into the millions of dollars – prompting concerns over a metaverse real estate bubble.

Technology companies are working on ensuring that one day, these worlds feel as real as possible. Facebook announced last November that it was developing a haptic vibrating glove to help mimic the feeling of handling objects; Spanish startup OWO has created a sensor-packed jacket to allow users to feel in-game hugs and gunshots; and Japanese tech company H2L is working on simulating pain in the metaverse, including the sensation of a bird pecking your arm.

Billions of dollars are pouring into the space. Besides Meta, Microsoft, which sells its mixed-reality HoloLens headsets, is working on metaverse-related software, while Apple is developing an augmented reality headset. Video-game companies such as Roblox and Epic Games, and decentralised, blockchain-based metaverses such as Sandbox, Decentraland and Upland are also keen to grab a slice of the future. Citigroup predicts that the metaverse economy could be worth up to $13tn by 2030.

The regular internet is plagued by harassment, hate speech and illegal content – and as early reports make clear, none of this will disappear in the metaverse. “If something is possible to do, someone will do it,” says Lucy Sparrow, a PhD researcher in computing and information systems at the University of Melbourne, who has studied morality in multiplayer video games. “People can really be quite creative in the way that they use, or abuse, technology.”

The metaverse could actually magnify some of these harms. David J Chalmers is professor of philosophy and neural science at New York University and the author of Reality+: Virtual Worlds and the Problems of Philosophy. According to him, “bodily harassment” directed against an avatar is generally experienced as more traumatic than verbal harassment on traditional social media platforms. “That embodied version of social reality makes it much more on a par with physical reality,” he says.

Prof David J Chalmers argues that ‘bodily’ harassment in the metaverse can be more traumatic than verbal abuse on social media. Photograph: TED/YouTube

With this brave new world come emerging ethical, legal and philosophical questions. How should the regulatory environment evolve to deal with the metaverse? Can metaverse platforms rely on the safety protocols of their predecessors, or are entirely new approaches warranted? And will virtual punishments be sufficient to deter bad actors?

Stepping from a social media platform such as Facebook into the metaverse means a shift from moderating content to moderating behaviour. Doing the latter “at any meaningful scale is practically impossible”, admitted Facebook’s chief technology officer Andrew Bosworth in a leaked internal memo last November.

Bosworth’s memo suggested that bad actors kicked out of the metaverse could be blocked across all Facebook-owned platforms, even if they used multiple virtual avatars. But to be truly effective, this approach would rely on accounts being tied to a verified identity when they are set up.
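In software terms, that amounts to keying bans to the person behind the account rather than to any single avatar – which is why it only works if sign-up involves an identity check. A minimal sketch of the logic, with every name hypothetical:

```python
# Toy sketch of identity-keyed banning; not any platform's real system.
banned_identities: set[str] = set()      # verified-ID hashes, not avatar names
avatar_to_identity: dict[str, str] = {}  # filled in at sign-up via an ID check

def register_avatar(avatar_id: str, verified_identity: str) -> None:
    avatar_to_identity[avatar_id] = verified_identity

def ban(avatar_id: str) -> None:
    """Ban the person, so every avatar they control is banned with them."""
    banned_identities.add(avatar_to_identity[avatar_id])

def can_join(avatar_id: str) -> bool:
    # The same lookup would run on every platform the company owns.
    return avatar_to_identity.get(avatar_id) not in banned_identities
```

The obvious trade-off is privacy: the scheme deters ban evasion only to the extent that every account is linked to a real, verifiable person.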

Facebook said last year that it is exploring how to apply AI moderation to the metaverse, but hasn’t built anything yet. Existing social media platforms use automated content moderation to help manage vast numbers of users and material, but it still suffers from false positives – largely because of an inability to understand context – as well as false negatives, failing to catch content that genuinely violates policies.

“AI still isn’t clever enough to intercept real-time audio streams and determine, with accuracy, whether someone is being offensive,” argues Andy Phippen, professor of digital rights at Bournemouth University. “And while there might be some scope for human moderation, monitoring of all real-time online spaces would be impossibly resource-intensive.”
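Phippen’s point is easy to demonstrate. Even if speech-to-text were a solved problem (the transcribe stub below is purely hypothetical), a context-blind filter over the resulting transcript produces exactly the failure modes described above. A deliberately crude sketch:

```python
BLOCKLIST = {"idiot", "trash"}  # toy word list, for illustration only

def transcribe(audio_chunk: bytes) -> str:
    """Stand-in for a real speech-to-text model (hypothetical)."""
    raise NotImplementedError

def flag_utterance(text: str) -> bool:
    # Context-blind keyword matching: the root of both failure modes.
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & BLOCKLIST)

# False positive: friendly banter trips the filter.
assert flag_utterance("you absolute idiot, I love you")
# False negative: abuse containing no blocklisted word sails through.
assert not flag_utterance("go back to where you came from")
```

Real moderation models are far more sophisticated than a word list, but the underlying difficulty – that offence depends on context the model cannot see – is the same.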

There are some examples of when digital-world crime has resulted in real-world punishment. In 2012, the Dutch supreme court ruled on a case involving the theft of a digital amulet and sword in the online multiplayer game Runescape. Two players who robbed another at knifepoint were sentenced to real-world community service, with the judge saying that although the stolen objects had no material value, their worth derived from the time and effort spent obtaining them.

Adjudicating digital transgressions in real-life courts doesn’t exactly seem scalable, but legal experts believe that if the metaverse becomes as important as tech CEOs say it will, we could increasingly see real-world legal frameworks applied to these spaces. Pin Lean Lau, a lecturer in bio-law at Brunel University London, says that although some novel legal challenges may emerge in the metaverse, for example questions about “the avatar’s legal personality, or the ownership of virtual property and whether this might be used as collateral for loans … we may not completely need to reinvent the wheel.”

However, there are those who hope that the metaverse might offer an opportunity to move beyond the reactive enforcement model that dominates the current crop of online social spaces. Sparrow, for one, disapproves of metaverse companies’ current emphasis on individual responsibility, where it’s the victim that must trigger a safety response in the face of an attack. Instead, she asks, “how can we be proactive in creating a community environment that promotes more positive exchanges?”

No one wants to live in a virtual police state, and there’s a growing sense that enforcement should be balanced by promoting prosocial behaviour. Some suggestions put forward by industry body XR Association, which comprises Google, Microsoft, Oculus, Vive and Sony Interactive Entertainment, include rewarding altruism and empathy, and celebrating positive collective behaviour.
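What might “rewarding altruism” look like in practice? One simple mechanism is a peer-endorsement score that unlocks visible perks, so that status flows from positive behaviour rather than from evading enforcement. A hypothetical sketch, not drawn from any XR Association member’s product:

```python
from collections import Counter

kudos: Counter[str] = Counter()  # avatar_id -> endorsements from other players

def give_kudos(from_avatar: str, to_avatar: str) -> None:
    if from_avatar != to_avatar:  # no self-endorsement
        kudos[to_avatar] += 1

def perks(avatar_id: str) -> list[str]:
    """Visible rewards for sustained positive behaviour (tiers invented here)."""
    score = kudos[avatar_id]
    tiers = [(10, "helper badge"), (50, "community host"), (200, "event curator")]
    return [perk for threshold, perk in tiers if score >= threshold]
```

Any such system would, of course, need safeguards against collusion – groups trading kudos is just another behaviour to moderate.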

Co-founder of the gaming research company Quantic Foundry, Nick Yee, has highlighted the example of multiplayer game EverQuest, where players who had died in the game were forced to travel back to the location of their deaths and reclaim lost belongings. Yee argues that this design feature helped to encourage altruistic behaviour, because players had to solicit help from other players in retrieving the items, helping to foster camaraderie and promote positive interactions.

Patel advocates looking beyond enforcement mechanisms when thinking about how to regulate the metaverse. She proposes examining the harmful behaviour of some people in digital environments and getting “curious about what it is that’s making them behave this way”.

The top-down governance model of present-day social media platforms might be shaken up too, if decentralised platforms continue to play a role in the metaverse ecosystem. Such models have been tried before. The online forum platform Reddit, for example, relies partly on community moderators to police discussion groups. An early multiplayer children’s game, the Disney-owned Club Penguin, pioneered a gamified network of “secret agent” informants, who kept a watchful eye on other players.

A 2019 paper by researchers working with Facebook-owned Oculus VR indicates that the company is exploring community-driven moderation initiatives in its VR applications as a means of countering the problems of top-down governance.

Mark Zuckerberg’s avatar (left) hangs out in the metaverse during the conference in which Facebook was rebranded as Meta in October last year. Photograph: Facebook/Reuters

In many ways, the solutions tech companies have come up with to tackle metaverse harms echo the inadequate strategies they’ve employed on the internet – and could be described as a sop to avoid regulation.

However, some of the new laws being enacted to temper social media may well be applied to the metaverse. Government legislation such as the EU’s newly agreed Digital Services Act – which imposes harsh penalties on social media companies if they don’t promptly remove illegal content – and the UK’s still-incubating online safety bill could play a role in the development of safety standards in the metaverse. Facebook’s metaverse ventures are already falling foul of regulators over safety. Earlier this year, the UK’s data watchdog, the Information Commissioner’s Office, sought talks with Facebook about the lack of parental controls on its popular Oculus Quest 2 virtual reality headset.

But there are still unresolved legal questions about how to govern virtual bodies that go beyond the scope of the current web – such as how rules around national jurisdiction apply to a virtual world, and whether an avatar might one day gain the legal status necessary for it to be sued. The highly speculative nature of the space right now means these questions are far from being answered.

“In the near term, I suspect the laws of the metaverse are by and large going to derive from the laws of physical countries,” says Chalmers. But in the long term, “it’s possible that virtual worlds are going to become more like autonomous societies in their own right, with their own principles.”


