The metaverse is still effectively a concept: a vision of a single virtual universe composed of interoperable and international virtual and mixed reality worlds, where people interact and experience activities they might currently do in the physical world. These experiences won’t be limited to online gaming, but could take the form of anything from a shopping trip with a friend living on another continent to a meeting with a client in your virtual office.
There are already numerous entry points to the metaverse, the largest of which by userbase include Decentraland, Roblox, The Sandbox and the Hyperverse. Facebook and Instagram’s parent company Meta has launched its own version, Horizon Worlds, which has exploded in popularity since its launch. However, none of these platforms are interoperable, so each is tackling the challenges of protecting users from online harm, as well as from cyber-enabled and cyber-dependent crime, independently.
With minimal global regulation and divergent ideas between platforms and governments on what constitutes acceptable use of social media, these challenges will only multiply as platforms seek to become interoperable. In the context of the past few months, which have seen TikTok take centre stage in US-China politics and Meta banned in Russia under its “extremism law”, any form of multilateral agreement on acceptable use looks like a distant prospect, and its absence could limit the expansion of a universal metaverse for years to come.
Moderation challenges for the metaverse
One of the key safety issues in the metaverse is that of content moderation. Meta CTO Andrew Bosworth has acknowledged that harassment in virtual reality is an “existential threat” to the company, and that content moderation in the metaverse “at any meaningful scale” will be “practically impossible”.
Disinformation and extremism in the metaverse
Similarly, the task of tracking and removing disinformation will be more complex in the metaverse, with several investigations already revealing issues on today’s fledgling platforms. To test content moderation capabilities in Meta’s Horizon Worlds, Buzzfeed investigators created a world on the platform called the Qniverse (a naming convention typical of QAnon conspiracy content creators), and deliberately filled it with banned disinformation slogans and content. When they reported the content, moderators found no issues. The world was taken down only after Buzzfeed contacted Meta’s PR team.
Information scientist Rand Waltzman has warned that targeted posts such as those designed to influence the outcomes of electoral campaigns could be “supercharged” in the metaverse. For example, a speaker could use deepfake technology to subtly adopt an audience member’s features, subliminally increasing the speaker’s perceived trustworthiness to that audience.
Perhaps even more alarmingly, violent extremists could use the metaverse to coordinate, plan and execute acts of terrorism, as well as for recruitment. EU Observer has reported that the metaverse holds significant potential to exert influence in new ways, through threat, coercion and fear.
Child safety and sexual harassment in the metaverse
Moderation failures in the virtual world, particularly with regard to child safety and sexual harassment, have damaging consequences for the real lives of those affected. Both issues have been the subject of measures taken by the new metaverse platforms to protect their users, including child safety and personal boundary features in Meta’s VR realm.
Despite such efforts, research by the Centre for Countering Digital Hate in 2021 found that extreme sexual content is common in the metaverse, with several recorded instances of verbal sexual violence threats found in a short monitoring period, including against children. Although Meta’s personal boundary feature may prevent unwanted “physical” interactions in the virtual world, it will do little to reduce verbal abuse.
The potential for financial crime
According to research by blockchain analysis company Elliptic, metaverse crime could include money laundering, scams, sanctions evasion and terrorist financing. Volumes of economic activity within the metaverse are already significant: sales of cryptoassets across the platforms Decentraland, The Sandbox, Cryptovoxels and Somnium Space surpassed USD 500 million in 2021, and Citibank has predicted that the metaverse will be worth up to USD 13 trillion by 2030.
Such growth presents an opportunity for criminals to launder illicit funds. Money launderers could already be exploiting opportunities provided by large sales in digital real estate and high-value wearables, as minimal know-your-customer (KYC) checks are required. In time, sanctioned actors and terrorists could look to move fundraising activities to the metaverse, exploiting the largely unregulated space.
Sophisticated crypto scams are also likely to proliferate across these platforms. Fake metaverses could be announced as a subterfuge to access victims’ login details, while avatars made to impersonate colleagues or close friends could manipulate victims into sharing sensitive information.
To counter these threats, financial regulators are formulating plans to bring cryptocurrencies under regulatory control. While there will be obstacles to overcome in reconciling approaches between different jurisdictions, anti-money-laundering (AML) controls could be transposed to the metaverse – for example, by bringing it within existing regulatory structures and enforcing compliance screening in secondary marketplaces and on crypto exchanges.
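At its simplest, the compliance screening described above means checking the parties to a transaction against a sanctions or watch list before the trade settles. The sketch below is purely illustrative: the address list and function names are invented for this example, and real screening relies on licensed data providers and far richer analysis (wallet clustering, exposure tracing) rather than a bare lookup.

```python
# Hypothetical marketplace-side sanctions screen (illustrative only).
# SANCTIONED_ADDRESSES and check_transfer are invented names; the entry
# below is a placeholder, not a real wallet address.

SANCTIONED_ADDRESSES = {
    "0xbadactor",  # placeholder for an address on a sanctions list
}

def check_transfer(sender: str, recipient: str, amount: float) -> bool:
    """Return True if the transfer may proceed, False if it must be blocked."""
    if sender.lower() in SANCTIONED_ADDRESSES:
        return False  # sender matches the screening list
    if recipient.lower() in SANCTIONED_ADDRESSES:
        return False  # recipient matches the screening list
    return True
```

Even this trivial check illustrates the jurisdictional problem the article raises: which list applies, and who maintains it, differs between regulators.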
The complexities of mitigating threats posed by the metaverse
The threats posed by the metaverse are complex, and unsurprisingly there is no simple solution to reducing them. Removing anonymity from metaverse platforms could stop criminal or abusive actors from hiding behind anonymous avatars. However, the identity verification checks this would involve require users to trust the platforms with sensitive personal data, which is unlikely to be popular in the wake of the many data processing scandals that have plagued social media companies in the past five years.
Instead, monitoring behaviour on the metaverse may be the key to protecting its users. But without monitoring all interactions in real time, it will be difficult to swiftly identify and put a stop to any illicit or harmful activity. Such an extensive monitoring approach has practical, technical and ethical challenges. Finding the balance between privacy, safety, autonomy and moderation will be crucial.
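To see why real-time monitoring at scale is so hard, consider the most basic possible approach: a keyword filter run over a stream of chat messages. The sketch below is a deliberately minimal illustration with invented names and placeholder terms, not a real moderation system.

```python
# Deliberately minimal real-time text moderation sketch (illustrative only).
# BLOCKLIST and flag_message are invented names; the terms are placeholders.

BLOCKLIST = {"scamword", "slurword"}  # placeholder blocked terms

def flag_message(message: str) -> bool:
    """Flag a chat message if any blocklisted token appears verbatim."""
    tokens = message.lower().split()
    return any(token.strip(".,!?") in BLOCKLIST for token in tokens)
```

Trivial evasions (misspellings, coded language) defeat a filter like this, and it cannot see voice chat or avatar gestures at all, which is precisely why moderating immersive, real-time interactions is a far harder problem than moderating text posts.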
Mainstream social media platforms have struggled with content moderation for years. It is almost impossible for human moderators to review the millions of posts flagged to them daily, and the psychological effects on moderators of repeatedly witnessing violent and abusive content are well-documented. Attempts to replace human moderators with AI are unlikely to present an acceptable solution in the next few years: only last month, Google blocked the launch of Donald Trump’s Truth Social from its Play Store over concerns surrounding the efficacy of its AI-facilitated content moderation.
Science fiction writer Neal Stephenson, who originally came up with the concept of the metaverse in a dystopian novel, recently told the FT that whether the metaverse is a good or bad thing will depend on its development and use. As developers forge ahead, we are entering uncharted territory. There is certainly room for cautious optimism, as policymakers and metaverse creators are taking steps to reconcile their distinct ambitions in the sphere. But there is a long way to go before metaverse users will be truly safe from online harm in the new virtual world.