The metaverse is still effectively a concept: a vision of a single virtual universe composed of interoperable and international virtual and mixed reality worlds, where people interact and experience activities they might currently do in the physical world. These experiences won’t be limited to online gaming, but could take the form of anything from a shopping trip with a friend living on another continent to a meeting with a client in your virtual office.
There are already numerous entry points to the metaverse, the largest (by user base) of which include Decentraland, Roblox, The Sandbox and the Hyperverse. Facebook and Instagram’s parent company Meta has launched its own version, Horizon Worlds, which has exploded in popularity since its launch. However, none of these platforms are interoperable, so each is tackling the challenges of protecting users from online harm, as well as cyber-enabled and cyber-dependent crime, independently.
With minimal global regulation and divergent ideas between platforms and governments on what constitutes acceptable use of social media, these challenges will only multiply as platforms seek to become interoperable. In the context of the past few months, which have seen TikTok take centre stage in US-China politics and Meta banned in Russia under “extremism law”, reaching any form of multilateral agreement on acceptable use looks like a distant pipe dream that could limit the expansion of a universal metaverse for years to come.
Moderation challenges for the metaverse
One of the key safety issues in the metaverse is that of content moderation. Meta CTO Andrew Bosworth has acknowledged that harassment in virtual reality is an “existential threat” to the company, and that content moderation in the metaverse “at any meaningful scale” will be “practically impossible”.
Investigations into existing metaverse platforms have revealed fertile ground for bad actors to exploit. This is unsurprising, given that social media platforms have long struggled with monitoring and moderating harmful activity. Hate-filled accounts harassing individuals, encouraging doxxing and propagating damaging and false information often remain on social media even following complaints and in instances where they clearly breach terms of use. These problems will only be exacerbated in the more complex metaverse, where moderating harmful activity in virtual spaces and in real time looks set to be a major challenge.
Disinformation and extremism in the metaverse
Similarly, the task of tracking and removing disinformation will be more complex in the metaverse, with several investigations already revealing issues on today’s fledgling platforms. To test content moderation capabilities in Meta’s Horizon Worlds, Buzzfeed investigators created a world on the platform called the Qniverse (a naming convention typical of QAnon conspiracy content creators), and deliberately filled it with banned disinformation slogans and content. When they reported the content, moderators found no issues. The platform was eventually removed only when Buzzfeed contacted Meta’s PR team.
Information scientist Rand Waltzman has warned that targeted posts such as those designed to influence the outcomes of electoral campaigns could be “supercharged” in the metaverse. For example, deepfake technology can be used by a speaker to subtly take on audience members’ features, with the psychological impact of subliminally increasing their perceived trustworthiness to that audience.
Perhaps even more alarmingly, violent extremists could use the metaverse to coordinate, plan and execute acts of terrorism, as well as for recruitment. EU Observer has reported that the metaverse holds significant potential to exert influence in new ways, through threat, coercion and fear.
Child safety and sexual harassment in the metaverse
Moderation failures in the virtual world, particularly with regards to child safety and sexual harassment, have damaging consequences on the real lives of those affected. Both issues have been the subject of measures taken by the new metaverse platforms to protect their users, including child safety and personal boundary features in Meta’s VR realm.
Despite such efforts, research by the Centre for Countering Digital Hate in 2021 found that extreme sexual content is common in the metaverse, with several recorded instances of verbal sexual violence threats found in a short monitoring period, including against children. Although Meta’s personal boundary feature may prevent unwanted “physical” interactions in the virtual world, it will do little to reduce verbal abuse.
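Meta has not published how its personal boundary feature is implemented, but the underlying idea can be sketched as a simple proximity rule. The radius, 2D coordinates and function names below are assumptions made purely for illustration:

```python
import math

# Hypothetical personal-boundary check. Meta has not disclosed its
# implementation, so the 1.2 m radius and the logic below are
# illustrative assumptions, not the platform's actual code.
BOUNDARY_RADIUS_M = 1.2  # assumed minimum avatar separation in metres

def violates_boundary(pos_a, pos_b, radius=BOUNDARY_RADIUS_M):
    """Return True if two avatars (x, y positions in metres) are
    closer than the allowed personal-boundary radius."""
    dx = pos_a[0] - pos_b[0]
    dy = pos_a[1] - pos_b[1]
    return math.hypot(dx, dy) < radius

def clamp_position(mover, other, radius=BOUNDARY_RADIUS_M):
    """Push a moving avatar back to the boundary's edge if it tries
    to enter another avatar's personal space."""
    dx = mover[0] - other[0]
    dy = mover[1] - other[1]
    dist = math.hypot(dx, dy)
    if dist >= radius or dist == 0:
        return mover  # far enough away (or exactly overlapping)
    scale = radius / dist
    return (other[0] + dx * scale, other[1] + dy * scale)
```

As the article notes, a geometric rule of this kind can only stop unwanted “physical” proximity; it has no purchase on verbal abuse, which requires monitoring the content of interactions rather than their positions.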
The potential for financial crime
According to research by blockchain analysis company Elliptic, metaverse crime could include money laundering, scams, sanctions evasion and terrorist financing. Volumes of economic activity within the metaverse are already significant: sales of cryptoassets across the platforms Decentraland, The Sandbox, Cryptovoxels and Somnium Space surpassed USD 500 million in 2021, and Citibank has predicted that the metaverse will be worth up to USD 13 trillion by 2030.
Such growth presents an opportunity for criminals to launder illicit funds. Money launderers could already be exploiting opportunities provided by large sales in digital real estate and high-value wearables, as minimal KYC checks are required. In time, sanctioned actors and terrorists could look to move fundraising activities to the metaverse, exploiting the largely unregulated space.
Sophisticated crypto scams are also likely to proliferate across these platforms. Fake metaverses could be announced as a subterfuge to access victims’ login details, while avatars made to impersonate colleagues or close friends could manipulate victims into sharing sensitive information.
To counter these threats, financial regulators are formulating plans to bring cryptocurrencies under regulatory control. While there will be obstacles to overcome in reconciling approaches between different jurisdictions, AML controls could be transposed to the metaverse – for example, through adopting it into existing regulatory structures and enforcing compliance screening into secondary marketplaces and crypto exchanges.
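The compliance screening described above can be illustrated with a toy denylist check, of the kind a secondary marketplace might run before settling a trade. The wallet addresses and the `is_trade_allowed` function are invented for this example; real screening relies on regulator-published lists (such as OFAC’s SDN list) and commercial blockchain-analytics providers:

```python
# Illustrative sanctions-screening sketch: block a secondary-market
# trade if either counterparty wallet appears on a denylist. The
# address below is a made-up example, not a real sanctioned wallet.
SANCTIONED_ADDRESSES = {
    "0xdeadbeef00000000000000000000000000000001",
}

def is_trade_allowed(buyer_addr: str, seller_addr: str) -> bool:
    """Return True only if neither wallet address is denylisted.
    Addresses are compared case-insensitively, since hex wallet
    addresses are often displayed in mixed case."""
    parties = {buyer_addr.lower(), seller_addr.lower()}
    denylist = {a.lower() for a in SANCTIONED_ADDRESSES}
    return parties.isdisjoint(denylist)
```

A production system would go well beyond a static list, tracing funds through intermediary wallets, but the basic gate, screen before settlement, is the control the article describes being transposed from traditional AML regimes.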
The complexities of mitigating threats posed by the metaverse
The threats posed by the metaverse are complex, and unsurprisingly there is no simple solution to reducing them. Removing anonymity from metaverse platforms could stop criminal or abusive actors from hiding behind anonymous avatars. However, the identity verification checks required to carry this out require users to trust the platforms with sensitive personal data, which is unlikely to be popular in the wake of the many data processing scandals that have plagued social media companies in the past five years.
Instead, monitoring behaviour on the metaverse may be the key to protecting its users. But without monitoring all interactions in real time, it will be difficult to swiftly identify and put a stop to any illicit or harmful activity. Such an extensive monitoring approach has practical, technical and ethical challenges. Finding the balance between privacy, safety, autonomy and moderation will be crucial.
Mainstream social media platforms have struggled with content moderation for years. It is almost impossible for human moderators to check the millions of posts sent to them daily for review, and the psychological effects on moderators of repeatedly witnessing violent and abusive content are well-documented. Attempts to replace human moderators with AI are unlikely to present an acceptable solution in the next few years: only last month, Google blocked the launch of Donald Trump’s Truth Social from its Play Store over concerns surrounding the efficacy of its AI-facilitated content moderation.
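One common baseline for the automated moderation discussed above is pattern matching against a blocklist, applied to chat messages as they arrive. The sketch below is a deliberately minimal illustration of why such filters struggle at scale, not any platform’s actual system; the patterns are invented:

```python
import re

# Toy real-time moderation filter: flags messages matching a small
# blocklist. The patterns are illustrative; production systems layer
# ML classifiers, user reports and human review on top of rules like
# these, and real-time voice chat adds speech recognition on top.
BLOCKED_PATTERNS = [
    re.compile(r"\bfree\s+crypto\b", re.IGNORECASE),
    re.compile(r"\bbuy\s+followers\b", re.IGNORECASE),
]

def moderate(message: str) -> str:
    """Return 'flagged' if the message matches any blocked pattern,
    otherwise 'allowed'."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(message):
            return "flagged"
    return "allowed"
```

Even this trivial rule set shows the core tension: broad patterns over-flag legitimate speech, narrow ones are easily evaded by misspellings, and neither captures tone, context or live voice, which is why human review remains in the loop despite its well-documented costs.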
Science fiction writer Neal Stephenson, who originally came up with the concept of the metaverse in a dystopian novel, recently told the FT that whether the metaverse is a good or bad thing will depend on its development and use. As developers forge ahead, we are entering uncharted territory. There is certainly room for cautious optimism, as policymakers and metaverse creators are taking steps to reconcile their distinct ambitions in the sphere. But there is a long way to go before metaverse users will be truly safe from online harm in the new virtual world.