As the hype around generative AI continues, businesses and individuals are seeking to leverage its capabilities in the hope of enhancing their offerings and staying relevant in an ever-changing media landscape. From big tech companies building in-house chatbots to rival ChatGPT, to artists and content creators adopting AI to streamline their output, the technology is top of mind in the boardroom and a hot topic around the water cooler.
Although in its infancy, generative AI is an exciting technology with the capacity to increase efficiency and enable new solutions across multiple industries. But recent developments have exposed issues around privacy and ethics, as well as concerns about the veracity of the information disseminated by AI models.
How do generative AI systems work?
As outlined recently in our article on what happens when AI gets it wrong, generative AI systems work by collecting, organising and processing huge amounts of data from websites and other online sources. They answer users’ queries using technology that identifies patterns in language and predicts which words should come next, in much the same way as an autocomplete function.
But AI does not think like a human or understand what it is saying, so it cannot always distinguish fact from fiction – and it is only as accurate as the data it draws on. Because answers are generated from multiple sources blended together in different ways, different AI tools sometimes give conflicting answers to the same question – and some AI-generated content has been found to be inaccurate or misleading.
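To make the autocomplete analogy concrete, the snippet below is a minimal illustrative sketch, not the workings of any particular chatbot, using the openly available GPT-2 model via the Hugging Face transformers library. Given a prompt, the model simply ranks which tokens are statistically most likely to come next; nothing in the process checks whether the resulting continuation is true.

```python
# A minimal sketch of next-word prediction using the open GPT-2 model from the
# Hugging Face "transformers" library (an assumption made for illustration).
# The model ranks which token is most likely to follow the prompt, much like
# autocomplete, with no mechanism for verifying facts.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The first person to walk on the moon was"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for every possible next token

top_tokens = torch.topk(logits, k=5).indices
print([tokenizer.decode(int(t)) for t in top_tokens])
# Prints the five statistically likeliest continuations: plausible-sounding,
# but chosen by pattern-matching rather than fact-checking.
```

If the training data contains errors or fabrications, the same mechanism will reproduce them just as fluently as it reproduces accurate information.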
AI and the data privacy issue
AI chatbots also raise concerns relating to data privacy. Machine learning – the principal technology feeding AI systems – relies on large amounts of personal data gleaned from various sources to train AI models and improve performance. There are questions about how these datasets are collected, processed, and stored, and concerns about the potential for data breaches or the malicious use of personal and sensitive information. Google has recently been hit with a class action lawsuit for scraping personal information and copyrighted material to train its chatbot Bard and other generative AI systems. The material mentioned in the lawsuit includes TikTok videos, photos scraped from dating websites, and Spotify playlists.
Generative AI can also use personal data to create fake profiles and spread manipulative narratives, as part of misinformation and disinformation campaigns. This poses a reputational threat to corporates and public figures, who, if targeted, can suffer substantial damage resulting from the misuse of AI tools.
AI’s source material and concerns about accuracy
As well as the issue of data privacy, there are further concerns about the accuracy of the source material used by generative AI models. The methods used to train algorithms, including the use of biased sources of information, pose the threat of spreading unfiltered, damaging content and perpetuating existing biases such as racial prejudice, stereotypes, and conspiracy theories. For business leaders and individuals, AI’s predisposition to draw conclusions and answer search queries based (at least partially) on opinion pieces and dated news articles can be misleading and potentially damaging.
AI models use sources ranging from news websites and academic journals to social media platforms and online forums. This expansive mass of material, which may include opinion, conjecture, and misinformation, calls into question the accuracy of the information that AI chatbots present back to users as hard facts. And while big tech is making efforts to clean scraped data before training its AI models, the efficacy of these efforts has been questioned.
A recent investigation into a dataset assembled by Google to train the search engine’s LaMDA AI, as well as Meta’s GPT challenger LLaMA, exposed the biased and malicious sources from which data is sometimes harvested. The Colossal Clean Crawled Corpus, or C4 for short, was compiled from more than 15 million websites, including the white nationalist website VDARE, the Russian state-sponsored propaganda site RT, the far-right news site Breitbart, and the now-defunct pirated ebooks website Bookzz.
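As an illustration of how this kind of provenance can be audited, the sketch below (a hypothetical example, not the investigation cited above) streams a small sample of the public allenai/c4 copy of the C4 dataset from Hugging Face and tallies the domains the scraped text was taken from.

```python
# An illustrative sketch of auditing where a web-scale training corpus comes
# from, using the public "allenai/c4" dataset on Hugging Face (assumed to be
# available via the "datasets" library). Streaming avoids downloading the
# full multi-terabyte corpus.
from collections import Counter
from urllib.parse import urlparse

from datasets import load_dataset

stream = load_dataset("allenai/c4", "en", split="train", streaming=True)

domain_counts = Counter()
for i, record in enumerate(stream):
    domain_counts[urlparse(record["url"]).netloc] += 1
    if i >= 10_000:  # a small sample is enough for illustration
        break

# The most common source domains in this sample of the training data.
print(domain_counts.most_common(20))
```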
How generative AI can strengthen narratives
For businesses, streamlined and large-scale content creation is one of the biggest advantages of generative AI, as chatbots can be used to formulate articles, blogs, or even academic essays that mimic human reasoning. But while this saves businesses time and resources, bot-generated content can also be harnessed for less public-spirited purposes.
Given the ease of access and the lack of oversight of AI tools, businesses and individuals can easily find themselves victims of structured smear campaigns. Binance’s founder Changpeng Zhao has recently been targeted by a ChatGPT-powered smear campaign alleging a link between the crypto tycoon and the Chinese Communist Party (CCP). Upon investigation, it transpired that the information ChatGPT collected came from a fake LinkedIn profile and a non-existent Forbes article.
Altered images and AI
AI’s capability isn’t limited to textual information, and beyond the humorous Balenciaga x Pope collab, AI-generated images have recently been the subject of hot debate. Apart from intellectual property concerns (since AI tools use both open-source and copyrighted material to create content), the technology’s capability to generate hyper-realistic fake images further blurs the line between what is real and what is constructed to mislead.
The recent AI-generated altered images shared by Florida Governor Ron DeSantis’s campaign have stoked the fire of debate on this issue. The doctored pictures depict Donald Trump hugging and kissing his bête noire, Dr Anthony Fauci. While peddling manipulative narratives isn’t a novel concept in politics, the use of AI to strengthen narratives and shape public opinion is.
Minding the gap: mitigating the risks posed by generative AI
AI will continue to affect industries and shape conversations about the role of advanced tech in business, politics, and digital media. But like any other transformative technology, generative AI is not devoid of risk: the source material used to train algorithms and the often-misleading information collected and presented by AI bots can cause reputational damage to business leaders and individuals.
There are steps that business leaders and public figures can take to mitigate these risks. Taking ownership of their online presence builds resilience and puts them in a stronger position to combat any harmful narratives surfaced by generative AI. A proactive approach means optimising corporate messaging online and filling the gaps with relevant, balanced content, greatly limiting the space in which damaging AI-generated content can thrive – whether it is purposefully created and circulated by malicious actors, or simply generated by benign but embryonic AI systems.