
What do recent AI developments mean for privacy and reputation?

August 2023
By Firas Soualmia

As the hype around generative AI continues, businesses and individuals are seeking to leverage its capabilities in the hope of enhancing their offerings and staying relevant in an ever-changing media landscape. From big tech companies building in-house chatbots to rival ChatGPT, to artists and content creators adopting AI to optimise and streamline their output, the technology is top of mind in the boardroom and a hot topic around the water cooler.

Although in its infancy, generative AI is an exciting technology with the capacity to increase efficiency and enable new solutions across multiple industries. But recent developments have exposed issues around privacy and ethics, as well as concerns about the veracity of the information disseminated by AI models.

How do generative AI systems work?

As outlined recently in our article on what happens when AI gets it wrong, generative AI systems work by collecting, organising and processing huge amounts of data from websites and other online sources. They answer users’ queries using technology that identifies patterns in language to predict which words should come next, much like an autocomplete function.
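To make that autocomplete analogy concrete, the short Python sketch below predicts the next word using simple bigram counts over a toy corpus. It is a deliberately simplified, hypothetical example: real generative AI models rely on neural networks trained on billions of documents, but the core idea of predicting the next word from patterns observed in prior text is the same.

```python
from collections import Counter, defaultdict

# Toy illustration only: a bigram "autocomplete" that predicts the next
# word from counts of which word followed which in a tiny sample corpus.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows which.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def autocomplete(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(autocomplete("sat"))  # -> 'on'
print(autocomplete("the"))  # -> 'cat' (tied with 'mat'; first seen wins)
```

Even this toy model shows why such systems are only as good as their data: the predictions are nothing more than regularities in the text they were fed.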

But because AI does not think like a human or genuinely understand what it is saying, it cannot always distinguish fact from fiction, and it is only as accurate as the data it sources. Because their answers are generated from multiple sources blended together in different ways, different AI tools sometimes provide contrasting and conflicting answers to the same question, and some content generated by AI tools has been found to be inaccurate or misleading.

AI and the data privacy issue

AI chatbots also raise concerns relating to data privacy. Machine learning, the principal technology underpinning AI systems, relies on large amounts of personal data gleaned from various sources to train AI models and improve performance. There are questions about how these datasets are collected, processed, and stored, and concerns about the potential for data breaches or the malicious use of personal and sensitive information. Google has recently been hit with a class action lawsuit for scraping personal information and copyrighted material to train its chatbot Bard and other generative AI systems. The material mentioned in the lawsuit includes TikTok videos, photos scraped from dating websites, and Spotify playlists.

Generative AI can also use personal data to create fake profiles and spread manipulative narratives, as part of misinformation and disinformation campaigns. This poses a reputational threat to corporates and public figures, who, if targeted, can suffer substantial damage resulting from the misuse of AI tools.

AI’s source material and concerns about accuracy

As well as the issue of data privacy, there are further concerns about the accuracy of the source material used by generative AI models. The methods used to train algorithms, including the use of biased sources of information, risk spreading unfiltered, damaging content and perpetuating existing biases such as racial prejudice, stereotypes, and conspiracy theories. For business leaders and individuals, AI’s tendency to draw conclusions and answer search queries based (at least partially) on opinion pieces and dated news articles can produce misleading and potentially damaging results.

AI models use sources ranging from news websites and academic journals to social media platforms and online forums. This expansive mass of material, which may include opinion, conjecture, and misinformation, calls into question the accuracy of the information that AI chatbots present back to users as hard facts. And while big tech is making efforts to clean scraped data before training its AI models, the efficacy of these efforts has been questioned.

A recent investigation into a dataset assembled by Google to train the search engine’s LaMDA AI, as well as Meta’s GPT challenger LLaMA, exposed the biased and malicious sources from which data is sometimes harvested. The Colossal Clean Crawled Corpus dataset, or C4 for short, was compiled using information from over 15 million websites, including the white nationalist website VDARE, the Russian state-sponsored propaganda site RT, the far-right news site Breitbart, and the now-defunct pirated ebooks website Bookzz.
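To give a sense of what such data cleaning can involve, the hypothetical Python sketch below filters scraped pages against a domain blocklist. The domain names, page structure, and helper function here are invented for illustration; real pipelines, such as the one behind C4, also deduplicate text, strip boilerplate, and apply language and quality filters, and as the investigation above shows, problematic sources can still slip through.

```python
# Hypothetical sketch of blocklist-style filtering applied to scraped pages.
# The blocked domains and the `scraped_pages` structure are invented for
# illustration; real cleaning pipelines are considerably more involved.
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"example-propaganda.ru", "example-piracy.net"}  # assumed list

scraped_pages = [
    {"url": "https://example-news.com/story", "text": "A routine news report."},
    {"url": "https://example-propaganda.ru/post", "text": "State-sponsored spin."},
]

def is_allowed(page: dict) -> bool:
    """Keep a page only if its domain is not on the blocklist."""
    return urlparse(page["url"]).netloc not in BLOCKED_DOMAINS

clean_pages = [page for page in scraped_pages if is_allowed(page)]
print([page["url"] for page in clean_pages])  # only the allowed domain survives
```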

How generative AI can strengthen narratives

For businesses, streamlined and large-scale content creation is one of the biggest advantages of generative AI, as chatbots can be used to formulate articles, blogs, or even academic essays that mimic human reasoning. But while this saves businesses time and resources, bot-generated content can also be harnessed for less public-spirited purposes.

Given the ease of access and the lack of oversight of AI tools, businesses and individuals can easily find themselves victims of structured smear campaigns. Binance’s founder Changpeng Zhao has recently been targeted by a ChatGPT-powered smear campaign alleging a link between the crypto tycoon and the Chinese Communist Party (CCP). Upon investigation, it transpired that the information ChatGPT collected came from a fake LinkedIn profile and a non-existent Forbes article.

Altered images and AI

AI’s capability isn’t limited to textual information, and beyond the humorous Balenciaga x Pope collab, AI-generated images have recently been the subject of heated debate. Apart from intellectual property concerns (since AI tools use both open-source and copyrighted material to create content), the technology’s ability to generate hyper-realistic fake images further blurs the line between what is real and what is constructed to mislead.

The recent AI-generated altered images shared by Florida Governor Ron DeSantis’s campaign have stoked the fire of debate on this issue. The doctored pictures depict Donald Trump hugging and kissing his bête noire, Dr Anthony Fauci. While peddling manipulative narratives isn’t a novel concept in politics, the use of AI to strengthen narratives and shape public opinion is.

Minding the gap: mitigating the risks posed by generative AI

AI will continue to affect industries and shape conversations about the role of advanced tech in business, politics, and digital media. But like any other transformative technology, generative AI is not devoid of risk: the source material used to train algorithms and the often-misleading information collected and presented by AI bots can cause reputational damage to business leaders and individuals.

There are steps that business leaders and public figures can take to mitigate these risks. Taking ownership of their online presence builds resilience and better equips them to counter any harmful narratives surfaced by generative AI. A proactive approach means optimising corporate messaging online and filling the gaps with relevant and balanced content. This greatly limits the space in which damaging AI-generated content can thrive, whether it is purposefully created and circulated by malicious actors or simply generated by benign but embryonic AI systems.
