
Double-check this response

September 2024
by Barry Smith

As businesses race to push out AI-powered search engines, even the corporate videos demonstrating them in action contain inaccuracies and misinformation. Can unreliability be tolerated in AI search?

OpenAI, the creators of AI chatbot ‘ChatGPT’, announced a new product in July – ‘SearchGPT’. SearchGPT is a prototype search tool designed to give “fast and timely answers with clear and relevant sources” – ostensibly announcing OpenAI’s entry into the $225+ billion revenue per annum search engine market.

At their demonstration of the tool, OpenAI showcased a search for “music festivals in boone north carolina in august”, for which the top result was ‘An Appalachian Summer Festival’, listed as running from July 29th to August 16th. However, as The Atlantic quickly highlighted, those were actually the dates the box office was closed; the final concert had already been held on July 27th. This was not the first time an AI demonstration had been shown to be factually incorrect. Links to sources were provided as part of the results, along with short snippets of the sourced pages, but without clicking through to the source there was no way to recognise that the information presented was in any way incorrect.

A wasted journey and some vacation days would have been the worst outcome in this instance, but what about when AI search is wrong about something more critical? Should we be accepting that searches for medical information or legal research require a further degree of manual verification, or should we be demanding a greater level of self-verification from these new tools?

Increasing scope for errors

No search engine is 100% reliable. Even current models are prone to manipulation by bad actors and erroneous algorithmic interpretations. Historically, however, search engines haven’t been asked to create new information based on the content they have accessed. Generative AI (GenAI) takes answering user queries a step further, attempting to produce a definitive result based on one or more sources. Because it extrapolates from the content it has ingested rather than displaying it verbatim, repackaging it at best or hallucinating at worst, the scope for error has increased dramatically.
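
To see why generating an answer carries more risk than quoting one, consider a deliberately naive sketch in Python. It is a toy illustration, not how SearchGPT or any real engine works: the data, function names and stitching logic are all invented for the example. A classic search returns passages verbatim, while the ‘generative’ step stitches fragments from different sources into one confident sentence, reproducing exactly the kind of date mix-up seen in the Boone example.

```python
# Toy illustration only: invented sources and logic; no real search engine works this way.
SOURCES = {
    "festival-page": "An Appalachian Summer Festival hosts concerts in Boone.",
    "box-office-page": "The box office is closed from July 29 to August 16.",
}

def verbatim_search(query: str) -> list[str]:
    """Classic approach: return matching passages exactly as written."""
    terms = query.lower().split()
    return [text for text in SOURCES.values()
            if any(term in text.lower() for term in terms)]

def naive_generated_answer(query: str) -> str:
    """Naive 'generative' approach: merge fragments from several sources into one claim."""
    hits = verbatim_search(query)
    event = next((t for t in hits if "Festival" in t), "an event")
    dates = next((t for t in hits if "July" in t or "August" in t), "")
    # Stitching the festival name onto the box-office dates produces a fluent,
    # confident and wrong answer; without the sources, the reader cannot tell.
    return f"{event.rstrip('.')}, running from {dates.split('from')[-1].strip()}"

print(verbatim_search("festival boone august"))          # verbatim snippets: each claim keeps its context
print(naive_generated_answer("festival boone august"))   # composed answer: the error is now invisible
```

The verbatim results at least preserve each claim in its original context; the composed answer looks authoritative while silently combining facts that do not belong together.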

This push for AI to act on what it knows has led to a new reliance on the ‘gen’ part of GenAI, though without necessarily the same concern for how developed the ‘intelligence’ part is.

Surpassing human abilities?

GenAI is just one part of the ‘NarrowAI’ field. NarrowAI is the application of AI to a single purpose, replicating (or even surpassing) human ability at that task. The other broad types of AI are Artificial General Intelligence (AGI), capable of performing a wide variety of tasks at least as well as a human, and Artificial Superintelligence (ASI), whose knowledge and capabilities could surpass all of humanity’s best efforts to do the same.

Thankfully, we are some ways short of either of those last two.

Common patterns of data

Generative AI can be applied to a wide range of tasks: summarising content, coding, or even making ‘art’ in the form of music or video. The explosion of new companies creating platforms built on (or even with) the power of GenAI for these and other tasks suggests a technology that is fully developed and ready for mainstream audiences. As demonstrated above, though, the truth is that there is still work to be done.

Even the term ‘Generative AI’ is a bit of a misnomer. The program is not truly generating anything; it is simply rearranging data into the common patterns it has identified. It cannot innovate or create in any profound sense, and the generation of ‘new’ information is mostly a byproduct of regrouping elements in ways a human never would, because a human would know the result is nonsensical.
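
As a loose illustration of ‘rearranging data into common patterns’, the sketch below trains a bigram model on a few invented sentences. Real language models are vastly more sophisticated, but the principle of recombining learned patterns is similar: the output is built entirely from word pairs seen in the training text, yet it can read as a ‘new’ statement, and sometimes a false one.

```python
# Minimal bigram model: recombines word pairs it has already seen.
import random
from collections import defaultdict

corpus = (
    "the festival runs in august . "
    "the box office is closed in august . "
    "the festival is in boone ."
).split()

# Learn which word follows which: the model's "common patterns".
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start: str = "the", max_words: int = 12) -> str:
    word, out = start, [start]
    while word in follows and len(out) < max_words:
        word = random.choice(follows[word])
        out.append(word)
        if word == ".":
            break
    return " ".join(out)

# Possible outputs include "the festival is closed in august ." --
# grammatical, assembled entirely from seen patterns, and factually wrong.
print(generate())
```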

Inaccuracies and disclaimers

Companies seem to know that their AI is unreliable. Google’s Gemini, the multimodal chatbot from Google AI (the branch of Google dedicated to AI), is one such front end for interacting with Google’s models. Here, you can ask any question and get an answer (though sometimes it will decline to reply, depending on what you ask), but below every prompt you input is a small disclaimer: “Gemini may display inaccurate info, including about people, so double-check its responses”.

Further to this, Gemini, ChatGPT, and the image-generation platform Midjourney all provide a quick and ready way to ‘regenerate’ a response to try to get a better result. It seems that doing the same thing and expecting a different result is not insanity in this case. It does, however, highlight that these companies know they are not getting it right every time, or even often enough. Failure seems to be acceptable, and discerning the veracity of what has been provided is pushed back onto the user.
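
The decoding schemes used by these platforms are not public, but a plausible reason the same prompt can produce different answers is stochastic sampling: each ‘regenerate’ draws again from a probability distribution over candidate outputs. The sketch below uses entirely made-up candidates and probabilities purely to show the mechanism.

```python
# Illustration of stochastic sampling; the candidates and weights are invented,
# not taken from any real model.
import random

candidates = ["July 27", "July 29", "August 16"]
weights = [0.5, 0.3, 0.2]

def answer_once() -> str:
    """Each call resamples, so repeated attempts can return different answers."""
    return random.choices(candidates, weights=weights, k=1)[0]

for attempt in range(3):
    print(f"Attempt {attempt + 1}: the final concert is on {answer_once()}")
```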

Critics of AI were quick to point this out and, although early versions of each of these platforms ran ungoverned, companies now display these bold disclaimers to warn users of their inadequacies and potential inaccuracies.

Legislation and bias

Historically, lawmakers have been slow to legislate new technology, but the pervasive nature of GenAI these past few years has led to several new laws in the US and the EU, with the European Parliament’s ‘EU AI Act’ being an early step in regulation.

The act restricts ‘unacceptable risk’ uses (such as biometric identification and cognitive manipulation) and ‘high risk’ uses (such as in product safety and law enforcement); generative AI, however, has not been deemed to fall into either category. This may change with more advanced models, but for the time being the only requirements are that such systems may not be used to generate illegal content and must be transparent about what they create.

However, consider the tragic case of Molly Russell, who took her own life after Pinterest’s and Instagram’s recommendation algorithms displayed content glamourising acts of self-harm, or the analysis by Motoki et al. showing that large language models have an inherent bias towards left-leaning political results (Democrats in the US and Labour in the UK). Both suggest that social and search-based AI can influence the minds and behaviour of individuals; perhaps we should be more concerned about what we’re being exposed to.

The question is, in a world where deepfakes and conspiracy theories can spread unchecked, should we look to AI to help solve the issue or perpetuate it?
