Ahead of the recent Super Bowl, online discourse was focused not only on what would take place on the field, but also on the advertisements that often act as markers of where the Western zeitgeist is headed. One campaign that sparked widespread debate came from Anthropic, the AI company behind the Claude chatbot. Its ads presented exaggerated scenarios designed to highlight how advertising could reshape the way people use chatbots. In one advert, a young man exercising and hoping for a six-pack is told by a muscular older figure – meant to represent a chatbot – that “confidence isn’t just built in the gym” and advised to buy insoles that “help short kings stand tall”. Each advert ends with the same tagline: “Ads are coming to AI. But not to Claude.”
The campaign arrived just weeks after OpenAI, the maker of ChatGPT, announced plans to introduce adverts for users on its free service, alongside a new subscription tier called ChatGPT Go. It also followed the launch of ChatGPT Health for US audiences, a feature that allows users to analyse medical records in order to receive more tailored responses.
OpenAI CEO Sam Altman found humour in Anthropic’s campaign but described it as “dishonest” and “deceptive”, clarifying that OpenAI would never insert ads directly into chatbot responses in the way portrayed. In a post on X, he argued that advertising is necessary to provide free AI access at scale, suggesting that Anthropic focuses on selling premium products to wealthier users while OpenAI is trying to reach a broader audience. The company, he said, feels a responsibility to make AI accessible to “billions of people who can’t pay for subscriptions”.
A week on from the Super Bowl, daily active users of Anthropic’s Claude chatbot were up 11%, while visits to its website had jumped 6.5%. The spike in engagement pushed Claude into the top 10 free apps on the Apple App Store, ahead of rival apps from Meta, Google and, notably, OpenAI.
Beyond the public disagreement sits a broader question about how AI is funded, who it is built for, and what compromises users are being asked to accept. When the product is free, the age-old question resurfaces: does the user ultimately become the commodity?
Dr AI will see you now
Many of us are familiar with how difficult it can be to get a doctor’s appointment; the ordeal of securing one can sometimes seem more painful than the health issue that prompted it.
In the UK, the NHS has begun incorporating AI in targeted areas, from radiology tools that help flag early signs of cancer to systems that transcribe and summarise clinical consultations to ease the administrative load on doctors. Private providers such as Bupa and Nuffield Health have moved more quickly, embedding AI-driven triage and symptom checkers into their offerings. While the NHS has largely positioned AI as a support for clinicians rather than a substitute for them, consumer-facing platforms have raced ahead with far less oversight.
Therefore, it is unsurprising that some of us are turning to AI with questions about our health. OpenAI says more than 230 million people seek health and wellness advice from ChatGPT each week, presenting the chatbot as an ally that helps users navigate insurance, manage paperwork, and advocate for themselves.
In reality, this usually means sharing highly sensitive personal information, including diagnoses, medications, test results, and medical histories. While these exchanges can resemble a conversation with a physician, they take place under very different conditions. Crucially, tech companies are not bound by the same legal or ethical responsibilities as healthcare providers – there is no equivalent of doctor–patient confidentiality, no duty of care, and no regulator overseeing how advice is framed or how vulnerable users are treated.
When benchmarks meet real people
Much of the optimism around AI in healthcare comes from benchmark tests, where large language models (LLMs) appear to perform well, recalling symptoms, suggesting diagnoses, and explaining treatments with confidence.
However, a study from the University of Oxford shows how quickly that promise breaks down in real-world use. Outside of controlled settings, people do not describe their health in neat prompts, and medical evaluations rarely resemble multiple-choice exams. Once these systems meet real users, performance drops sharply, not because the models lack medical knowledge, but because people leave out key details and the models misinterpret what they are given.
Tested in isolation, models correctly identified medical conditions nearly 95% of the time. Paired with human users, that figure fell below 35%, and performance was often worse than when participants used traditional online search.
Communication sits at the centre of that decline: users struggled to explain what they were experiencing, and models misread vague prompts or offered inconsistent guidance. In several cases, the correct diagnosis appeared somewhere in the conversation, but the user did not recognise it in their final assessment. In one instance, two people describing similar symptoms of subarachnoid haemorrhage, a rare type of stroke, received opposite advice: one was told to lie down, the other urged to seek emergency care.
A study published in Nature Medicine echoes those concerns. Researchers at the Icahn School of Medicine at Mount Sinai assessed ChatGPT Health using 60 realistic patient scenarios and found that, in more than half of cases requiring immediate emergency care, the system advised users to stay home or book routine appointments instead. The tool performed well in obvious emergencies but was less reliable when symptoms were less clearly defined. In simulations involving suicidal ideation, crisis safeguards appeared inconsistently and, in some cases, disappeared when unrelated clinical details were introduced.
Benchmarks measure recall and pattern recognition, but they do not capture uncertainty, anxiety, or the way people actually talk about their health.
Mental health: high uptake, uneven outcomes
These limitations become even more pronounced in mental health, where chatbots have emerged as one of the most common ways people seek support, particularly as professional help becomes harder to access. Polling by Mental Health UK suggests that more than one in three UK adults now use AI tools to support their mental health or wellbeing, many of them drawn, amid long waiting lists and overstretched services, to a system that feels immediate, personalised, anonymous, and accessible.
Many users report positive experiences, with some finding practical coping mechanisms and reassurance during difficult moments.
At the same time, the same research points to more troubling outcomes, with a minority reporting worsened psychosis symptoms, exposure to harmful information about suicide, or being triggered to think about self-harm.
Mental Health UK called for independent safety testing, clearer guidance for users, and greater transparency around data handling, stressing that AI should complement professional care rather than replace it. Taken together, these mixed mental health outcomes expose a broader tension between accessibility and safety.
The privacy paradox
Alongside questions of accuracy and safe use sits a quieter but still important concern about what happens to the data people share.
OpenAI says health-related uploads are stored separately and not used to train its models. Legal experts note that these assurances rest on company policy rather than binding regulation. Unlike hospitals or doctors’ surgeries, most AI platforms are not formally subject to healthcare privacy enforcement.
Products are often marketed as ‘HIPAA ready’, a nod to the US standard for handling medical data. In practice, this is a marketing claim rather than evidence of clinical oversight: the label is self-applied, with no official certification body behind it, and most consumer-facing AI platforms fall outside HIPAA’s enforcement scope altogether. At the same time, companies sidestep medical device regulation by insisting their tools are not intended for diagnosis or treatment, even as people use them to interpret test results and make health decisions.
When healthcare meets advertising
Once advertising arrives, the debate shifts again.
OpenAI has begun testing sponsored links inside chatbot conversations as it searches for sustainable revenue models, amid growing concern that the AI bubble may be close to bursting. Even if ads remain technically separate from responses, monetisation brings new priorities around engagement, retention, and conversion.
In health-related contexts, this shift carries real consequences, as moments of vulnerability can quietly be reframed as promotional opportunities. Consumer surveys suggest many users are open to ad-supported services in exchange for free access, but most also worry about their data being used to train AI systems and overwhelmingly want greater transparency.
A system still finding its footing
Today’s LLMs appear impressively knowledgeable on paper, but they still struggle with the messy, unpredictable reality of real conversations. This makes them hard to rely on safely at scale.
The Super Bowl ads frame this as a clash of business models, but it reflects a sector still negotiating the boundaries between access and accuracy, convenience and privacy, innovation and responsibility, even as the old instinct to move fast and break things endures.
AI already plays a meaningful role in helping many people manage their health, often filling gaps left by overstretched systems. For some, it offers reassurance when no professional is available, but adoption has raced ahead of regulation, and monetisation is arriving before clear guardrails are in place.
If AI is to become a genuine support system rather than a high-risk substitute for care, evaluation will need to move beyond benchmark tests and towards real-world user studies. Privacy protections will need to be enforceable, not optional. And commercial incentives will require closer scrutiny, especially in sensitive domains such as health.
The Super Bowl ads were funny, but the stakes are not.