AI Missteps Aren’t ‘Hallucinations’—They’re Just Bullshit

Artificial intelligence (AI) is increasingly woven into daily life, from drafting documents to summarizing PDFs. Yet many users of AI chatbots like ChatGPT run into a persistent problem: the generation of inaccurate or fabricated information. This problem is often referred to as “hallucinations,” but that term may be misleading.
As Joe Slater, James Humphries, and Michael Townsen Hicks argue, calling these errors “hallucinations” is problematic. Drawing on the work of the late philosopher Harry Frankfurt, they contend that “bullshit” is the more accurate term. Frankfurt defined bullshit as speech that is indifferent to truth: the speaker simply does not care whether what they say is accurate. AI chatbots like ChatGPT fit this description because they produce text with no genuine understanding of, or concern for, truthfulness.
For instance, a lawyer recently got into trouble when ChatGPT supplied fictitious case citations for a legal brief. This is not an isolated incident. AI chatbots are built on large language models (LLMs) trained on massive datasets to predict patterns in language. They generate responses according to statistical probabilities rather than any understanding of, or regard for, truth. As a result, the text may sound convincing while being factually wrong.
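To see why this kind of generation is indifferent to truth, consider a deliberately simplified sketch. It is not how a real LLM is implemented (real models work over tokens and billions of parameters, and the prompt, vocabulary, and probabilities below are invented for illustration), but it shows the core point: the system samples a statistically plausible continuation, and nothing in the process checks whether that continuation is accurate.

```python
import random

# Toy illustration only: continuations are chosen purely by sampling from
# made-up probabilities. Nothing here verifies facts, so a fluent but false
# answer is always possible if it is statistically plausible.
next_word_probs = {
    "The capital of Australia is": {
        "Canberra": 0.55,   # common continuation, happens to be true
        "Sydney": 0.40,     # also common in text, but false
        "Melbourne": 0.05,  # rarer, also false
    }
}

def continue_text(prompt: str) -> str:
    """Pick the next word by sampling the learned probability distribution."""
    probs = next_word_probs[prompt]
    words = list(probs.keys())
    weights = list(probs.values())
    # The choice reflects frequency in the (pretend) training data,
    # not any judgment about what is actually true.
    return random.choices(words, weights=weights, k=1)[0]

if __name__ == "__main__":
    print(continue_text("The capital of Australia is"))
```

Roughly 45 percent of the time this toy model confidently names the wrong city, not because anything “malfunctioned,” but because truth was never part of the objective.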
The metaphor of “hallucination” fails to capture the essence of this issue. Unlike Macbeth’s hallucination of a floating dagger, which reflects a malfunction of perceptual capacities, AI’s inaccuracies arise because it isn’t attempting to represent reality accurately. Instead, it focuses on producing coherent and humanlike text.
Using precise terminology is crucial for several reasons:
1. Public Understanding: Misleading terms can confuse how people perceive and interact with the technology.
2. Technology Relationship: Inaccurate descriptions can foster false security or anthropomorphization, as seen with overtrust in self-driving cars and claims of sentient chatbots.
3. Accountability: Clear language helps keep responsibility for errors where it belongs, which is crucial in applications like healthcare where AI use is expanding.
Next time you encounter an AI generating dubious information, remember: it isn’t “hallucinating”; it’s just producing bullshit.
