
Reality-checking AI hallucinations: a guide for mortals

Product Alpaca
5 min read · Jan 9, 2024


Image source: myself, wrestling DALL-E

You’ve probably heard of AI hallucinations by now; they’ve been all over the media. In fact, the topic has been big enough that the American Dialect Society included ‘stochastic parrot’ among the notable additions to the vocabulary of 2023:

Stochastic parrot: large language model that can generate plausible synthetic text without having any understanding.

Thing is, there are two types of systems at play: deterministic and probabilistic. Deterministic systems follow a set of explainable algorithms: with the same input, you will always get the same output. A calculator, for example. Probabilistic systems are more fluid, and the output may vary even when the input is the same.
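To make the difference concrete, here is a minimal Python sketch. The function names and the toy word probabilities are mine, purely for illustration: the deterministic function returns the same result every time, while the probabilistic one samples its output, so repeated calls can differ.

```python
import random

def deterministic_add(a: float, b: float) -> float:
    """Deterministic: the same inputs always produce the same output."""
    return a + b

def probabilistic_next_word(weights: dict[str, float]) -> str:
    """Probabilistic: the output is sampled, so it can vary run to run."""
    words = list(weights)
    return random.choices(words, weights=[weights[w] for w in words], k=1)[0]

print(deterministic_add(2, 3))  # always 5
print(probabilistic_next_word({"cat": 0.6, "dog": 0.3, "apple": 0.1}))  # varies
```

Run the last line a few times and you will usually, but not always, get “cat” — that variability is the whole point.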

However advanced and human-like they seem, LLMs simply mimic and replicate language patterns well enough to be accepted as legitimate by users. Like a parrot, a particularly stochastic one at that. But the content these parrots/models produce does not come from actual thought or understanding. It comes from recombining enormous amounts of text data in a highly probable, realistic-sounding way.
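A deliberately tiny sketch of that idea, with a bigram lookup table standing in for the real thing: it learns which word tends to follow which in a corpus, then samples probable continuations. Real LLMs do this with neural networks trained on billions of documents, but either way the output is plausible pattern-matching with no understanding behind it.

```python
import random
from collections import defaultdict

# A toy "stochastic parrot": learn observed word-to-word transitions,
# then generate text by sampling likely continuations.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)  # record every observed continuation

def parrot(start: str, length: int = 6) -> str:
    word, out = start, [start]
    for _ in range(length):
        if word not in follows:
            break  # dead end: the corpus never continues past this word
        word = random.choice(follows[word])  # pick a probable next word
        out.append(word)
    return " ".join(out)

print(parrot("the"))  # plausible-sounding, zero comprehension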

AI ‘hallucination’ is therefore a misnomer, as it is not like a real hallucination in humans (which comes from mental shortcuts, generalizations, expectations, or physical causes). This distortion is the basis of how and why the…



Written by Product Alpaca

Thoughts on product, tech, UX and everything in between.
