A riff - My analogy for LLMs
Some days LLMs impress me (floor me, even); other days they seem like a neat but flawed party trick. It's been hard to wrap my head around. The best analogy I've come up with is that an LLM is a lossy compression of the internet, like a JPEG is of an image. When you zoom in on a JPEG and smooth the pixels, everything becomes blurry and indistinct; but if you upscale it with an AI algorithm, it becomes sharp again, with details that were never in the original data. LLMs, I've noticed, are very similar: great for high-level concepts, but the more you drill down, the more it's like zooming in on that JPEG. That's where the hallucinations live. The LLM is trying to "upscale" the data for you, yet it's not at all obvious where the border lies between well-represented information and hallucination. In other words, when are you zooming in too much?
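If you want to poke at the compression half of the analogy yourself, here's a minimal Python sketch (assuming Pillow and NumPy are installed, and using a stand-in file name, photo.png) that round-trips an image through a very low-quality JPEG and measures how much detail gets thrown away. Any upscaler that "restores" that detail has to invent it, which is the hallucination in the analogy.

```python
from io import BytesIO

import numpy as np
from PIL import Image

# Hypothetical input: any RGB image on disk will do.
original = Image.open("photo.png").convert("RGB")

# Round-trip through an aggressively lossy JPEG (quality=10).
buffer = BytesIO()
original.save(buffer, format="JPEG", quality=10)
buffer.seek(0)
compressed = Image.open(buffer).convert("RGB")

# Compare pixel-by-pixel to see how much information was discarded.
diff = np.abs(
    np.asarray(original, dtype=np.int16) - np.asarray(compressed, dtype=np.int16)
)
print(f"Mean per-pixel error: {diff.mean():.2f} (0 would be lossless)")
print(f"Max per-pixel error:  {diff.max()}")
```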
What do you think? Is this a good analogy? Have you had frustrating experiences with hallucinations? Has an LLM done anything that just floored you?