I have a theory that there's a kind of doublethink going on around AI 'hallucination': specifically, that the only meaningful difference between imagination and what people are calling hallucination is whether or not the outcome is useful.
Complete lay-person viewpoint here of course, outside of toying with some neural networks back in the day.
Most hallucinations I’ve seen “make sense” or “look right”. I guess that’s a certain type of creativity. And it’s not like common-sense ideas have never been profitable.
I think the difference is more to do with the fact that 'hallucination' is passed off as reality (whether it's ChatGPT confidently telling you that Abraham Lincoln had a MySpace page, or that weird guy on the train telling you that there are spiders in the seat cushions).
People are usually able to distinguish between their imagined scenarios and the real world.