
🧬 OpenAI explained why AI hallucinates

OpenAI researchers found that AI hallucinations are not a defect but a direct consequence of how the technology works.

It all starts at the pretraining stage: even if the training data contained no errors, the model would still invent facts.

🤔 Why?

Models generate answers based on statistical patterns such as grammar, common facts, and logic. But when asked for a specific fact (especially an obscure one), the model treats it as an isolated data point rather than part of a bigger picture. So instead of literally retrieving the fact, it produces the most probable-sounding answer.
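To make that concrete, here is a minimal sketch in Python (the person, the years, and the probabilities are all invented for illustration): a model that decodes greedily emits whichever continuation sounds most probable, whether or not it is the true fact.

```python
# Toy illustration: greedy decoding returns the most probable-sounding
# continuation, not a retrieved fact. All values below are invented.

# Hypothetical next-token distribution after the prompt
# "Dr. Evelyn Hartman was born in ..."
# For an obscure fact, probability mass spreads across plausible years.
next_token_probs = {
    "1952": 0.31,  # plausible-sounding, wrong
    "1948": 0.27,  # plausible-sounding, wrong
    "1961": 0.22,  # the actual year, seen too rarely in training
    "1975": 0.20,
}

# Greedy decoding: pick the argmax of the distribution.
answer = max(next_token_probs, key=next_token_probs.get)
print(answer)  # -> "1952": fluent and confident, but wrong
```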

Even reinforcement learning, which penalizes incorrect answers and rewards correct ones, does not solve the problem. The model ends up like a student taking an exam: unsure of the real answer, but confidently spouting nonsense in the hope of guessing right.

That is why the researchers suggest modifying both training and evaluation: penalize confident wrong answers more harshly than an honest "I don't know." This would encourage the model to stay quiet instead of hallucinating.
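A rough back-of-the-envelope sketch of that incentive (the scores and the penalty value are illustrative assumptions, not OpenAI's exact scheme): under accuracy-only grading, guessing never scores worse than abstaining, while a penalty for confident errors makes "I don't know" the better move below a confidence threshold.

```python
# Illustrative comparison of two grading schemes (the numbers are
# assumptions for this sketch, not OpenAI's exact proposal).

def expected_score(p_correct: float, wrong_penalty: float) -> float:
    """Expected score of answering, given the model's chance of being right.
    A correct answer scores +1, a wrong one scores -wrong_penalty."""
    return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

ABSTAIN_SCORE = 0.0  # "I don't know" earns zero in both schemes

for p in (0.9, 0.5, 0.2):
    accuracy_only = expected_score(p, wrong_penalty=0.0)  # old scheme
    penalized = expected_score(p, wrong_penalty=2.0)      # proposed scheme
    print(f"confidence={p:.1f}  "
          f"accuracy-only: guess={accuracy_only:+.2f}  |  "
          f"penalized: guess={penalized:+.2f}  "
          f"(abstain={ABSTAIN_SCORE:+.2f} in both)")

# Accuracy-only: guessing is never worse than abstaining, so models learn
# to always guess. Penalized: with penalty 2, guessing only pays off when
# p > 2/3, so an uncertain model maximizes its score by abstaining.
```

In other words, changing the scoring rule changes the optimal policy: against the penalized rule, the same model earns the most by saying "I don't know" whenever it is genuinely unsure.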

This strategy could reduce hallucinations and make AI answers more reliable, which matters more in real-world use than simply producing more answers.

#science #OpenAI @hiaimediaen
