Letters
When Models Are Confident — and Wrong: Language models like ChatGPT need a way to express degrees of confidence.
One of the dangers of large language models (LLMs) is that they can confidently make assertions that are blatantly false. This raises concerns that they will flood the world with misinformation. If they could express an appropriate degree of confidence in their claims, they would be less likely to mislead.