There is huge excitement about ChatGPT and other large generative language models that produce fluent, human-like text in English and other human languages. But these models have a major drawback: their output can be factually incorrect (hallucination) and can leave out key information (omission).

In our chapter for The Oxford Handbook of Lying, we look at hallucinations, omissions, and other aspects of “lying” in computer-generated texts. We conclude that these problems are probably inevitable.

  • My issue is the use of humanizing language, which carries a lot of baggage and implied assumptions that are misleading. The bottom line is that these systems sometimes give wild results, both by design and because of the data you put in, and even at their best they are rather dumb. They may be useful, but users have to know their limitations and verify their results.