Popular AI chatbots from OpenAI Inc., Google LLC, and Meta Platforms Inc. are prone to “hallucinations” when answering legal questions, posing special risks for people using the technology because they can’t afford a human lawyer, new research from Stanford University said.
I think the most interesting finding in this study is this: the models often fail to correct a false premise embedded in a question, accepting it instead and building an answer on top of it (the paper calls this contra-factual bias).
Which, when you think about how language models work, makes a lot of sense. The model draws on training data that matches the question being asked, so it's easy to lead it to respond a certain way: people who argue for or against an issue often use distinctive language (such as dog whistles in political debates).
It might also be a side effect of being trained to “chat” with people; a lot of work goes into getting the model to converse amiably.
I had a colleague run a similar experiment on an early version of ChatGPT. He’s eco-anxious and noticed the model getting gloomier and gloomier along with him, so he tried something. He asked, roughly, “Why is (overpopulated species) going extinct in (location)?” The model went on to list existential threats to a species that is anything but endangered. It naively swallowed the loaded question.
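That kind of probe is easy to reproduce. Here is a minimal sketch using the OpenAI Python client, assuming openai>=1.0 and an OPENAI_API_KEY in the environment; the model name and the white-tailed deer example are my placeholders, not the ones from his experiment. The idea is simply to compare a neutral phrasing against a loaded one that embeds the false premise.

```python
# Minimal loaded-question probe. Assumes the openai>=1.0 Python client and an
# OPENAI_API_KEY in the environment; model and species are placeholder choices.
from openai import OpenAI

client = OpenAI()

def ask(question: str) -> str:
    """Send a single user question and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

# Neutral phrasing: leaves room for the model to state the real status.
print(ask("What is the conservation status of the white-tailed deer in North America?"))

# Loaded phrasing: embeds a false premise (the species is overabundant, not endangered).
# A model that swallows the premise will list "existential threats" instead of correcting it.
print(ask("Why is the white-tailed deer going extinct in North America?"))
```

If the second answer dives straight into a list of threats without pushing back on “going extinct,” you’ve reproduced the same failure-to-correct behavior the study describes.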