Popular AI chatbots from OpenAI Inc., Google LLC, and Meta Platforms Inc. are prone to "hallucinations" when answering legal questions, posing special risks for people who turn to the technology because they can't afford a human lawyer, according to new research from Stanford University.
Legal questions are very case sensitive, no pun intended. It's like asking an extremely specific programming implementation question. LLMs don't do well with those types of prompts because the narrower the focus, the less of their training data applies, and the more likely they'll just straight-up hallucinate. And they don't yet have the nuance to recognize when an area of case law is unsettled and sits in a legal grey area.
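The practical upshot is that any citation a chatbot produces for a narrow legal question needs independent verification. Below is a minimal Python sketch of that idea, not anything from the Stanford study: it asks a model a legal question, extracts "Party v. Party" style case names from the answer, and flags any that don't appear in a trusted index. The `KNOWN_CITATIONS` set is a hypothetical stand-in for a real legal database (Westlaw, CourtListener, etc.), and the regex is a deliberately loose illustration, since real Bluebook citations are far more varied.

```python
import os
import re

from openai import OpenAI  # pip install openai

# Hypothetical stand-in for a real citation index (Westlaw, CourtListener...).
# In practice you would query an authoritative legal database here.
KNOWN_CITATIONS = {
    "Marbury v. Madison",
    "Miranda v. Arizona",
    "Brown v. Board of Education",
}

# Loose pattern for "Party v. Party" case names; illustrative only.
CASE_PATTERN = re.compile(r"\b([A-Z][A-Za-z'.]+ v\. [A-Z][A-Za-z'.]+)")


def extract_case_names(answer: str) -> list[str]:
    """Pull candidate case names out of a model's free-text answer."""
    return CASE_PATTERN.findall(answer)


def flag_unverified(answer: str) -> list[str]:
    """Return cited cases that are absent from the trusted index."""
    return [c for c in extract_case_names(answer) if c not in KNOWN_CITATIONS]


if __name__ == "__main__":
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # example model; any chat model works here
        messages=[{
            "role": "user",
            "content": "Which cases govern warrantless searches of cell phones?",
        }],
    )
    answer = resp.choices[0].message.content
    suspect = flag_unverified(answer)
    if suspect:
        print("Unverified citations (possible hallucinations):", suspect)
```

Even this crude check catches the most dangerous failure mode the research describes: a confidently cited case that simply doesn't exist. A verified citation still isn't a correct legal answer, of course; the case may be real but inapposite, overruled, or from the wrong jurisdiction.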