• Yeah, people don't seem to get that LLMs cannot consider the meaning or logic of the answers they give. They're just assembling bits of language into whatever patterns are most likely to come next based on their training data (a rough sketch of that loop is below).

    The technology underlying LLMs is fundamentally incapable of weighing choices or doing critical thinking. Maybe new types of models will be able to do that, but those models don't exist yet.
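
    To make that concrete, here is a minimal sketch of what generation actually is under the hood: a loop that scores every token in the vocabulary and appends whichever scores highest, over and over. (This assumes the Hugging Face transformers library and the small gpt2 checkpoint purely for illustration; real chat models add sampling, instruction tuning, etc., but the core next-token loop is the same.)

        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        # Illustrative only: a small GPT-2 checkpoint stands in for "an LLM".
        tokenizer = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        ids = tokenizer("The capital of France is", return_tensors="pt").input_ids
        for _ in range(5):                                 # generate 5 more tokens
            with torch.no_grad():
                logits = model(ids).logits                 # a score for every vocabulary token
            next_id = logits[0, -1].argmax()               # pick the statistically likeliest one
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

        print(tokenizer.decode(ids[0]))

    Notice there is no step anywhere in that loop where the model checks whether the continuation is true or makes sense; "likeliest next token" is the entire decision procedure.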