Wondering if modern LLMs like GPT-4, Claude Sonnet, and Llama 3 are closer to human intelligence or to a next-word predictor. Also not sure if this graph is the right way to visualize it.

  • That’s literally how LLMs work; they quite literally are just next-word predictors. There is zero intelligence to them.

    It’s literally a loop: while the token is not “stop”, predict the next token (see the sketch below).

    It’s just that they are pretty good at predicting the next token, so it feels like intelligence.

    So on your graph, it would be a vertical line at 0.
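
    A minimal sketch of that loop, assuming Hugging Face’s transformers library with GPT-2 as a small stand-in model (the prompt and the 20-token cap are just illustration, not anything the big commercial models actually run):

    ```python
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    # Start from a prompt and keep appending the model's single most likely next token.
    input_ids = tokenizer("The capital of France is", return_tensors="pt").input_ids

    with torch.no_grad():
        for _ in range(20):  # cap the loop so it always terminates
            logits = model(input_ids).logits   # a score for every token in the vocabulary
            next_id = logits[0, -1].argmax()   # greedy "predict next token"
            if next_id.item() == tokenizer.eos_token_id:
                break                          # the "stop" token ends generation
            input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(input_ids[0]))
    ```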

    • This is true if you’re describing a pure LLM, like GPT-3.

      However, systems like Claude, GPT-4o and o1 are far from just a single LLM; they are a blend of tailored LLMs, machine learning, and some old-fashioned code to weave it all together (see the sketch at the end of this comment).

      OP does ask about “modern LLMs”, so technically you are right, but I believe they meant the more advanced “products”.

      Though I would not be able to actually answer OP’s question; AI is hard to directly compare with a human.

      In most ways it’s embarrassingly stupid; in others it has already surpassed us.
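
      To make the “blend” point concrete, here is a rough, purely hypothetical sketch of how plain code, a routing rule and a model call might be woven together in such a product; the llm() stub and the routing regex are invented for illustration, since the real pipelines behind Claude, GPT-4o and o1 are not public:

      ```python
      import re

      def llm(prompt: str) -> str:
          """Stand-in for a call to a hosted language model (hypothetical)."""
          return f"<model output for: {prompt!r}>"

      def route(user_input: str) -> str:
          """Old-fashioned hand-written code deciding which path to take."""
          if re.search(r"\d+\s*[-+*/]\s*\d+", user_input):
              return "calculator"
          return "chat"

      def handle(user_input: str) -> str:
          if route(user_input) == "calculator":
              # Deterministic code path, no model involved at all.
              expr = re.search(r"\d+\s*[-+*/]\s*\d+", user_input).group()
              return str(eval(expr))  # toy example only; never eval untrusted input
          # Otherwise fall back to the language model and post-process its output.
          return llm(f"Answer helpfully: {user_input}").strip()

      print(handle("what is 2 + 2"))   # handled by plain code
      print(handle("tell me a joke"))  # handled by the model stub
      ```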