manitcor (@manitcor@lemmy.intai.tech) • Piracy: ꜱᴀɪʟ ᴛʜᴇ ʜɪɢʜ ꜱᴇᴀꜱ@lemmy.dbzer0.com • 1 year ago
OpenAI’s ChatGPT and Sam Altman are in massive trouble. OpenAI is being sued in the US for illegally using content from the internet to train its LLM, or large language model.
Computer numerical simulation is a different kind of shell game from AI. The only reason it’s done is that most differential equations aren’t solvable in the ordinary sense, so they’re discretized and approximated instead. Zeno’s paradox for the modern world. When the discretization doesn’t work out, the schemes get hacked until the results look right. This is also why they always want more FLOPS: the belief is that if you just discretize finely enough, you’ll eventually reach the infinite (or the infinitesimal).
This also should not fill you with hope for general AI.
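To make the discretization point concrete, here’s a minimal sketch (my own toy example, not anything from the article): forward Euler applied to dy/dt = -y, whose exact solution exp(-t) lets you watch the approximation error shrink as the step size shrinks.

```python
# Toy example of discretizing a differential equation with forward Euler.
# dy/dt = -y, y(0) = 1 has the exact solution y(t) = exp(-t), so we can see
# how the discrete approximation converges as the step size shrinks.
import math

def euler(step: float, t_end: float = 1.0) -> float:
    """Approximate y(t_end) for dy/dt = -y, y(0) = 1, using a fixed step."""
    y = 1.0
    for _ in range(round(t_end / step)):
        y += step * (-y)  # y_{n+1} = y_n + h * f(y_n)
    return y

exact = math.exp(-1.0)
for h in (0.1, 0.01, 0.001):
    approx = euler(h)
    print(f"h={h:<6} approx={approx:.6f} error={abs(approx - exact):.2e}")
```

The error falls roughly in proportion to the step size, which is exactly the “discretize finely enough” trade-off described above: smaller steps mean more arithmetic, hence the appetite for FLOPS.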
The same argument could be made for sound, and yet digital computers have no problem approximating it with enough precision to make it indistinguishable from the original.
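A minimal sketch of that sound analogy (my own illustration, with arbitrary numbers): a tone sampled above the Nyquist rate can be rebuilt between its samples with Whittaker–Shannon (sinc) interpolation.

```python
# Toy illustration of the sound analogy: a tone sampled above the Nyquist rate
# can be reconstructed between its samples with Whittaker-Shannon (sinc)
# interpolation. The sample rate and frequency here are just illustrative.
import numpy as np

fs = 44_100.0          # sample rate in Hz, well above 2 * 440 Hz
f0 = 440.0             # test tone ("concert A")
n = np.arange(2048)    # sample indices
samples = np.sin(2 * np.pi * f0 * n / fs)

def reconstruct(t: float) -> float:
    """Rebuild the continuous signal at time t from the discrete samples."""
    return float(np.sum(samples * np.sinc(fs * t - n)))

t_test = 1000.5 / fs   # a point halfway between two samples
true_value = np.sin(2 * np.pi * f0 * t_test)
err = abs(reconstruct(t_test) - true_value)
print(f"reconstruction error: {err:.2e}")  # small; limited only by truncating the sum
```

For a band-limited signal the discrete samples carry essentially all the information, so discretization itself isn’t the hack.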
Discretization works fine and is not the problem. The problem is that the “AI” everyone’s so hyped up about is nothing more than a language model. It has no understanding of what it’s talking about because it has not been taught or allowed to experience anything other than language. Humans use language to express ideas; language-model AIs have no ideas to express because they have no life experience from which to form any ideas.
That doesn’t mean AGI is impossible. It is likely infeasible on present-day hardware, but that’s not the same thing as being impossible.