cross-posted from: https://lemmy.ml/post/20858435

Will AI soon surpass the human brain? If you ask employees at OpenAI, Google DeepMind and other large tech companies, it is inevitable. However, researchers at Radboud University and other institutes show new proof that those claims are overblown and unlikely to ever come to fruition. Their findings are published in Computational Brain & Behavior today.

  • @ContrarianTrail @JRepin well I guess somebody would first need to clearly define what “AGI” is. Currently it’s just “whatever the techbro hypers want it to be”.

    And then there’s the matter (ha!) of your assumption that we understand all the laws of physics that “matter obeys”, or that we ever reasonably could. That’s a pretty strong assumption: individual human minds are pretty limited, communication adds overhead, and we might simply reach a point where we’re stuck.

    • A chess engine is intelligent at exactly one thing: playing chess. That narrow intelligence doesn’t transfer to any other skill, even if the engine is superhuman at its one task, much like a calculator is at arithmetic.

      Humans, on the other hand, are generally intelligent. We can perform a variety of cognitive tasks that are unrelated to each other, with our only limitations being the physical ones of our “meat computer.”

      Artificial General Intelligence (AGI) would be the artificial equivalent of human cognitive capabilities, but without the brain’s limitations. It should be noted that AGI is not synonymous with AI: AGI is a type of AI, but not all AI is generally intelligent. The next step beyond AGI would be Artificial Super Intelligence (ASI), which would be not only generally intelligent but superhumanly so. This is what the “AI doomers” are concerned about.