• Rhaedas (@Rhaedas@kbin.social)

Not even stupid, just badly trained for that purpose. It’s no different from an LLM asked to write code that gets most of it right but flubs a subroutine. Misalignment doesn’t imply bad or evil; it’s just pursuing what it thinks the goal really is while we’re ignorant of the results.

      • Yes, I know better, but ask a kid that and perhaps they’d do it. An LLM isn’t thinking, though; it’s repeating its training through probabilities. And btw, yes, humans can be misaligned with each other, holding their own goals underneath common ones. Humans think though…well, most of them.