• Except that it is, categorically, different. AI doesn’t “learn”; it builds statistical associations between the data it samples. Incorporating data from the source material is how these algorithms work: they then reproduce pieces of it with permutations applied.

    LLMs are easier to explain, so I’ll use one as an example. The idea is that when particular words appear in a particular order in the training data, that ordering is given higher weight in the model, as a statistical association between words that follow each other in sequence. When you ask an LLM to “write like Author X”, it can do so (partially) by drawing on the weights it derived from that author’s works.
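
    To make “weighting word sequences” concrete, here’s a toy Python sketch: a bigram counter, which is a drastically simplified stand-in for the attention-based weighting a real LLM actually learns. The corpus and function names are invented for illustration.

        from collections import Counter, defaultdict

        def train(corpus):
            # Count how often each word follows each other word.
            weights = defaultdict(Counter)
            for sentence in corpus:
                tokens = sentence.lower().split()
                for prev, nxt in zip(tokens, tokens[1:]):
                    weights[prev][nxt] += 1
            return weights

        def next_word(weights, prev):
            # Pick the highest-weighted continuation seen in training.
            followers = weights.get(prev)
            return followers.most_common(1)[0][0] if followers else None

        # Toy "Author X" corpus: the model can only echo orderings it has seen.
        corpus = ["it was the best of times", "it was the worst of times"]
        print(next_word(train(corpus), "was"))   # -> "the"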

    This is fundamentally different from how our brains learn and function. We can’t hold databases of billions of pieces of information in our heads and compare them all in real time. The two aren’t really comparable at all, except as an inaccurate metaphor.

    Edit: Too many replies to respond to them all, but our brains don’t do linear algebra on matrices with billions of elements. Our brains work in fundamentally different ways, and conflating the two is a gross oversimplification. That was my entire point.
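
    (For a sense of what that linear algebra looks like, here’s a rough numpy sketch of a single feed-forward step. The sizes are made-up toy numbers; real models chain thousands of multiplications like this over matrices holding billions of parameters in total.)

        import numpy as np

        d_model, d_hidden = 512, 2048            # toy sizes; real models are far larger
        x = np.random.randn(d_model)             # one token's activation vector
        W1 = np.random.randn(d_hidden, d_model)  # ~1M weights in this one matrix
        W2 = np.random.randn(d_model, d_hidden)

        h = np.maximum(W1 @ x, 0)                # matrix multiply + ReLU
        y = W2 @ h                               # multiply back down to model width
        print(y.shape)                           # (512,)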

    • You’re right. It behaves exactly like we do. And yes, it is at a much grander scale.

      Is something ethically, legally, or morally wrong with a computer that does what we do, but does it better?