“We’ve learned to make machines that can mindlessly generate text. But we haven’t learned how to stop imagining the mind behind it.”

  • The last point - “We can’t have people eager to separate ‘human, the biological category, from a person or a unit worthy of moral respect’” - is one where I understand where they’re coming from, but I’m very divided, perhaps because my academic background involves animal rights and ethics.

    The question of analogising animals and humans is tricky, with a very long history - many people have a kneejerk reaction against any analogy between nonhuman animals and (especially marginalised) humans, often for good reasons. The strongest is the history of oppression involving comparisons of marginalised groups to animals, comparisons specifically meant to dehumanise and contribute to further oppression, genocide, etc.

    But to my mind, the analogies aren’t inherently wrong, although they’re often made clumsily and without care. There’s a difference in approach that entirely colours people’s responses: whether they think the analogy is trying to drag humans down, or to bring nonhuman animals up to having moral status. The latter is, imo, a worthy endeavour, because I do think we should to some extent separate “human, the biological category, from a person or a unit worthy of moral respect.” I have moral respect for my dog, which is why I don’t hurt her - it’s because of her own moral worth, not some indirect moral worth of the kind suggested by Kant and various other philosophers.

    I don’t think the debate is the same with AI, at least not yet, and I think it probably shouldn’t be, at least not yet. I’m also somewhat sceptical of the motivations of people who make these analogies. But that doesn’t mean there’ll never be a place for them - and if that place arises, it’s just going to need to be handled with care, the same way animal rights needs to be done with care.

    • Yeah, I think trying to draw strict lines between what ‘deserves’ moral worth and what doesn’t is always going to be tricky (and outright impossible haha), but I’m of the mind that we may be reaching a point with AI where maybe we should just… play it safe? So to speak? If I’m interpreting you correctly, you’re saying AI may not be at the point, and might never be, where we intrinsically value it as a moral being in the same way we do animals?

      Like, maybe while the technology develops, we would be better served ethically to just assume that these AIs have a bit more internal space than we figure, until we can rule it out - until we even have the tools to rule it out.

      • To some extent, yeah. Especially if we’re in a situation where there’s no massive benefit to treating the AI ‘unethically’. I personally don’t think AI is at a place where it’s got moral value yet, and idk if it ever will be. But I also don’t know enough to trust that I’ll be accurate in my assessment as it grows more and more complex.

        I should also flag that I’m very much a virtue ethicist, and an overall perspective I have on our actions and relations in general, including but not exclusively our interactions with AI, is that we should strive to act in a way that cultivates virtue in ourselves (slash act as a virtuous person would). To use an example from the article: having sex with a robot AI who/that keeps screaming ‘no!’ is not how a virtuous person would act, nor is it an action that’ll cultivate virtue in ourselves. Quite the opposite, probably. So it’s not the right way to act under virtue ethics, imo.

        This is similar to Kant’s perspective on nonhuman animals (although he wasn’t a virtue ethicist, nor do I agree with him re. nonhuman animals because of their sentience):

        “If a man shoots his dog because the animal is no longer capable of service, he does not fail in his duty to the dog, for the dog cannot judge, but his act is inhuman and damages in himself that humanity which it is his duty to show towards mankind. If he is not to stifle his human feelings, he must practice kindness towards animals, for he who is cruel to animals becomes hard also in his dealings with men.”

        • I personally think it might already be at a point where it’s deserving of some moral value, based on some preliminary testing and theory-of-intelligence stuff, which also leads me to believe intelligence is fairly convergent in general anyway. Which is to say, LLMs are one subset of intelligence, just as various components of the human brain are other subsets of intelligence. But experimentation on that is ongoing; theoretical neuroscience is a very fresh field haha.

          I don’t have any particular philosophical ideal like that - more a focus on not increasing suffering (but not just in a utilitarian way lol). But I do think that when it comes to something with no power to control how we treat it, like an AI locked away on a server, it’s probably best to generally be kind - not for any increase in virtue, but because we simply can’t know everything, especially when it comes to ethical questions. So, in the interest of having an ethical society, we should just default to being ethical, so as to not unintentionally cause suffering, to be simplistic. It’s fun how we come to the same ideal from different priors.