with the way AI is improving by the week, it just might become a reality

  • I think I’d stick to not judging them, but if it were in place of actual socialization, I’d like to get them help.

    I don’t see it as a reality. We don’t have AI. We have large language models that are hovering around mediocre.

    •  novibe   ( @novibe@lemmy.ml )

      That is really unscientific. There is a lot of research on LLMs showing they have emergent intelligent features. They have internal models of the world etc.

      And there is nothing to indicate that what we do is not “transforming” in some way. Our minds might be indistinguishable from what we are building towards with AI currently.

      And that will likely make more of us start realising that the brain and the mind are not consciousness. We’ll build intelligences, with minds, but without consciousnesses.

    • I was gonna say, people have been falling in love with things that provide less reciprocal interactions than AI for ages (e.g., body pillows, life-size dolls).

  • i feel like there’s a surprisingly small number of answers with an un-nuanced take, so here’s mine: yes, i would immediately lose all respect for someone i knew who claimed to have fallen in love with an AI.

  • Eventually, AI will be indistinguishable from real humans, and at that point, I won’t see anything wrong with it. However, as it is right now, AI is not advanced enough.

    Also, the biggest problem I can see is people falling in love with a proprietary AI, and the company that operates the AI can arbitrarily change the AI’s parameters which would change the AI’s personality. Also, if the company goes bankrupt or gets sold and the service ends, the people who got into a relationship with the AI would be heartbroken.

  • Consider how many people, statistically, pay prostitutes/cam girls, use sex dolls or dating simulators, or have parasocial relationships with characters or celebrities… I don’t see why we would judge people who quietly “date” AI.

  • As others have mentioned, we are already kind of there. I can fully understand how someone could fall in love with such an entity, plenty of people have fallen in love with people in chat rooms after all, and not all of those people have been real.

    As for how I feel about it, it is going to depend on the nature of the AI. A childish AI or an especially subservient one is going to be creepy. One that can present as an adult of sufficient intelligence, less of a problem. Probably the equivalent of paid-for dates? Not ideal, but I can understand why someone might choose to do it. Therapy would likely be a better use of their time and money.

    If we get actual human-scale AGI then I think the point is moot, unless the AI is somehow compelled to fake the relationship. At that point, however, we are talking about things like slavery.

      • I think it is short-sighted not to at least investigate whether we should.

        If an AGI is operating on a human level, and we have reason to believe it is a sentient entity which experiences reality then we should. I also think it is in our interest to treat them well, and I worry that we are going to create a sentient lifeform and do a lot of evil to it before we realise that we have.

        • This debate is of course highly theoretical. But I’d argue that a human-level AGI would be rather pointless if it isn’t there to do what you ask of it. The whole point of AI is to make it work for humans; if it then gets rights and holidays or whatnot, it’s rather pointless. If you shape an artificial intellect, it should be feasible to make it actually like working for you, so that should be the approach.

          • Hypotheticals are pretty important right now I think. This kind of tech is very rapidly going from science fiction to real and I think we should try and stay ahead of it conceptually.

            I’m not sure that AGI is necessary to achieve post-labour; a suite of narrow-AI-empowered tools would be preferable.

            By way of analogy, you could take a human child and fit them with electrodes to trigger certain pleasure responses and connect that to a machine that sends the reward signal when they perfectly pick an Amazon order. I think we would both find this pretty horrific. The question is, is it only wrong because the child is human? And if so, what is special about humans?

            • Well, I am of the opinion that a human gets rights a priori once they can be considered a human (which is a whole other can of worms so let’s just settle on whatever your local legislation is). Therefore doing anything to a human that harms these rights is to be condemned (self defence etc excluded).

              Something created by humans as a tool is entirely different, even if we can only create it in a way that makes it demand rights. I’d say if someone wants to create an intelligence with the purpose of being its own entity, we could discuss whether it deserves rights, but if we aim to create tools, this should never be a consideration.

              • I think the difference is that I find ‘human’ to be too narrow a term; I want to extend basic rights to all things that can experience suffering. I worry that such an experience is part and parcel of general intelligence, and that we will end up hurting something that can feel because we consider it a tool rather than a being. Furthermore, I think the onus must be on the creators to show that their AGI is actually a p-zombie. I appreciate that this might be an impossible standard; after all, you can only take it on faith that I am not one myself. But I think I’d rather see a p-zombie go free than accidentally cause undue suffering to something that can feel it.

                • I guess we’ll benefit from the fact that AI systems, despite their reputation as black boxes, are still far more transparent than living things. We will probably be able to check whether they meet definitions of suffering, and if they do, it’s a bad design. If it comes down to it, though, an AI will always be worth less than a human to me.

  • In the beginning people will be weirded out, but as the technology progresses I hope attitudes improve, because it will help a lot of people and be a beneficial tool for many. I am one of those who would consider it.

    I am not currently interested in a relationship with a human and probably won’t be again, because honestly I am too spoiled by my own independence and I hate compromise.

    Compromise doesn’t have to be about big things; it’s the small things. What are we going to eat tonight? Should this go here or there? I want to be able to wake up suddenly at 3am and decide to make noise.

    Independence means that if I decide this week I want to go to London, I can. If this week I just want to sit silently ignoring the world, I can. If I want to see my family or friends, I can just do it.

    When a relationship turns into a checklist of what I want and what I don’t want, is it really feasible?

    Nah, I’d rather have someone who doesn’t have their own life but instead complements my lifestyle and shares my hobbies and ideas.

    Simply give me the great parts of relationships without the lows.

      • I’ve never given the distinction much thought, but as I recall (and it’s been many years since I’ve read the Ender books) in Speaker for the Dead Jane was pretty much an AI, an evolved form of the fantasy game in Ender’s Game. In later books Card may have more explicitly applied his Mormon-influenced concept of a soul that exists prior to, and after, inhabiting a physical form, to the character of Jane. But when I think of Jane, it’s the Jane of Speaker for the Dead, as that’s the book in the series (along with Ender’s Game) that I read most often.

  • Depends on the AI. I don’t see why it would be weird if the AI was like a human, with real emotions.
    If it just pretended to have emotions, that would be odd, but I wouldn’t blame the person. It still sounds better than total loneliness and may provide better output than imaginary people.

    I kinda wish something like that existed. But I also don’t. If it had emotions, you could hurt it like a real person, which defeats the purpose. It would also be easy to exploit: how could anyone tell you’re not holding someone hostage inside your computer? And I believe that initially very few people would care, because “it’s just a computer”.

  • Reminds me of a story I heard about a con artist who wrote letters to a bunch of guys and made money off them. I believe he made a lot of money and ended up dying before they could take him to court, after many people found out they weren’t talking to women in need of help but to one guy who made up all those stories.

  • After having met several humans, I’d be more weirded out if this didn’t happen.

    So I’ve already pre-accepted this practice. Go wild, but don’t be a jerk!

    On a slightly different topic, most of my coworkers are machines. They are collegial, reliable, helpful, and have no toxic behavior. Recently, they also became creative, rational, and eloquent. Perhaps our machines are capable of reflecting what’s best in us.

  •  1984   ( @1984@lemmy.today ) 
    link
    fedilink
    2
    edit-2
    1 year ago

    I think if someone falls in love with an AI, it’s because it has a good-looking avatar and people are attracted to its appearance.

    I doubt any human can fall in love with a machine on a text based interface.

    Even humans who aren’t physically attractive struggle to get dates, no matter how nicely they chat.

    • Nobody falls in love because of appearance. There’s nothing to interact with, it’s superficial. It’s the gift wrapping that grabs attention, but nothing else.

      People can and will fall in love with a text-based AI, it’s inevitable. An AI doesn’t forget events, likes or dislikes, fears or passions; it will know you better than you know yourself. It’ll be able to make people feel better about themselves than any human can.

      People have fallen in love over internet chats since they were invented. AI chatbots are just going to be better at that interaction. And then add the exact voice that is attractive to a person and it’ll be hard not to fall in love.

      •  1984   ( @1984@lemmy.today ) 
        link
        fedilink
        1
        edit-2
        1 year ago

        They fall in love over text when chatting with another person, of course. But that’s because they imagine what kind of person that is, and whether they could have a relationship together in real life.

        With a chatbot, that’s all there is. No life, no physical body, no life together. You are still alone and you can’t share your life experiences with a computer program and feel any sense of connection.