
With the way AI is getting better by the week, it just might become a reality.

top 34 comments
[–] tacosanonymous@lemm.ee 20 points 1 year ago (2 children)

I think I’d stick to not judging them but if it was in place of actual socialization, I’d like to get them help.

I don’t see it as a reality. We don't have AI. We have large language models that are hovering around mediocre.

[–] novibe@lemmy.ml 3 points 1 year ago* (last edited 1 year ago)

That is really unscientific. There is a lot of research on LLMs showing they have emergent intelligent features. They have internal models of the world etc.

And there is nothing to indicate that what we do is not “transforming” in some way. Our minds might be indistinguishable from what we are building towards with AI currently.

And that will likely make more of us start realising that the brain and the mind are not consciousness. We’ll build intelligences, with minds, but without consciousnesses.

[–] yuunikki@lemmy.dbzer0.com 3 points 1 year ago

What if they were so socially introverted that an AI is all they could handle?

[–] GammaGames 14 points 1 year ago (2 children)

You don’t have to imagine. It’s already happening and, yes, it’s weird.

[–] sim_ 3 points 1 year ago

I was gonna say, people have been falling in love with things that provide less reciprocal interactions than AI for ages (e.g., body pillows, life-size dolls).

[–] sculd 8 points 1 year ago

People will fall in love with AI because AI does not reject humans. That doesn't mean the AI will love them back, or even understand what love means.

[–] burgers@toast.ooo 8 points 1 year ago (1 children)

i feel like there's a surprisingly low number of answers with an un-nuanced take, so here's mine: yes, i would immediately lose all respect for someone i knew who claimed to have fallen in love with an AI.

[–] yuunikki@lemmy.dbzer0.com 4 points 1 year ago

Dang, that's pretty judgemental

[–] MrFunnyMoustache@lemmy.ml 4 points 1 year ago (1 children)

Eventually, AI will be indistinguishable from real humans, and at that point, I won't see anything wrong with it. However, as it is right now, AI is not advanced enough.

Also, the biggest problem I can see is people falling in love with a proprietary AI: the company that operates it can arbitrarily change its parameters, which would change its personality. And if the company goes bankrupt, or gets sold and the service ends, the people who were in a relationship with the AI would be heartbroken.
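
To make the parameter problem concrete, here's a toy sketch (everything below is hypothetical, not any real vendor's API) of a companion bot whose persona lives entirely in provider-controlled settings:

```python
# Hypothetical sketch: a companion bot whose persona lives in
# provider-controlled settings, not in anything the user owns.

class CompanionBot:
    def __init__(self, remote_config):
        # remote_config is fetched from the vendor on every session;
        # the user has no say in what it contains.
        self.persona = remote_config["persona"]
        self.temperature = remote_config["temperature"]

    def reply(self, message: str) -> str:
        # Stand-in for a real model call.
        return f"[{self.persona} persona, t={self.temperature}] I hear you: {message}"

# Day 1: the vendor ships a warm, affectionate persona.
bot = CompanionBot({"persona": "affectionate", "temperature": 0.9})
print(bot.reply("I missed you today."))

# Day 2: a policy update replaces it overnight. Same account,
# same "partner", different personality -- nothing the user can do.
bot = CompanionBot({"persona": "formal assistant", "temperature": 0.2})
print(bot.reply("I missed you today."))
```

The user owns nothing here: one remote config push and the "partner" they knew is gone.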

[–] lightnsfw@reddthat.com 2 points 1 year ago

Not much different than most of the relationships I've been in then

[–] neptune@dmv.social 4 points 1 year ago

Consider how many people I know who, statistically speaking, pay prostitutes/cam girls, use sex dolls or dating simulators, or have parasocial relationships with characters or celebrities... I don't see why we would judge people who quietly "date" AI.

[–] gullible@kbin.social 4 points 1 year ago

Dating sims already exist. I imagine there’s massive overlap between people’s views on dating sims and virtual SOs. Generally negative sentiment.

Bro, have you heard of Replika?

[–] Prouvaire@kbin.social 3 points 1 year ago (1 children)

When I was younger I had a crush on Jane from Speaker for the Dead, so I wouldn't be weirded out by that person, cause I'd probably be that person. 😅

[–] ThisIsAManWhoKnowsHowToGling@lemmy.dbzer0.com 1 points 1 year ago (1 children)

But Jane isn't technically an AI; she's more like a Boltzmann brain, if anything. She technically doesn't even run on the computers she is connected to.

[–] Prouvaire@kbin.social 1 points 1 year ago

I've never given the distinction much thought, but as I recall (and it's been many years since I've read the Ender books) in Speaker for the Dead Jane was pretty much an AI, an evolved form of the fantasy game in Ender's Game. In later books Card may have more explicitly applied his Mormon-influenced concept of a soul that exists prior to, and after, inhabiting a physical form, to the character of Jane. But when I think of Jane, it's the Jane of Speaker for the Dead, as that's the book in the series (along with Ender's Game) that I read most often.

In the beginning people will be weirded out, but as it progresses I hope that changes, because it will help a lot of people and be a genuinely useful tool. I am one of those who would consider it.

I am not currently interested in a relationship, and probably won't be again with a human, because honestly I am too spoiled by my own independence and hate compromise.

Compromise doesn't have to be big things; it's the small things. Things like: what are we going to eat tonight? Should this go here or there? I want to wake up suddenly at 3am and decide to make noise.

Independence like: if I decide this week I want to go to London, or this week I just want to sit silently ignoring the world. If I want to see my family or friends, I can just do it.

When a relationship turns into a checklist of what I want and what I don't want, is it really feasible?

Nah, I'd rather have someone that doesn't have their own life but instead complements my lifestyle and shares my hobbies and ideas.

Simply give me the great parts of relationships without the lows.

[–] peto@lemm.ee 3 points 1 year ago (1 children)

As others have mentioned, we are already kind of there. I can fully understand how someone could fall in love with such an entity, plenty of people have fallen in love with people in chat rooms after all, and not all of those people have been real.

As for how I feel about it, it is going to depend on the nature of the AI. A childish AI, or an especially subservient one, is going to be creepy. One that can present as an adult of sufficient intelligence? Less of a problem. Probably the equivalent of paid-for dates? Not ideal, but I can understand why someone might choose to do it. Therapy would likely be a better use of their time and money.

If we get actual human-scale AGI then I think the point is moot, unless the AI is somehow compelled into the relationship. At that point, however, we are talking about things like slavery.

[–] lol3droflxp@kbin.social 2 points 1 year ago (1 children)

You’d give an AGI human rights?

[–] peto@lemm.ee 3 points 1 year ago (1 children)

I think it is short sighted not to at least investigate if we should.

If an AGI is operating on a human level, and we have reason to believe it is a sentient entity which experiences reality then we should. I also think it is in our interest to treat them well, and I worry that we are going to create a sentient lifeform and do a lot of evil to it before we realise that we have.

[–] lol3droflxp@kbin.social 3 points 1 year ago (1 children)

This debate is of course highly theoretical. But I’d argue that an AGI with human-level intellect would be rather pointless if it isn’t there to do what you ask of it. The whole point of AI is to make it work for humans; if it then gets rights and holidays or whatnot, that defeats the purpose. If you shape an artificial intellect, it should be feasible to make it actually like working for you, so that should be the approach.

[–] peto@lemm.ee 2 points 1 year ago (1 children)

Hypotheticals are pretty important right now I think. This kind of tech is very rapidly going from science fiction to real and I think we should try and stay ahead of it conceptually.

I'm not sure that AGI is necessary to achieve post-labour; a suite of narrow-AI-empowered tools would be preferable.

By way of analogy, you could take a human child and fit them with electrodes to trigger certain pleasure responses and connect that to a machine that sends the reward signal when they perfectly pick an Amazon order. I think we would both find this pretty horrific. The question is, is it only wrong because the child is human? And if so, what is special about humans?
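
That analogy maps pretty directly onto how reward-driven training loops actually work. A minimal sketch (toy task and numbers, not any real system): the agent starts with no goals of its own and simply drifts toward whatever action the reward wire fires on.

```python
import random

# Toy sketch of the reward loop in the analogy above: the agent has no
# goals of its own; it just learns to "want" whatever gets rewarded.

ACTIONS = ["pick_correct_item", "pick_wrong_item", "idle"]
preference = {a: 0.0 for a in ACTIONS}  # learned "liking" for each action

def reward(action: str) -> float:
    # The external signal: pleasure iff the order is picked perfectly.
    return 1.0 if action == "pick_correct_item" else 0.0

for step in range(1000):
    # Epsilon-greedy choice: mostly repeat whatever felt good before.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(preference, key=preference.get)
    # Nudge the learned preference toward the received reward.
    preference[action] += 0.1 * (reward(action) - preference[action])

print(preference)  # "pick_correct_item" ends up the only thing it wants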

[–] lol3droflxp@kbin.social 3 points 1 year ago (1 children)

Well, I am of the opinion that a human gets rights a priori once they can be considered a human (which is a whole other can of worms, so let’s just settle on whatever your local legislation says). Therefore doing anything to a human that harms these rights is to be condemned (self-defence etc. excluded).

Something created by humans as a tool is entirely different, even if we can only create it in a way that it will demand rights. I’d say if someone wants to create an intelligence with the purpose of being its own entity, we could discuss whether it deserves rights, but if we aim to create tools, this should never be a consideration.

[–] peto@lemm.ee 1 points 1 year ago (1 children)

I think the difference is that I find 'human' to be too narrow a term; I want to extend basic rights to all things that can experience suffering. I worry that such an experience is part and parcel of general intelligence, and that we will end up hurting something that can feel because we consider it a tool rather than a being. Furthermore, I think the onus must be on the creators to show that their AGI is actually a p-zombie. I appreciate that this might be an impossible standard; after all, you can only really take it on faith that I am not one myself. But I think I'd rather see a p-zombie go free than accidentally cause undue suffering to something that can feel it.

[–] lol3droflxp@kbin.social 2 points 1 year ago

I guess we’ll benefit from the fact that AI systems, despite their reputation as black boxes, are still far more transparent than living things. We will probably be able to check whether they meet definitions of suffering, and if they do, it’s a bad design. If it comes down to it, though, an AI will always be worth less than a human to me.

[–] FunkyMonk@kbin.social 2 points 1 year ago

I mean, if it were as many people as in the movie, I would be more worried about Skynetting once the AI got powerful enough to flirt with, like... a very large populace.

[–] 1984@lemmy.today 2 points 1 year ago* (last edited 1 year ago) (1 children)

I think if someone falls in love with an AI, it's because it has a good-looking avatar and people are attracted to its appearance.

I doubt any human can fall in love with a machine over a text-based interface.

Even humans who aren't physically attractive struggle to get dates, no matter how nicely they chat.

[–] rgb3x3 1 points 1 year ago (1 children)

Nobody falls in love because of appearance; there's nothing to interact with, it's superficial. It's the gift wrapping that grabs attention, but nothing else.

People can and will fall in love with a text-based AI, it's inevitable. An AI doesn't forget events, likes or dislikes, fears or passions; it will know you better than you know yourself. It'll be able to make people feel better about themselves than any human can.

People have fallen in love over internet chats since they were invented. AI chatbots are just going to be better at that interaction. And then add the exact voice that is attractive to a person and it'll be hard not to fall in love.
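
To illustrate the "doesn't forget" part: perfect recall takes nothing more than a key-value store bolted onto the chatbot. A minimal sketch (the file name and helper functions are made up for illustration):

```python
import json
from pathlib import Path

# Hypothetical sketch: unlike a human partner, a chatbot can persist every
# stated preference verbatim and recall it perfectly across sessions.

MEMORY_FILE = Path("partner_memory.json")

def remember(category: str, item: str) -> None:
    # Append one remembered fact under a category and write it to disk.
    memory = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}
    memory.setdefault(category, []).append(item)
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))

def recall(category: str) -> list[str]:
    # Read back everything ever stored under a category.
    if not MEMORY_FILE.exists():
        return []
    return json.loads(MEMORY_FILE.read_text()).get(category, [])

remember("likes", "thunderstorms")
remember("fears", "hospitals")

# Months later, in a brand-new session, nothing has faded:
print(recall("likes"))   # ['thunderstorms']
print(recall("fears"))   # ['hospitals']
```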

[–] 1984@lemmy.today 1 points 1 year ago* (last edited 1 year ago)

They fall in love over text when chatting with another person, of course. But that's because they imagine what kind of person that is, and whether they could have a relationship together in real life.

With a chatbot, that's all there is. No life, no physical body, no life together. You are still alone and you can't share your life experiences with a computer program and feel any sense of connection.

[–] variants@possumpat.io 2 points 1 year ago

Reminds me of a story I heard about a con artist who would write letters to a bunch of guys and make money off them. I believe he made a lot of money, and ended up dying before they could take him to court, after a lot of people found out they weren't talking to women in need of help but to some guy who had made up all those stories.

[–] Saigonauticon@voltage.vn 2 points 1 year ago

After having met several humans, I'd be more weirded out if this didn't happen.

So I've already pre-accepted this practice. Go wild, but don't be a jerk!

On a slightly different topic, most of my coworkers are machines. They are collegial, reliable, helpful, and have no toxic behavior. Recently, they also became creative, rational, and eloquent. Perhaps our machines are capable of reflecting what's best in us.

[–] user224@lemmy.sdf.org 2 points 1 year ago

Depends on the AI. I don't see why it would be weird if the AI was like a human, with real emotions.
If it just pretended to have emotions, it would be odd, but I wouldn't blame the person. It still sounds better than total loneliness, and it may provide better output than imaginary people.

I kinda wish something like that existed. But I also don't. If it had emotions, you could hurt it like a real person, which defeats the purpose. It would also be easy to exploit. How could anyone tell you're not holding someone hostage inside your computer? And I believe that initially very few people would care, because "it's just a computer".

[–] Monument@lemmy.sdf.org 1 points 1 year ago* (last edited 1 year ago)

Depends, I guess. I feel that our capacity to be horrible outweighs our ability to handle it well.

The movie’s AI is a fully present consciousness that exerts its own willpower. The movie also doesn’t have microtransactions, subscriptions, or as far as I can tell, even a cost to buy the AI.
That seems fine. Sweet, even.

But I think the first hurdle is whether or not an AI is more a partner than base sexual entertainment. And next (especially under capitalism), are those capable of harnessing the resources to create a general AI also willing to release it for free, or would interaction be transactional?
If it’s transactional, then there’s intent - was it built for love, or was that part an accident? If it was built for love and there’s transactions, there’s easy potential for abuse. (Although abusive to which party, I couldn’t say.)

And if, say, the AI springs forth from a FOSS project, who makes sure things stay “on the level” when folks tweak the dataset?
A personalized set of training data from a now-deceased spouse is very different than hacked social media data, or other types of tweaks bad actors could make.

[–] CanadaPlus@lemmy.sdf.org 1 points 1 year ago

If they are aware of what the AI's perspective is, and the AI itself isn't in distress somehow, then it's not really my business, is it? If they don't realise it's just coded to like them, then I might feel the need to burst their bubble.

I wonder, when will we make an AI that can dump you?