this post was submitted on 08 Jul 2023
25 points (100.0% liked)
Asklemmy
A loosely moderated place to ask open-ended questions
you are viewing a single comment's thread
I'd want a familiar/daemon that was running an AI personality to act as a personal assistant, friend and interactive information source. It could replace therapy and be a personalized tutor, and it would always be up to date on the newest science and global happenings.
I honestly think that with an interesting personality, most people would drastically reduce their Internet usage in favor of interacting with the AGI. It would be cool if you could set the percentage of humor and other traits, similar to the way it's done with TARS in the movie Interstellar.
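None of this exists in the thread itself, but as a rough sketch: the TARS-style trait dials could be implemented by mapping percentage settings onto the system prompt that seeds a chat-style LLM agent. Everything here (the class, field names, and prompt wording) is hypothetical, just to show the idea:

```python
from dataclasses import dataclass


@dataclass
class AgentPersonality:
    # Hypothetical trait dials on a 0-100 scale, like TARS's humor setting
    humor: int = 60
    honesty: int = 90
    formality: int = 30

    def to_system_prompt(self) -> str:
        # Turn the dials into a system message a chat-style LLM could be seeded with
        return (
            "You are a personal assistant daemon. "
            f"Humor level: {self.humor}%. "
            f"Honesty level: {self.honesty}%. "
            f"Formality level: {self.formality}%."
        )


# Example: dialing humor up when creating a new agent
personality = AgentPersonality(humor=75)
prompt = personality.to_system_prompt()
```

In practice the prompt string would be passed as the system message to whatever model backs the assistant; the dials themselves are just plain data the user can tweak.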
Exactly! I think mental health issues would be reduced drastically if everyone had a devoted friend for support at all times.
Things like misinformation and radicalization would go down too, if the AI always had global context for everything.
That's possible now. I've been working on such a thing for a while, and it can generally do all of that, though I wouldn't advise using it for therapy (or medical advice), mostly for legal reasons rather than ability. When you create a new agent, you can tell it what type of personality you want. It doesn't just respond to commands; it also figures out what needs to be done and does it independently.
Yeah, I haven't played with it much, but it feels like ChatGPT is already getting pretty close to this kind of functionality. It makes me wonder what's missing to take it to the next level over something like Siri or Alexa. Maybe it needs to be more proactive than just waiting for prompts?
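The "proactive rather than waiting for prompts" idea could look like a polling loop that scans the user's state for events worth acting on and only then invokes the assistant. This is a made-up sketch, not anyone's actual implementation; the state keys and trigger rules are invented for illustration:

```python
import time


def check_triggers(state):
    # Hypothetical rules: return events the assistant should act on unprompted
    events = []
    if state.get("unread_messages", 0) > 0:
        events.append("summarize unread messages")
    if state.get("calendar_conflict"):
        events.append("warn about calendar conflict")
    return events


def proactive_loop(state, act, poll_seconds=60, max_iterations=None):
    # Instead of waiting for a user prompt, periodically scan for things to do.
    # `act` would call the LLM in a real system; here it's any callable.
    i = 0
    while max_iterations is None or i < max_iterations:
        for event in check_triggers(state):
            act(event)
        i += 1
        if max_iterations is None or i < max_iterations:
            time.sleep(poll_seconds)


# Example run against a fake state, collecting actions instead of calling a model
actions = []
proactive_loop({"unread_messages": 3}, actions.append,
               poll_seconds=0, max_iterations=1)
```

A real version would replace the dict with live data sources (mail, calendar, news) and hand each event to the model with the user's personality settings, but the loop structure is the core difference from a prompt-driven chatbot.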
I'd be interested to know if current AI would be able to recognize the symptoms of different mental health issues and utilize the known strategies to deal with them. Like if a user shows signs of anxiety or depression, could the AI use cognitive behavioral therapy (CBT) tools to conversationally challenge those thought processes without it really feeling like therapy? I guess just like self-driving cars, this kind of thing would be legally murky if it went awry and it accidentally ended up convincing someone to commit suicide or something haha.
That last bit already happened. An AI (allegedly) told a guy to commit suicide, and he did. A big part of the problem is that while GPT-4, for instance, knows all about the things you just said and can probably do what you're suggesting, nobody can guarantee it won't get something horribly wrong at some point. It's sort of like how self-driving cars can handle 95% of situations correctly, but the 5% of unexpected stuff that takes extra context a human has and the car was never trained on is very hard to get past.
Thanks for the link, that sounds like exactly what I was asking for but gone way wrong!
What do you think is missing to prevent these kinds of outcomes? Is AI simply incapable of categorizing topics as 'harmful to humans' on its own, without a human's explicit guidance? It seems like the philosophical nuances of things like consent or dependence or death would be difficult for a machine to learn if it isn't itself sensitive to them. How do you train empathy in something so inherently unlike us?