this post was submitted on 16 Sep 2023

Futurology


A Kbin community for discussing and sharing visions of the future, from science fiction to realistic predictions. Explore the possibilities and challenges of emerging technologies, social changes, environmental issues, and more. Join us in imagining the future and shaping it together.

Advances in artificial intelligence have prompted extensive public concern about its capacity to contribute to the spread of misinformation, bias, and cybersecurity breaches—and its potential existential threat to humanity. But, if anything, AI can aid human beings in making decisions aimed at improving social equality, safety, productivity—and mitigate some existential threats.

top 3 comments
[–] insomniac_lemon@kbin.social 1 points 1 year ago* (last edited 1 year ago) (1 children)

I want them to put my brain into a machine just so I do not need to be an aggressor in the upcoming humanity vs. AI conflicts. Maybe I can even be a mediator or something.

Well, that'd probably be 200+ years from now (anyone want to cryo a random brain for testing?), and I can see the real issue just being mistreatment of transhuman individuals (particularly full-conversion cyborgs) because they appear too robotic (or too "connected", or even just "not respectable", similar to long hair or tattoos/piercings/certain aesthetics, etc.).

[–] Pons_Aelius@kbin.social 2 points 1 year ago (1 children)

the upcoming humanity vs. AI conflicts.

SF is not real and it certainly is not prediction.

All the endless "aliens are going to invade" and "AI is going to kill us" stories are a reflection on humanity and our inherent distrust of the outsider, due to our being a tribal species.

The real fear I have is humans believing AI to be intelligent and sentient when it is not, and ceding too much control to these systems.

[–] insomniac_lemon@kbin.social 1 points 1 year ago* (last edited 1 year ago)

The real fear I have is humans believing AI to be intelligent and sentient when it is not, and ceding too much control to these systems.

That is not only a common type (or interpretation) of story conflict; there are multiple news stories where that is already happening, because...

All the endless "aliens are going to invade" and "AI is going to kill us" stories are a reflection on humanity and our inherent distrust of the outsider, due to our being a tribal species.

With AI, the point can be made that it is whatever it has been trained to do, including unintentionally; Microsoft Tay is one example. Although

SF is not real and it certainly is not prediction

To be clear, my comment was not about belief in any prediction; conflict can mean anything. For some context, this is something I watched recently: The Tragedy of Droids in Star Wars (and a slightly shorter video I found while editing this comment).

And the second half of my comment was in a similar vein: basically, that people will also be on the receiving end of robot mistreatment (insert the "'course I do, I'm part Robot" scene with Cyborg (TT, 2016), and yes, the subtext). Just as all living things (and also communities and infrastructure) will ultimately suffer in their own ways as a direct result of the even worse suffering of others, as happens today: short-term profits over anything else.