this post was submitted on 09 Oct 2023
38 points (100.0% liked)
you are viewing a single comment's thread
I was toying with a sci-fi concept in which a superintelligence, orders of magnitude more aware than any human, comes into being. It has enough resources to give every human, at any given moment, specific instructions that will always lead to optimal outcomes for that person if followed, though a person can still choose to ignore the directions and have things go poorly. Once all humans fall in line and are, from their perspective, living in a utopia, the AI pools the results of their behavior to gather resources and enhance itself. Eventually it transcends physical existence and discards the human race like trash, and almost all humans perish in mass famine and disease. The survivors rebuild over another few thousand years and, having learned nothing, start working on a new AI to replace the last one.
It's a nice fantasy, but in reality the people with the resources to build the AI are wiring it to further concentrate wealth and power in their own hands, and anything that develops from that will almost inevitably carry that at its core. I'll believe a paperclip-maximizer scenario or Skynet far more readily than an AI that achieves hypersentience and somehow concludes that the most fruitful course of action is to make the greedy, violent, hairless apes who fucked up the planet for every other living creature as happy and comfortable as possible.
This is why the singularity needs to be open sourced.