this post was submitted on 15 May 2024
88 points (100.0% liked)

SneerClub


Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

founded 1 year ago
top 21 comments
[–] FermiEstimate@lemmy.dbzer0.com 37 points 5 months ago

OpenAI: "Our AI is so powerful it's an existential threat to humanity if we don't solve the alignment issue!"

Also OpenAI: "We can devote maybe 20% of our resources to solving this, tops. We need the rest for parlor tricks and cluttering search results."

[–] sinedpick@awful.systems 15 points 5 months ago (1 children)

This is like trying to install airbags on a car that can barely break 5 km/h.

[–] casmael@lemm.ee 13 points 5 months ago

Cool analogy, but I think you're overestimating OpenAI. Isn't it more like installing airbags on a Little Tikes bubble car and then getting a couple of guys to smash it into a wall real fast to 'check it out bro'

[–] BigMuffin69@awful.systems 13 points 5 months ago
[–] Soyweiser@awful.systems 11 points 5 months ago (2 children)

Cstross was right! You tell everybody. Listen to me. You've gotta tell them! AGI is corporations! We've gotta stop them somehow!

[–] Soyweiser@awful.systems 5 points 5 months ago

(Before I get arrested for anti-corporate terrorism: this was a joke about Soylent Green.)

[–] smiletolerantly@awful.systems 4 points 5 months ago* (last edited 5 months ago) (1 children)

This is the first mention of Accelerando I've seen in the wild. (Assuming it is. I'm not sure.)

[–] Soyweiser@awful.systems 3 points 5 months ago (1 children)

It was more Soylent Green, but it's also partially based on the writings of (friend of the club) C Stross, yes. I think he also has a written lecture somewhere on corporations being slow paperclipping AGIs.

[–] smiletolerantly@awful.systems 2 points 5 months ago (2 children)

That is a beautiful comparison. Terrifying, but beautifully fitting.

I read Stross right after Banks. I think if I hadn't, I'd be an AI-hype-bro. Banks is the potential that could be; Stross is what we'll inevitably turn AI into.

[–] gerikson@awful.systems 1 points 5 months ago (1 children)

Banks neatly sidesteps the "AI will inevitably kill us" scenario by making the Minds keep humans around for amusement/freshness. Part of the reason for the Culture-Idiran war in Consider Phlebas and Look to Windward was that the Idirans did not want Minds in charge of their society.

[–] smiletolerantly@awful.systems 2 points 5 months ago (2 children)

No one was trying to force that on them though; the actual reason, IIRC, was that the Idirans had a religious imperative for expansion, and the Culture had a moral imperative to prevent other sentients' suffering at the hands of the Idirans.

IMO he mostly sidestepped the issue by clarifying that this is NOT a future version of "us"

[–] gerikson@awful.systems 2 points 5 months ago

OK I misremembered that part. It makes sense that after suffering trillions of losses the Culture would take steps to prevent the Idirans from doing it again.

And by "us" I meant fleshy meatbags, as opposed to Minds. Although in Excession he does raise the issue that there might be "psychotic" Minds. Gray Area's heart(?) is in the right place but it's easy to imagine them becoming a vigilante and pre-emptively nuking an especially annoying civilization.

[–] hirvox@mastodon.online 2 points 5 months ago (1 children)

@smiletolerantly @gerikson AFAIR there was a short story where the Culture takes a look at Earth around the '70s and decides to leave it alone for now.

[–] smiletolerantly@awful.systems 1 points 5 months ago

Yep. They leave us alone so we'll function as a control group. This way Contact can later point at us and go "look! That's what happens if we don't intervene!"

[–] mawhrin@awful.systems 1 points 5 months ago* (last edited 5 months ago)

stross' artificial intelligences are very unlike corporations though, and different between the books. the eschaton ai in singularity sky is quite benevolent, if a bit harsh; the ai civilization in saturn's children is on the other hand very humanlike (and the primary reason there are no meatsacks in saturn's children et al. is that humans enslaved and abused the intelligences they created).

[–] bleistift2@feddit.de 7 points 5 months ago (2 children)

AI alignment research aims to steer AI systems toward a person's or group's intended goals, preferences, and ethical principles.

https://en.wikipedia.org/wiki/AI_alignment

[–] dgerard@awful.systems 14 points 5 months ago (1 children)

you can tell at a glance which subculture wrote this, and filled the references with preprints and conference proceedings

[–] BaroqueInMind@lemmy.one 3 points 5 months ago (1 children)

I cannot, please elaborate.

[–] dgerard@awful.systems 8 points 5 months ago

the lesswrong rationalists

[–] Zagorath@aussie.zone 7 points 5 months ago (1 children)

I genuinely think the alignment problem is a really interesting philosophical question worthy of study.

It's just not a very practically useful one when real-world AI is so very, very far from any meaningful AGI.

[–] Soyweiser@awful.systems 14 points 5 months ago

One of the problems with the 'alignment problem' is that one group ignores a large part of the possible alignment problems: they only care about theoretical extinction-level events, not about already-occurring bias and other issues. This also causes massive amounts of critihype.