this post was submitted on 16 Jul 2023
3 points (100.0% liked)

SneerClub

37 readers
11 users here now

Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

founded 1 year ago
MODERATORS
top 10 comments
[–] lobotomy42@awful.systems 5 points 1 year ago

A choice quote from the comments:

I am not as gifted at persuasive writing as (say) Eliezer,

Don't sell yourself short, kid.

[–] blakestacey@awful.systems 3 points 1 year ago (1 children)
[–] self@awful.systems 2 points 1 year ago (1 children)

Mikhail Yagudin ($28,000): Giving copies of Harry Potter and the Methods of Rationality to the winners of EGMO 2019 and IMO 2020

I’m in the wrong line of work. were these fucking gold plated? did they give every recipient 1024 copies as a symbol of the simulated torture they’ve earned? am I going to find a copy of HPMOR in the nightstand at the next cheap hotel I stay at?

[–] gerikson@awful.systems 2 points 1 year ago

JFC what a crappy prize.

EGMO 2019 is apparently the European Girls' Mathematical Olympiad (the 2019 venue was Kyiv, Ukraine)

Despite the name it seems to be an international competition, with participants from the US, KSA, Peru and Mexico: https://www.egmo.org/egmos/egmo8/scoreboard/

If we define "winners" as those with gold medals, it's unclear to me whether all the participants can read HPMOR in English - there are 3 winners from the US, 1 from the UK, and 3 from Latin America.

[–] swlabr@awful.systems 2 points 1 year ago (1 children)

Why did the Alignment community not prepare tools and plans for convincing the wider infosphere about AI safety years in advance?

Did you not read HPMOR, the greatest story ever reluctantly told to reach the wider infosphere about rationalism and, by extension, AI alignment????

Why were there no battle plans in the basement of the pentagon that were written for this exact moment?

It's almost like AGI isn't a credible threat!

Heck, 20+ years is enough time to educate, train, hire and surgically insert an entire generation of people into key positions in the policy arena specifically to accomplish this one goal, like sleeper cell agents. Likely much, much easier than training highly qualified alignment researchers.

At MIRI, we don't do things because they are easy. We don't do things because we are grifters.

Didn't we pretty much always know it was going to come from one or a few giant companies or research labs? Didn't we understand how those systems function in the real world? Capitalist incentives, Moats, Regulatory Capture, Mundane utility, and International Coordination problems are not new.

This is how they look at all other problems in the world, and it's fucking exasperating. Climate change? I would simply implement 'Capitalist Incentives'. Wealth inequality? Have you tried a 'Moat'? Racism? It sounds like a job for 'Regulatory Capture'. Yes, all problems are easily solvable with 200 IQ and buzzwords. All problems except the hardest problem in the world, preventing Skynet from being invented. Ignore all those other problems; someone will 'Mundane Utility' them away. For now, we need your tithe; we're definitely going to use it for 'International Coordination', by which I totally don't mean buying piles of meth and cocaine for our orgies.

Why was it not obvious back then? Why did we not do this? Was this done and I missed it?

We tried nothing and we're all out of ideas!

[–] dgerard@awful.systems 1 point 1 year ago* (last edited 1 year ago)

buying piles of anime for aella's masked naked parties full of querulous discussion

fixed

[–] Evinceo@awful.systems 1 point 1 year ago* (last edited 1 year ago)

They're so enamored with the individualism of a lone genius org coming up with the solution all on its own, and so opposed to any form of collective solution requiring trust (because of their troubled childhoods?), that the only acceptable thing has to be a technical whiz-bang solution. Only now have enough eyeballs seen the problem and realized the obvious: no such whiz-bang solution can exist (if there's even a well-defined problem!)

[–] bitofhope@awful.systems 1 point 1 year ago (1 children)

It makes me feel better that at least they don't feel like they got too much attention and credibility, because I sure do.

[–] Evinceo@awful.systems 0 points 1 year ago (1 children)

No amount of attention or credibility would ever be enough.

[–] dgerard@awful.systems 1 point 1 year ago

or, conversely, warranted