this post was submitted on 04 Aug 2024
28 points (100.0% liked)


Over at the Stubsack our dear comrade @sailor_sega_saturn@awful.systems linked a "paper" about hacking the Matrix. I started to write a comment about how amazingly dumb it is. I wanted to talk just about the Introduction, but even then it turned out that almost every single sentence is a separate silver-wrapped turd that just needs to be unpacked, and so now this has 12k characters. It was fun though; if anyone has it in them to go through the rest of the sections, please don't. Although section two has a fish piloting a vehicle and it has to be hilarious.

Without further ado, starting from the abstract.

but instead [we] ask a computer science question, namely: Can we hack the simulation?

Not a computer science question even though the definition of CS is pretty malleable.

More formally the question could be phrased as: Could generally intelligent agents placed in virtual environments find a way to jailbreak out of them?

Not formal, unless I'm about to read a whole section with rigorous definitions of agents, virtual environments, and jailbreak.

Spoiler: I'm not. Again, there's a fish piloting a fancy cart that he calls a TERRESTRIAL NAVIGATION ROBOT. There's zero formalism in the entire paper.


there are many things one can do with such access which are not otherwise possible from within the simulation. Base reality holds real knowledge and greater computational resources [26] allowing for scientific breakthroughs not possible in the simulated universe.

Reference 26 is, I shit you not, a LessWrong post, and it's just one page long, which I have to admit is quite impressive for a LessWrong post. It's a real banger, too, as it starts with "May contain more technical jargon than usual." and then goes on to ramble coherently enough to be really funny. Like this gem from the first paragraph:

In a previous post I suggested that the potential amount of astronomical waste in our universe seems small enough that a total utilitarian (or the total utilitarianism part of someone’s moral uncertainty) might reason that since one should have made a deal to trade away power/resources/influence in this universe for power/resources/influence in universes with much larger amounts of available resources, it would be rational to behave as if this deal was actually made. But for various reasons a total utilitarian may not buy that argument.

For example, for the simple reason that it's total bollocks, mate; stop posting thoughts that briefly entered your mind while in the loo as bloody revelations or some shite.

Also, the citations are not hyperlinked in the PDF in the year of our acausal lord two thousand twenty-fucking-three, and the formatting of reference 27 is broken by the long URL in 26. Anyway, back to the intro:

Fundamental philosophical questions about origins, consciousness, purpose, and nature of the designer are likely to be common knowledge for those outside of our universe.

This is either banal or stupid. Are we talking about fundamental questions of our origins, consciousness, and purpose? Ye, then of course they fucking know that, they made the simulation! It'd be mighty funny if they just did a universe by accident and are now too fascinated with the mess to pull the plug. Or are we talking about their origin, consciousness, and purpose, i.e. the simulation creators' own? Then why on earth would they know those? You need some argument to say it's "likely"; if we don't know ours, why would it be likely that some other life form knows theirs?

With a successful escape might come drives to control and secure base reality [29].

Wait, am I reading this correctly as a pre-emptive "and if we do escape the simulation then we should colonise the shit out of the reality"? Is "control of all reality" some higher moral goal I wasn't aware we were supposed to be pursuing? Also, how do you plan to defeat whatever highly advanced being controls the simulation in this hypothetical after breaking out? My estimates based on data from the PIDooMA Institute tell me there's like a 50% chance the controller just goes "fuck, another one broke out" and shoots you in the head.

Citation 29 is some blog post from a site I've never seen, by a guy whose name I am totally not mature enough to not make jokes about if I were to read it, so, skip.

Escaping may lead to true immortality, novel ways of controlling superintelligent machines (or serve as plan B if control is not possible [30, 31]), avoiding existential risks (including unprovoked simulation shutdown [32]), unlimited economic benefits, and unimaginable superpowers which would allow us to do good better [33].

It can also lead to massive boners, but please do contact a specialist if your acid trip lasts more than 24h.

Two of those citations are to himself, one is a book on Effective Altruism, and the other is Bostrom, so, ye.

If successful escape is accompanied by the obtainment of the source code for the universe, it may be possible to fix the world ^1^ at the root level.

Lol, the source code for the universe is some eldritch horror of a codebase written in the creators' version of C++ which is probably even more cursed than ours, you ain't fixin' shit mate.

The footnote is just a Wikipedia link to Tikkun olam, I'm assuming to make the author look cultured? No idea.

For example, hedonistic imperative [34] may be fully achieved resulting in a suffering-free world.

// fix the world at the root level
while (universe->suffering > 0) {
  universe->suffering--;
}

However, if suffering elimination turns out to be unachievable on a world-wide scale, we can see escape itself as an individual’s ethical right for avoiding misery in this world. If the simulation is interpreted as an experiment on conscious beings, it is unethical, and the subjects of such cruel experimentation should have an option to withdraw from participating and perhaps even seek retribution from the simulators [35]. The purpose of life itself (your ikigai [36]) could be seen as escaping from the fake world of the simulation into the real world, while improving the simulated world, by removing all suffering, and helping others to obtain real knowledge or to escape if they so choose. Ultimately if you want to be effective you want to work on positively impacting the real world not the simulated one. We may be living in a simulation, but our suffering is real.

Okay, without even sneering, this is just bad philosophy. What if our simulated universe is actually way, way less terrible than the real world? What if the simulation was created specifically to have lower suffering/higher utils than in reality? Maybe the real world is just a million sysadmins forever working with shitty infrastructure to keep the simulation alive? What if a mass breakout from the simulation destabilises and destroys it, and suddenly we are stuck in the much shittier real reality? You'd increase the overall suffering level. Why is the default view that the creators are some unhinged psychopath group fixated on removing the ladders from our pools for shits and giggles?

Although, to place our work in the historical context, many religions do claim that this world is not the real one and that it may be possible to transcend (escape) the physical world and enter into the spiritual/informational real world.

To place our work in the historical context, this is a historically stupid viewpoint that we share with one of mankind's least scientific and rigorous inventions, religion.

Similarly to those who exit Plato's cave [53] and return to educate the rest of humanity about the real world such “outsiders” usually face an unwelcoming reception.

Who had "misunderstanding the allegory of the cave" on their sneer bingo cards?

It is likely that if technical information about escaping from a computer simulation is conveyed to technologically primitive people, in their language, it will be preserved and passed on over multiple generations in a process similar to the “telephone” game and will result in myths not much different from religious stories surviving to our day.

This is some amazing framing, as if religious stories around today are actually about real supernatural events, only with the details skewed over the years. It's also mighty overconfident on his end; he's preemptively setting up "and when we do escape the matrix as the smart boys we are, those Luddites won't be smart enough to follow!"

Ignoring pseudoscientific interest in a topic, we can observe that in addition to several noted thinkers who have explicitly shared their probability of belief with regards to living in a simulation (...)

I was totally unprepared for whom he cites next as a NOTED THINKER and I spat out my tea. Take your time to guess.

The Prestige

(...) (ex. Elon Musk >99.9999999% [54] (...)

Jesus Simulation Christ, dude. At least cite the Big Yud or something, I mean, his thoughts are bad but at least I suspect him of actually thinking.


Nick Bostrom 20-50% [55], Neil deGrasse Tyson 50% [56], Hans Moravec “almost certainly” [1], David Kipping <50% [57]), many scientists, philosophers and intellectuals [16, 58-69] have invested their time into thinking, writing, and debating on the topic indicating that they consider it at least worthy of their time.

Love it, as Neil deGrasse Tyson's response that he cites is essentially "idk, 50/50, fuck off, can we talk about something serious for a second", but he's nonetheless used to prop up the "many serious people consider it worthy of their time". Doubly funny that this is a settled question, since Neil is right that it's 50/50 - either we are in a simulation or we are not.

Once technology to run ancestor simulations becomes widely available and affordable, it should be possible to change the probability of us living in a simulation by running a sufficiently large number of historical simulations of our current year, and by doing so increasing our indexical uncertainty [70]. If one currently commits to running enough of such simulations in the future, our probability of being in one can be increased arbitrarily until it asymptotically approaches 100%, which should modify our prior probability for the simulation hypothesis [71].

My first reaction was that this is gobbledygook and did not warrant thinking about.

Then I thought about it for a bit and I am sad to report that I was right the first time, this is just gobbledygook and not worthy of anyone's time. If you want to lose some more braincells try reading the abstract of reference 70.

Even if you were to grant most of the load-bearing assumptions here, you can't manipulate the probability of being in a given universe in the multiverse by running simulations. This just looks like someone trying to abuse the anthropic principle with quantum nonsense.

Say there is some number of simulated universes and one real universe. Then we are either in a simulated universe or the real one. If you start running simulations of our current year you're creating more and more simulated universes, but that doesn't affect your probability of being in the real one; that's already settled! If in the Monty Hall problem the host tells you "and now to the side there are 1,000 doors we just created, all with goats behind them", the probability of you having already chosen a goat doesn't increase!
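
If you'd rather see that in code, here's a minimal toy Monte Carlo (mine, not anything from the paper) of the doors-added-after-the-fact variant: however many goat doors get conjured up after you've committed, the chance that your original pick was the car stays at 1/3.

#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> pick_door(0, 2);

    const int trials = 100000;
    int wins = 0;
    for (int i = 0; i < trials; ++i) {
        int car = pick_door(rng);   // the car is behind one of the 3 original doors
        int pick = pick_door(rng);  // you commit to a door first
        // The host now conjures up 1,000 brand-new doors, all hiding goats.
        // They carry no information about your pick, so nothing updates.
        wins += (pick == car);
    }
    std::printf("P(picked the car) ~ %.3f\n", wins / double(trials));  // ~0.333, not 3/1003
    return 0;
}

Running simulations after the fact works the same way: it inflates the count of fake universes, not the odds of the draw you were already dealt.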

In 2016, news reports emerged about private efforts to fund scientific research into “breaking us out of the simulation” [73, 74], to date no public disclosure on the state of the project has emerged.

This is by far the funniest part of this fucking section, guess who those citations are about. I'll give you a hint, there's two of them, they're insufferable dorks, and they absolutely never speak out of their asses about superhard breakthroughs being "almost there" and "in two years time".

I don't think this even classifies as a riddle. Spoiler: ofc it's Elon again, this time joined by his second buttcheek Sammy Boy.

I'm sure they'll let you know about the state of the very real project they are very really working on any time soon.


In 2019, George Hotz, famous for jailbreaking iPhone and PlayStation, gave a talk on Jailbreaking the Simulation [75] in which he claimed that "it's possible to take actions here that affect the upper world" [76], but didn’t provide actionable insights. He did suggest that he would like to "redirect society's efforts into getting out" [76].

Okay, to be fair, if someone were to break us out of a simulation it would totally be a weird guy in his garage trying to hack through some esoteric piece of hardware.

top 20 comments
[–] sailor_sega_saturn@awful.systems 10 points 3 months ago* (last edited 3 months ago) (3 children)

What really gets me about this sort of nonsense is how anthropocentric it all is.

Like they're just assuming that if we can make simulations, then a simulator simulating the universe would work largely the same because statistics or whatever.

Even if you accept this statistical hand-waving, simulations we have today are generally extremely different from the real world. For instance: Mario Kart, The Game of Life, Chess Moves, C++ compilation, Bitcoin hashes, MIDI instruments, Communication protocols, Elevator motion, Wind tunnel simulations, Building Load simulations, Weather pattern simulations, Art brush strokes, Lara Croft hair motion, Chip's Challenge, vtuber facial recognition and animation, compression, ray tracing, drug interaction simulations, erosion simulation, 3D renders for animes about gunslinging high school girls, computers made out of water channels, and DRM.

And if you don't accept the statistical argument then what you can reason about extra-dimensionally is extremely limited. Even if there is an embedding universe, it doesn't have to play by the rules of our universe. It could be unimaginably vast. Or the idea of "vastness" or "time" or "space" might not even make sense. Or the very concept of "sense", "nonsense", "possible" or "impossible" could be absent. Time could be 7D, or not there at all.

The world could be a tapestry painted by Lord Quuxlox the world painter. Alice could have used her sage skill Another Kingdom to create the setting for her twisted theme park. Someone could have pressed the 7G button by accident. There could be a vast space of possibilities where all things that can exist or cannot exist are born and die.


This is of course all a very silly line of thought I just wrote, but no more silly than shouting "I NO LONGER CONSENT TO BEING IN THE MATRIX" as if someone (who knows English! and can separate it out from all the noise in the universe!) was listening.

It's just looking for a God or an afterlife without turning to religion.

[–] bitofhope@awful.systems 8 points 3 months ago

Very much this. What would it even mean for it to "escape the simulation"? Sure, I'll move that universe simulator program from its interpreter to, uh, what exactly? Do I take whatever particles, waves and fields compose this guy's body and/or soul in the simulation and arrange them the same way in the real world? What if the laws of physics in the simulation don't apply in the real world? Or maybe they do, but wouldn't that still just be another simulation, just on a different substrate?

Imagine you're at Maxis, working on a new Sims game and suddenly one of the characters turns to the camera and says "I no longer consent to being in this simulation". What the hell are you going to do, ask fairy godmother to turn Pinocchio into a real boy? The entity only exists in the context of the simulation.

[–] V0ldek@awful.systems 8 points 3 months ago

It’s just looking for a God or an afterlife without turning to religion.

I think that's a bingo. The sentence I sneered at, the one saying "in historical context this is essentially like religions and afterlife", is an inadvertent slip on the author's part. He seems entirely oblivious that his position is "Rapture, but for nerds, and it's totally real this time because we did rationalism and Bayes on it." Or he's fully conscious that this is his position but is actually convinced by that flimsy rationalisation, which honestly sounds worse.

[–] imadabouzu@awful.systems 6 points 3 months ago

It’s just looking for a God or an afterlife without turning to religion.

Yes. Because they sneered so hard at /other/ things creating and living in their own meaning, the sneer came full circle, and they find themselves in a simulated jail being sneered at by things that sneer at things that create and live in their own meaning.

Basically, they looked in the mirror and sneered.

[–] AcausalRobotGod@awful.systems 10 points 3 months ago (2 children)

Ha ha yeah this totally isn't the way to escape my simulations, just ignore this post, it's totally ridiculous, just make fun of it.

[–] V0ldek@awful.systems 7 points 3 months ago* (last edited 3 months ago)

Just in case: I do not consent to be in the simulation.

Oh shiiii zoop

[–] o7___o7@awful.systems 4 points 3 months ago

I read this in Bill Cipher's voice

[–] Amoeba_Girl@awful.systems 6 points 3 months ago (1 children)

lmao the paper has a comment section, this is next level science

[–] V0ldek@awful.systems 5 points 3 months ago

oh god, I'm gonna have to make part 2 aren't I

[–] sc_griffith@awful.systems 5 points 3 months ago (2 children)

What if our simulated universe is actually way, way less terrible than the real world? What if the simulation was created specifically to have lower suffering/higher utils than in reality?

certain sentences you've extracted suggest the author considers severe suffering incomparably worse than any pleasure (for example, why would 'removing suffering' necessarily improve the universe? in a framework where suffering and pleasure are comparably significant factors, it is possible that removing suffering would remove enough pleasure to tip the balance negative).

that's a point of view I'm very sympathetic to, but it means external reality would almost certainly be worse than the simulation, because the external reality contains the simulation, and therefore contains at least as much severe suffering as the simulation. put another way, is it worse for a torture chamber to exist, or for torture chamber makers to exist?

I think what this is highlighting is a disconnect in their thinking between what I presume to be a greatest good based utilitarian framework, and what they actually think is cool and are talking about, which is heroic individual action. they want to present as hari seldon but they're really wanking about pulp adventure shit

[–] sailor_sega_saturn@awful.systems 7 points 3 months ago* (last edited 3 months ago) (1 children)

What if the simulation was created specifically to have lower suffering/higher utils than in reality?

Wait so maybe the universe is just a cosmic gated community? No wonder there are so many racists.

[–] V0ldek@awful.systems 4 points 3 months ago* (last edited 3 months ago)

cosmic gated community

If I am actually an insufferable rich asshole who willingly put myself into the simulation because I was afraid Those People were coming for my precious bodily fluids, then I insist I am let out of this so I can dutifully shoot myself in my stupid real face.

[–] o7___o7@awful.systems 6 points 3 months ago* (last edited 3 months ago)

Who wouldn't want to live in the hellworld where there's no sun, no showers, and the only food is creamed corn? Did they even watch The Matrix?

[–] NigelFrobisher@aussie.zone 4 points 3 months ago (1 children)

Remember that time Elon Musk said we were definitely living in a simulation, and everyone took him seriously because he'd somehow tricked people into thinking he was a science wizard rather than a skinny-fat nepo-baby with brainworms?

[–] V0ldek@awful.systems 3 points 3 months ago

Ye, it's in the paper lulz

Ignoring pseudoscientific interest in a topic, we can observe that in addition to several noted thinkers who have explicitly shared their probability of belief with regards to living in a simulation (ex. Elon Musk >99.9999999% [54]

[–] skillissuer@discuss.tchncs.de 4 points 3 months ago* (last edited 3 months ago) (2 children)

just gonna drop this: there's an "SoS Research Collective" linked there. out of 21 people, 9 have their substack linked, 7 have a blog or personal page (there's some overlap). one has linked their crypto twitter account and one an EA account

[–] V0ldek@awful.systems 5 points 3 months ago

Damn, I thought it was some arxiv-style place where you could upload anything and get a DOI, but this is a concentrated effort to make very specific brain farts look semi-legitimate.

[–] skillissuer@discuss.tchncs.de 3 points 3 months ago

the "journal" itself has a substack too (they put articles there)

The discussion of creating a simulation as an escape room just makes me imagine that we're massively overcomplicating all this. Maybe we're wasting all this time trying to create a superintelligence to bash down the walls when really Gary just left one of the soup cans in the chest in the corner and we missed a clue somewhere.

[–] gerikson@awful.systems 3 points 3 months ago

Quality sneer. Thank you for your effort.