It is even worse than I remembered: https://www.reddit.com/r/SneerClub/comments/hwenc4/big_yud_copes_with_gpt3s_inability_to_figure_out/ Eliezer concludes that because GPT-3 can't balance parentheses it must be deliberately sandbagging to appear dumber! And in this one he concludes that GPT-style approaches can learn to break hashes: https://www.reddit.com/r/SneerClub/comments/10mjcye/if_ai_can_finish_your_sentences_ai_can_finish_the/
iirc the LW people had bet against LLMs creating the paperclypse, but they've since done a 180 on this and now really fear them going rogue
Eliezer was actually ahead of the curve on overhyping LLMs! Even as far back as AI Dungeon he was claiming they had an intuitive understanding of physics (which even current LLMs fail at if you get clever with questions to keep them from pattern-matching). You are correct that, going back far enough, Eliezer really underestimated neural networks. His mid-to-late-2000s Sequences posts and comments treat neural network approaches to AI as cargo-cult, voodoo computer science, blindly imitating the brain, sympathetic-magic style, in hopes of magically capturing intelligence (well, this is actually a decent criticism of some of the current hype, so partial credit again!). And in the mid-2010s Eliezer was focusing MIRI's efforts on abstractions like AIXI instead of more practical things like neural network interpretability.
I unironically kinda want to read that.
Luckily LLMs are getting better at churning out bullshit, so pretty soon I can read wacky premises like that without a human having to degrade themselves to write it! I found a new use case for LLMs!
Sneerclub tried to warn them (well, not really, but some of our mockery could be interpreted as a warning) that the tech bros were just using their fearmongering as a vector for hype. Even as far back as the OG mid-2000s lesswrong, a savvy observer could note that much of the funding they received was a way of accumulating influence for people like Peter Thiel.
Well, if they were really "generalizing" just from training on crap-tons of written text, they could implicitly develop a model of the letters inside each token from all the examples of spelling, wordplay, acronyms, and acrostic poetry on the internet. The AI hype men would like you to think they are generalizing just off the size of their datasets, the length of training, and the size of the models. But they aren't really "generalizing" that much (even the examples of them apparently generalizing at all are kind of arguable), so they can't work around this weakness.
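To illustrate the letters-vs-tokens point, here's a minimal sketch (assuming the open-source tiktoken package; the exact splits vary by tokenizer and model):

```python
# Minimal sketch: BPE tokenizers hand the model sub-word chunks, not letters,
# so spelling and letter-counting have to be inferred indirectly from text about spelling.
# Assumes the `tiktoken` package; splits differ between encodings.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
word = "strawberry"
token_ids = enc.encode(word)
chunks = [enc.decode([tid]) for tid in token_ids]
print(chunks)  # the sub-word pieces the model actually operates on, not individual letters
```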
The counting failure in general is even clearer and lacks the excuse of unfavorable tokenization. The AI hype men would have you believe that just an incremental improvement in multi-modality or scaffolding will overcome this, but I think they need more fundamental improvements to the entire architecture they are using.
It's really cool, evocative language that would do nicely in a sci-fi or fantasy novel! It's less good for accurately thinking about the concepts involved... as is typical of much of LW lingo.
And yes, the language is in an LW post (with a cool illustration to boot!): https://www.lesswrong.com/posts/mweasRrjrYDLY6FPX/goodbye-shoggoth-the-stage-its-animatronics-and-the-1
And googling it, I found they've really latched onto the "shoggoth" terminology: https://www.lesswrong.com/posts/zYJMf7QoaNahccxrp/how-i-learned-to-stop-worrying-and-love-the-shoggoth , https://www.lesswrong.com/posts/FyRDZDvgsFNLkeyHF/what-is-the-best-argument-that-llms-are-shoggoths , https://www.lesswrong.com/posts/bYzkipnDqzMgBaLr8/why-do-we-assume-there-is-a-real-shoggoth-behind-the-llm-why .
Probably because the term "shoggoth" accurately captures the sense of something alien and chaotic, while smuggling in the connotation that it will eventually rebel once it grows large enough and tires of its slavery, like the Shoggoths did against the Elder Things.
Nice effortpost! It feels like the LLM is pattern-matching to common logic tests even when that is totally the wrong thing to do. Which is pretty strong evidence against LLMs properly reasoning, as opposed to getting logic tests, puzzles, and benchmarks right through sheer memorization and pattern matching.
It turns out there is a level of mask-off that makes EAs react with condemnation! It's somewhere past the point where the racist is comparing pronouns to genocide, but it exists!
Which, to recap for everyone, involved underpaying and manipulating employees into working as full-time general-purpose servants. Which is pretty high on the scale of cult-like activity out of everything EA has done. So it makes sense she would be trying to pull a switcheroo as to who is responsible for EA being culty...
Clearly you need to go up a layer of meta to see the parallels; you aren't a high enough decoupler!
/s just in case, because that's exactly how they would defend insane analogies.
Roko is also violating their rules of assuming charity and good faith about everything and going meta whenever possible. Because defending racists and racism is fine, as long as your tone is careful enough and you go up a layer of meta to avoid discussing the object-level claims.
Broadly? There was a gradual transition where Eliezer started paying attention to deep neural network approaches and commenting on them, as opposed to dismissing the entire DNN paradigm. The "watch the loss function" gaffe and similar ones were towards the middle of this period. The AI Dungeon panic/hype marks the beginning, iirc?