TerribleMachines


Best not to, for exactly that reason, but I know I wasn't the only one who experienced it by any means!

[–] TerribleMachines@awful.systems 6 points 1 year ago (2 children)

😅 Honestly, I don't know what else to say; the memory haunts me to this day. I think it was the point when I went from "huh, the rats make weirdly dumb mistakes considering they've made posts about exactly these kinds of errors" to "wait, there's something really sinister going on here."

Truer words were never spoken, probably.

CFAR is the mind-killer (because they kill you and replace you with a Yud clone).

[–] TerribleMachines@awful.systems 8 points 1 year ago* (last edited 1 year ago) (2 children)

Only half joking: there was this one fanfic you see...

Mainly, I don't think there was any one inciting incident beyond its creation: Yud was a one-man cult way before LW, and the Sequences actively pushed all the cultish elements required to lose touch with reality. (Fortunately, my dyslexic ass only got as far as the earlier bits he mostly stole from other people, rather than the really crazy stuff.)

There was definitely a step-change around the time CFAR was created; it was basically a recruitment mechanism for the cult, and part of the reason I got anywhere physically near those rubes myself. An organisation made to help people be more rational seemed like a great idea—except it literally became EY/MIRI's personal sockpuppet. They would get people together in these fancy-ass mansions for their workshops and then tell them nothing other than AI research mattered. I think it was 2014/15 when they decided internally that CFAR's mission was to create more people like Yudkowsky. I don't think it's a coincidence that most of the really crazy cult stuff I've heard about happened after then.

Not that bad stuff didn't happen before then, either. ^___^

[–] TerribleMachines@awful.systems 7 points 1 year ago (6 children)

Good point with the line! Some of the best liars are good at pretending to themselves they believe something.

I don't think it's widely known, but it is known (there are old SneerClub posts about it somewhere), that he used to feed the people he was dating LSD and try to convince them they "depended" on him.

First time I met him, in a professional setting, he had his (at the time) wife kneeling at his feet wearing a collar.

Do I have hard proof he's a criminal? Probably not, at least not without digging. Do I think he is? Almost certainly.

[–] TerribleMachines@awful.systems 6 points 1 year ago* (last edited 1 year ago)

Yeah, this ~~post~~ (edit: "comment"; the original post does not spark joy) sparked joy for me too. (My personal cult lingo is from the Marie Kondo books, whatcha gonna do.)

One of my takes is that the "AI alignment" garbage is way less of a problem than "Human Alignment", i.e., how to get humans to work together and stop being jerks all the time. Absolutely wild that they can't see that, except perhaps when it comes to trying to get other humans to give them money for the AIpocalypse.

Preach. As someone inside academia, the bullcrap is real. I very rarely read a paper that hasn't got a major stats issue—an academic paper is only worth something if you understand it well enough to know how wrong it is, or if there's plenty of replication/related work building on it, ideally both. (And mine is a technical field with an objective measure of truth, but don't let my colleagues in the humanities hear me say that; it's not that their work is worthless, it's just that it's not reliable.)

It's true, I'm terrible for it myself 😅

[–] TerribleMachines@awful.systems 5 points 1 year ago (8 children)

My perspective is a little different (from having met him): I think he genuinely believed a lot of what he said, at one point at least... but you're pretty much spot on in all the ways that matter. He's a really bad person, of the should-probably-be-in-jail-for-crimes kind.

[–] TerribleMachines@awful.systems 7 points 1 year ago (2 children)

As you were being pedantic, allow me to be pedantic in return.

Admittedly, you might know something I don't, but I would describe Andrew Ng as an academic. These kinds of industry partnerships, like the one in that article you referred to, are really, really common in academia. In fact, it's how a lot of our research gets done. We can't do research if we don't have funding, and so a big part of being an academic is persuading companies to work with you.

Sometimes companies really, really want to work with you, and sometimes you've got to provide them with a decent value proposition. This isn't just AI research either; it's very common in statistics, as well as the biological sciences, physics, chemistry... well, you get the idea. It's not quite the same situation in the humanities, but eh, I'm in STEM.

Now, in terms of universities having the hardware: certainly, these days there is no way a university will have even close to the same compute power that a large company like Google has access to. Though "even back in" 2012 (and well before), universities had supercomputers. It was pretty common to have a resident supercomputer that you'd use. For me (my background's originally in physics), back then we had a supercomputer in our department, the only one at the university, and people from other departments would occasionally ask to run stuff on it. A simpler time.

It's less that universities don't have access to that compute power; it's more that they just don't run server farms. So we pay for it from Google or Amazon and so on, like everyone in the corporate world (except, of course, the companies that run those servers themselves, and even they still have to pay costs and lost revenue). Sometimes that's subsidized by working with a big tech company, but it isn't always.

I'm not even going to get into the history of AI/ML algorithms and the role of academic contributions there; I don't claim that industry played no role, but the narrative that all these advancements are corporate just ain't true, compute power or no. We just don't shout as loud or build as many "products."

Yeah, you're absolutely right that MIRI never tried any meaningful computational experiments that I've seen. As far as I can tell, their research record is... well, staring at ceilings and thinking up vacuous problems. I actually once (when I flirted with the cult) went to a seminar that the big Yud himself delivered, and he spent the whole time talking about qualia; when someone asked him if he could describe a research project he was actively working on, he refused, on the basis that it was "too important to share."

"Too important to share"! I've honestly never met an academic who doesn't want to talk about their work. Big Yud is a big let down.

 

The words you are reading have not been produced by Generative AI. They're entirely my own.

The role of Generative AI

The only parts of what you're reading that Generative AI has played a role in are the punctuation and the paragraphs, as well as the headings.

Challenges for an academic

I have to write a lot for my job (I'm an academic), and I've been trying to find a way to make ChatGPT useful for my work. Unfortunately, it's not really been useful at all. It's useless as a way to find references, except for the most common things, which I could just Google anyway. It's really bad within my field and just generates hallucinations about every topic I ask it about.

The limited utility in writing

The generative features are useful for creative applications, like playing Dungeons and Dragons, where accuracy isn't important. But when I'm writing a formal email to my boss or a student, the last thing I want is ChatGPT's pretty awful style, which leads to all sorts of social awkwardness. So I had more or less consigned ChatGPT to a dusty shelf of my digital life.

A glimmer of potential

However, it's a new technology, and I figured there must be something useful about it. Certainly, people have found it useful for summarising articles, and it isn't too bad at that. But for writing, that's not very useful: summarising what you've already written, after you've written it, while marginally helpful, doesn't actually help with the writing part.

The discovery of WhisperAI

However, I was messing around with the mobile application and noticed that it has a speech-to-text feature. It's not well signposted, and this feature isn't available on the web application at all, but it's not actually using your phone's built-in speech-to-text. Instead, it uses OpenAI's own speech-to-text called WhisperAI.

Harnessing the power of WhisperAI

WhisperAI can be broadly thought of as ChatGPT for speech-to-text. It's pretty good: it can cope with people speaking quickly, and it handles long pauses and awkwardness. I've used it to write this article, which isn't exactly short, and it only took me a few minutes.
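If you'd rather script this step than go through the mobile app, here's a minimal sketch using OpenAI's Python SDK. The filename is hypothetical and `whisper-1` is the hosted Whisper model, so treat this as an illustration of the same pipeline, not a description of what the app does internally:

```python
# Minimal sketch: transcribe a dictated recording with OpenAI's hosted Whisper.
# Assumes the `openai` package (v1.x) is installed and OPENAI_API_KEY is set.
# "dictation.m4a" is a hypothetical file name for your recording.
from openai import OpenAI

client = OpenAI()

with open("dictation.m4a", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",  # the hosted Whisper speech-to-text model
        file=audio_file,
    )

# The result is one long run of text with no paragraph breaks yet.
print(transcript.text)
```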

The technique and its limitations

Now, the way you use this technique is pretty straightforward. You say to ChatGPT, "Hey, I'd like you to split the following text into paragraphs and don't change the content." It's really important you say that second part, because otherwise ChatGPT starts hallucinating about what you said, and that can become a bit of a problem. This is also an issue if you try putting in too much at once: I found I can get to about 10 minutes of speech before ChatGPT either cuts off my content or starts hallucinating about what I actually said.
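The paragraph-splitting step can be scripted the same way. A minimal sketch, assuming the same `openai` SDK as above; the prompt wording is the one from this article, while the model choice and the saved-transcript filename are just placeholders:

```python
# Minimal sketch: ask a chat model to paragraph a transcript without
# changing its content. The "don't change the content" clause is the
# important part; without it the model starts paraphrasing (or worse).
from openai import OpenAI

client = OpenAI()

def paragraph_transcript(raw_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumption: any capable chat model will do
        messages=[{
            "role": "user",
            "content": (
                "Hey, I'd like you to split the following text into "
                "paragraphs and don't change the content.\n\n" + raw_text
            ),
        }],
    )
    return response.choices[0].message.content

# Keep each chunk under ~10 minutes of speech (roughly 1,200 words):
# longer inputs tend to get cut off or paraphrased, as noted above.
raw_text = open("transcript.txt").read()  # hypothetical saved Whisper output
print(paragraph_transcript(raw_text))
```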

The efficiency of the method

But that's fine. Speaking for about 10 minutes straight about a topic still gives you around 1,200 words if you speak at 120 words per minute, as is relatively common. And this is much faster than writing by hand. The average typing speed is about 40 words per minute; around 100 words per minute isn't a strict upper limit, but it's where you start getting diminishing returns with practice.

The reality of writing speed

However, I think we all know that when it comes to actual writing, it's just not possible to compose at 100 words per minute. It's much more common to write at speeds more like 20 words per minute. For myself, it's generally 14, or even less if it's a piece of serious technical work.

Unrivaled first draft generation

Admittedly, using ChatGPT as fancy dictation isn't really going to solve the problem of composing very exact sentences. However, as a way to generate a first draft, I think it's completely unrivaled. You can talk through what you want to write, outline the details, say some phrases that can act as placeholders for figures or equations, and there you go.

Revolutionizing the writing process

You have your first draft ready, and that makes it viable to do a draft of a really long report in under an hour, then spend the rest of your time tightening up each of the sections, with the bulk of the words already written for you and the structure already there. Admittedly, your mileage may vary.

A personal advantage

I do a lot of teaching and a lot of talking in my job, and I find talking a lot easier than writing. I'm also neurodivergent, so having a really short format helps, and being able to speak really helps me with my writing.

Seeking feedback

I'm really curious to see what people think of this article. I've endeavored not to edit it at all, so this is just the first draft of how it came out of my mouth, and I really want to know how readable you think it is. Obviously, there might be some inaccuracies or strange words; please feel free to point them out. I'd love to hear if anyone is interested in trying this out for their work. I've only been messing around with this for a week, but honestly, it's been a game changer: I've suddenly started looking to my colleagues like I'm some kind of super-prolific writer, which isn't quite the case. Thanks for reading, and I look forward to hearing your thoughts.

(Edit after dictation/processing: the above is 898 words and took about 8 min 30 s to dictate, or roughly 105 WPM.)

Love this!

Alas, if Yud took an actual physics class, he wouldn't be able to use it as the poorly defined magic system for his OC doughnut-steal IRL Bayesian-superintelligence fanfic.

[–] TerribleMachines@awful.systems 17 points 1 year ago (15 children)

> My worry in 2021 was simply that the TESCREAL bundle of ideologies itself contains all the ingredients needed to “justify,” in the eyes of true believers, extreme measures to “protect” and “preserve” what Bostrom’s colleague, Toby Ord, describes as our “vast and glorious” future among the heavens.

Golly gee, those sure are all the ingredients for white supremacy these folk are playing around with. Good job there are no signs of racism... right, right?!?!

In other news, I find it wild that big Yud has gone on an arc from "I will build an AI to save everyone" to "let's do a domestic terrorism against AI researchers." He should be careful; someone might think this is displaced rage at his own failure to make any kind of intellectual progress while academic AI researchers have passed him by.

(Idk if anyone remembers how salty he was when AlphaGo showed up and crapped all over his "symbolic AI is the only way" mantra, but it's pretty funny to me that the very group of people he used to call incompetent are a "threat" to him now that they're successful. Schoolyard bully stuff and wotnot.)
