this post was submitted on 16 Sep 2024
19 points (100.0% liked)

TechTakes

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many "esoteric" right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

Last week's thread

(Semi-obligatory thanks to @dgerard for starting this)

[–] dgerard@awful.systems 23 points 2 months ago (2 children)

Timnit Gebru on Twitter:

We received feedback from a grant application that included "While your impact metrics & thoughtful approach to addressing systemic issues in AI are impressive, some reviewers noted the inherent risks of navigating this space without alignment with larger corporate players,"

https://xcancel.com/timnitGebru/status/1836492467287507243

[–] swlabr@awful.systems 13 points 2 months ago

navigating this space without alignment with larger corporate players

stares into middle distance, hollow laugh

[–] mirrorwitch@awful.systems 11 points 2 months ago

No need for xcancel, Gebru is on actually social media: https://dair-community.social/@timnitGebru/113160285088058319

[–] hrrrngh@awful.systems 22 points 2 months ago (4 children)

This quote flashbanged me a little

When you describe your symptoms to a doctor, and that doctor needs to form a diagnosis on what disease or ailment that is, that's a next word prediction task. When choosing appropriate treatment options for said ailment, that's also a next word prediction task.

From this thread: https://www.reddit.com/r/gamedev/comments/1fkn0aw/chatgpt_is_still_very_far_away_from_making_a/lnx8k9l/

[–] swlabr@awful.systems 14 points 2 months ago

None of these fucking goblins have learned that analogies aren’t equivalences!!! They break down!!! Auuuuuuugggggaaaaaaarghhhh!!!!!!

[–] Soyweiser@awful.systems 14 points 2 months ago

Instead of improving LLMs, they are working backwards to prove that all other things are actually word prediction tasks. It is so annoying and also quite dumb. No, chemistry isn't like coding/legos. The law isn't invalid because it doesn't have gold fringes and you use magical words.

[–] YourNetworkIsHaunted@awful.systems 12 points 2 months ago

The problem is that there could be any number of possible next words, and the available results suggest that the appropriate context isn't covered in the statistical relationships between prior words for anything but the most trivial of tasks, i.e. automating the writing and parsing of emails that nobody ever wanted to read in the first place.

[–] sailor_sega_saturn@awful.systems 22 points 2 months ago* (last edited 2 months ago) (3 children)

Today in you can't make this stuff up: SpaceX invades Cards Against Humanity's crowdfunded southern border plot of land.

Article (Ars Technica)

Lawsuit with pictures (PDF)

Reddit Comment with CAH's email to backers

The above Ars Technica article also led me to this broader article (Reuters) about SpaceX's operations in Texas. I found these two sentences particularly unpleasant:

County commissioners have sought to rechristen Boca Chica, the coastal village where Johnson remains a rare holdout, with the Musk-endorsed name of Starbase.

At some point, former SpaceX employees and locals told Reuters, Starbase workers took down a Boca Chica sign identifying their village. They said workers also removed a statue of the Virgin of Guadalupe, an icon revered by the predominantly Mexican-American residents who long lived in the area.

Reading all of this also somehow makes Elon Musk's anti-immigrant tweets feel even worse to me than they already were.

[–] self@awful.systems 21 points 2 months ago (7 children)

so mozilla decided to take the piss while begging for $10 donations:

We know $10 USD may not seem like enough to reclaim the internet and take on irresponsible tech companies. But the truth is that as you read this email, hundreds of Mozilla supporters worldwide are making donations. And when each one of us contributes what we can, all those donations add up fast.

With the rise of AI and continued threats to online privacy, the stakes of our movement have never been higher. And supporters like you are the reason why Mozilla is in a strong position to take on these challenges and transform the future of the internet.

the rise of AI you say! wow that sounds awful, it’s so good Mozilla isn’t very recently notorious for pushing that exact thing on their users without their consent alongside other privacy-violating changes. what a responsible tech company!

[–] dgerard@awful.systems 18 points 2 months ago (8 children)

Paul Krugman and Francis Fukuyama and Daniel Dennett and Steve Pinker were in a "human biodiversity discussion group" with Steve Sailer and Ron Unz in 1999, because of course they were

[–] Soyweiser@awful.systems 11 points 2 months ago (1 children)

I look forward to the 'but we often disagreed' non-apologies, with an absolute lack of self-reflection on how this helped push Sailer/Unz into the positions they hold now. If we even get that.

[–] swlabr@awful.systems 14 points 2 months ago

Pinker: looking through my photo album where I’m with people like Krauss and Epstein, shaking my head the whole time so the people on the bus know I disagree with them

[–] mii@awful.systems 17 points 2 months ago (4 children)

Follow up for this post from the other day.

Our DSO has now greenlit the stupid Copilot integration because "Microsoft said it's okay" (of course they did), and he was also at some stupid AI convention yesterday, and whatever fucking happened there, he's become a complete AI bro and is now preaching the Gospel of Altman: everyone who's not using AI will be obsolete in a few years and we need to ADAPT OR DIE. It's the exact same shit the CEO is spewing.

He wants an AI that handles data security breaches by itself. He also now writes emails with ChatGPT even though just a week ago he was hating on people who did that. I sat with my fucking mouth open in that meeting and people asked me whether I'm okay (I'm not).

I need to get another job ASAP or I will go clinically insane.

[–] Soyweiser@awful.systems 11 points 2 months ago* (last edited 2 months ago)

He wants an AI that handles data security breaches by itself. He also now writes emails with ChatGPT

He is the data security breach.

E: Dropped a T. But hey, at least ChatGPT uses SSL to communicate, so the data breach is now constrained to the ChatGPT training data. So it isn't that bad.

[–] self@awful.systems 11 points 2 months ago (2 children)

I’m so sorry. the tech industry is shockingly good at finding people who are susceptible to conversion like your CEO and DSO and subjecting them to intense propaganda that unfortunately tends to work. for someone lower in the company like your DSO, that’s a conference where they’ll be subjected to induction techniques cribbed from cults and MLM schemes. I don’t know what they do to the executives — I imagine it involves a variety of expensive favors, high levels of intoxication, and a variant of the same techniques yud used — but it works instantly and produces someone who can’t be convinced they’ve been fed a lie until it ends up indisputably losing them a ton of money

[–] khalid_salad@awful.systems 17 points 2 months ago (1 children)

Every few years there is some new CS fad that people try to trick me into doing research in:

"algorithms" (my actual area), then quantum, then blockchain, then AI.

Wish this bubble would just fucking pop already.

[–] ibt3321@lemmy.blahaj.zone 12 points 2 months ago

This stuff feels like a DJ is cross-fading between the different hype cycles.

[–] o7___o7@awful.systems 15 points 2 months ago (4 children)

Behind the Bastards is starting a series about Yarvin today. Always appreciate it when they wander into our bailiwick!

[–] ibt3321@lemmy.blahaj.zone 14 points 2 months ago (4 children)

A lemmy-specific coiner today: https://awful.systems/post/2417754

The dilema of charging the users and a solution by integrating blockchain to fediverse

First, there will be a blockchain. There will be these cryptocurrencies:

This guy is speaking like he is in Genesis 1

I guess it would be better that only the instances can own instance-specific coins.

You guess alright? You mean that you have no idea what you're saying.

if a user on lemmy.ee want to post on lemmy.world, then lemmy.ee have to pay 10 lemmy.world coin to lemmy.world

What will this solve? If 2 people respond to each other's comments, the instance with the most valuable coin will win. What does that have to do with who caused the interaction?

[–] sailor_sega_saturn@awful.systems 18 points 2 months ago (1 children)

Yes crypto instances, please all implement this and "disallow" everyone else from interacting with you! I promise we'll be sad and not secretly happy and that you'll make lots of money from people wanting to interact with you.

[–] flizzo@awful.systems 14 points 2 months ago (6 children)

Orange site on pager bombs in Lebanon:

If we try to do what we are best at here at HN, let’s focus the discussion on the technical aspects of it.

It immediately reminded me of Stuxnet, which also from a technical perspective was quite interesting.

[–] skillissuer@discuss.tchncs.de 11 points 2 months ago

the technical aspect seems to be, for now, that israeli secret services intercepted and sabotaged thousands of pagers to be distributed to hezbollah operatives, then blew them up all at once. it does look like a small explosive charge, reportedly less than 20g each, but the orange site accepted truth is that it was haxxorz blowing up lithium batteries. israelis already did exactly this thing but with a phone in a targeted assassination, and the actual volume of such a bomb would be tiny (about 10ml)

[–] gerikson@awful.systems 13 points 2 months ago (6 children)

Despite Soatok explicitly warning users that posting his latest rant[1] to the more popular tech aggregators would lead to loss of karma and/or public ridicule, someone did just that on lobsters and provoked this mask-slippage[2]. (The comment is in three paras, which I will subcomment on below.)

Obligatory note that, speaking as a rationalist-tribe member, to a first approximation nobody in the community is actually interested in the Basilisk and hasn’t been for at least a decade. As far as I can tell, it’s a meme that is exclusively kept alive by our detractors.

This is the Rationalist version of the village worthy complaining that everyone keeps bringing up that one time he fucked a goat.

Also, “this sure looks like a religion to me” can be - and is - argued about any human social activity. I’m quite happy to see rationality in the company of, say, feminism and climate change.

Sure, "religion" is on a sliding scale, but Big Yud-flavored Rationality ticks more of the boxes on the "Religion or not" checklist than feminism or climate change do. In fact, treating the latter as religions is often a way to denigrate them, and is never done in good faith.

Finally, of course, it is very much not just rationalists who believe that AI represents an existential risk. We just got there twenty years early.

Citation very much needed, bub.


[1] https://soatok.blog/2024/09/18/the-continued-trajectory-of-idiocy-in-the-tech-industry/

[2] link and username withheld to protect the guilty. Suffice to say that They Are On My List.

[–] fasterandworse@awful.systems 13 points 2 months ago* (last edited 2 months ago) (2 children)

Just discovered Patrick Boyle's channel. Deadpan sneer perfection https://www.youtube.com/watch?v=3jhTnk3TCtc

edit: tried to post invidious link but didn't seem to work

[–] BlueMonday1984@awful.systems 13 points 2 months ago (8 children)

Pulling out a pretty solid Tweet @ai_shame showed me:

countersneer

To pull out a point I've been hammering since Baldur Bjarnason talked about AI's public image, I fully anticipate tech's reputation cratering once the AI bubble bursts. Precisely how the public will view the tech industry at large in the aftermath I don't know, but I'd put good money on them being broadly hostile to it.

[–] sailor_sega_saturn@awful.systems 13 points 2 months ago (3 children)

Meanwhile, over at the orange site they discuss a browser hack: https://news.ycombinator.com/item?id=41597250 As in a hack that gave the attacker control over any user of this particular browser even if they only ever visited innocent websites, only needing to know their user ID.

This is what’s known in the biz as a company-destroying-level fuck-up. I’m not sure whether this is particularly sneerable or not, but I’m just agog at how a company that calls itself "The Browser Company" can get the basic browser security model so incredibly wrong.

[–] self@awful.systems 12 points 2 months ago* (last edited 2 months ago) (3 children)

from their Wikipedia page I’m starting to get why I’ve never previously heard of The Browser Company’s browser; it’s about a year old, it’s only for macOS, iOS, and Windows, and it’s just a chromium fork with a Swift UI overtop and extremely boring features you can get with plugins on Firefox without risking getting your entire life compromised (til Mozilla decides that’s profitable, I suppose)

Arc is designed to be an "operating system for the web", and integrates standard browsing with Arc's own applications through the use of a sidebar. The browser is designed to be customisable and allows users to cosmetically change how they see specific websites.

oh fuck off. so what makes something an operating system is:

  • the whole UI got condensed down into an awkward-looking sidebar that takes up more space instead of a top bar
  • you can re-style websites (which is the feature that enabled this hack, and which must be one of the most common browser plugins)
  • you can change the browser’s UI color
  • it can run “its own applications”? which sounds like a real security treat if they’re running in the UI context of the browser. though to be honest I don’t see why these wouldn’t just be ordinary web apps, in which case it’s just a PWA feature
[–] gerikson@awful.systems 13 points 2 months ago (1 children)
[–] dgerard@awful.systems 11 points 2 months ago* (last edited 2 months ago)

so according to @liveuamap, the backstory here is that this is to get his name out of the news about the WildBerries shooting in Moscow - where a battle for corporate control came down to gunshots - because he was backing one of the sides

[–] antifuchs@awful.systems 12 points 2 months ago (4 children)

Let’s bring the haunted nuclear reactor back online so copilot can hallucinate a little more https://www.washingtonpost.com/business/2024/09/20/microsoft-three-mile-island-nuclear-constellation/

[–] mii@awful.systems 16 points 2 months ago (1 children)

[…] the tech giant would buy 100 percent of its power for 20 years.

I want them to fucking choke on this deal when the bubble bursts.

[–] antifuchs@awful.systems 11 points 2 months ago

I live like 15mi from there, I would prefer the containment bubble to stay intact. But the tech bubble is welcome to go blow up any moment

[–] sailor_sega_saturn@awful.systems 15 points 2 months ago* (last edited 2 months ago) (1 children)

How the heck have people become so... blasé about climate change?? It is wild to me. If we're restarting nuclear reactors, with everything that entails, it should be with the goal of shutting down gas or coal power. Not to do more unsustainable garbage on top of all the existing unsustainable garbage.

Feels like the world's just given up sometimes, even though it's not quite too late.

[–] gerikson@awful.systems 12 points 2 months ago

"to give you more AI slop we have to restart TMI" is going to do wonders for the public's opinion of Big Tech

[–] dgerard@awful.systems 12 points 2 months ago (8 children)

fuckin. when did Mozilla's twitter feed turn into wall to fucking wall AI spam https://x.com/mozilla

[–] mountainriver@awful.systems 12 points 2 months ago (1 children)

Jason Kint writes a thread on how Google spun a recently lost case - and how publications printed their spin: https://xcancel.com/jason_kint/status/1836781623137681746

If you are already very cynical about tech journalism (or the state of journalism in general), it might be nothing new except confirmation from Google's internal documents. But it's always nice to see how the sausage is made.

[–] sailor_sega_saturn@awful.systems 12 points 2 months ago* (last edited 2 months ago) (4 children)

The robots clearly want us dead -- "Delivery Robot Knocked Over Pedestrian, Company Offered ‘Promo Codes’ to Apologize" (404 media) (archive)

And here rationalists warned that AI misalignment would be hidden from us until the "diamondoid bacteria".

[–] BigMuffin69@awful.systems 11 points 2 months ago (1 children)

I literally just saw a xitter post about how the exploding pagers in Lebanon is actually a microcosm of how a 'smarter' entity (the yahood) can attack a 'dumber' entity, much like how AGI will unleash the diamond bacterium to simultaneously kill all of humanity.

Which again, both entities are humans - they have the same intelligence, you twats. It's the same argument people make all the time w.r.t. Spanish vs. Aztecs, where gunpowder somehow made Cortez and company gigabrains compared to the lowly indigenous people (and totally ignoring the contributions of the real superintelligent entity: the smallpox virus).

[–] sailor_sega_saturn@awful.systems 12 points 2 months ago

OK, new rule: you're only allowed to call someone dumb for not finding explosives in their pagers if, prior to hearing the news, you had regularly checked all the electronics you buy, with no specialized tools, for bombs hidden inside the battery compartment.

[–] sinedpick@awful.systems 12 points 2 months ago* (last edited 2 months ago) (4 children)

I signed up for the Urbit newsletter many moons ago when I was a little internet child. Now, it's a pretty decent source of sneers. This month's contains: "The First Wartime Address with Curtis Yarvin". In classic Moldbug fashion, it's Two Hours and Forty Fucking Five minutes long. I'm not going to watch the whole thing, but I'll try to mine the transcript for sneers.

26:23 --

Simplicity in them you know it runs on a virtual machine who specification Nock [which] fits on a T-shirt and uh you know the goal of the system is to basically take this kind of fundamental mathematical simplicity of Nock and maintain that simplicity all the way to user space so we create something that's simple and easy to use that's not a small amount of of work

Holy fucking shit, does this guy really think building your entire software stack on brainfuck makes even a little bit of sense at all?

30:17 -- a diatribe about how social media can only get worse and how Facebook was better than myspace because its original users were at the top of the social hierarchy. Obviously, this bodes well for urbit because all of you spending 3 hours of your valuable time listening to this wartime address? You're the cream of the crop.

~2:00:00 -- here he addresses concerns about his political leanings, caricaturing the concern as "oh Yarvin wants to make this a monarchy" and responding by saying "nuh uh, urbit is decentralized." Absent from all this is any meaningful analysis of how decentralized systems (such as the internet itself) eventually tend toward centralized systems under certain incentive structures. Completely devoid of substance.

[–] sailor_sega_saturn@awful.systems 12 points 2 months ago (1 children)

I've been slightly unhappy at my job lately as it's been getting less cool and more bureaucratic and stressful over time; so I've been idly browsing job postings. But so many of them are about AI it's kinda discouraging.

Take Microsoft for example, a big company that surely does lots of interesting stuff. They currently have 17 job postings for experienced programmers in California. 12 of them mention AI in the description. That's 70%. And the only cool position asks for a bazillion years of kernel experience (almost tempted to go for that anyway though).

Ugh guess it's maybe not the best time to switch jobs. ~~Really I should just go self employed what could possibly go wrong?~~

[–] gerikson@awful.systems 11 points 2 months ago (5 children)
[–] self@awful.systems 12 points 2 months ago (3 children)

Sometimes you read an article and you think "this article doesn't want me to do X, but all its arguments against X are utterly terrible. If that's the best they could find, X is probably alright."

that thread is an unholy combination of two of my least favorite types of guys: techbros willfully misunderstanding research they disagree with, and homeopaths

[–] self@awful.systems 13 points 2 months ago

this article doesn’t want me to drink a shitload of colloidal silver, but all its arguments against drinking colloidal silver (it doesn’t do anything for your health, it might turn you blue, it tastes like ass) are utterly terrible. If that’s the best they could find, drinking a shitload of colloidal silver is probably alright.
