this post was submitted on 18 Mar 2024

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Feel like you want to sneer about something but you don't quite have a snappy post in you? Go forth and be mid!

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post; there's no quota here and the bar really isn't that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

[–] TinyTimmyTokyo@awful.systems 9 points 8 months ago (4 children)

Anthropic's Claude confidently and incorrectly diagnoses brain cancer based on an MRI.

[–] swlabr@awful.systems 9 points 8 months ago (1 children)

A friend that wants you to have an aggressive brain tumour to make an AI look good is no friend at all

[–] froztbyte@awful.systems 8 points 8 months ago

There are far too many people in this world who learned both wrong things from the “Pray tell, Mr Babbage” anecdote

[–] Jayjader@jlai.lu 8 points 8 months ago

"But look how convincing [it] sounds!"

... how did we get to the point where the AI bros are unironically telling us, as a selling point, that their shiny toy literally gives false yet convincing-sounding medical diagnoses?!?!?!

If I were working on Claude and wanted to hype it up, I would not talk about this experiment online or in public. If I were working on Claude and wanted to be responsible towards "the public", I would use this example as a cautionary warning, not as further hype for the tool.

This feels like the brief period at the beginning of the NFT craze when I wasn't yet comfortable dismissing out of hand anyone excited about them, because surely there was at least some useful application that wasn't for scamming people, and surely this many people couldn't all be so deluded about the same idea.

[–] froztbyte@awful.systems 7 points 8 months ago

“Oh, some patient data. Let me quickly casually scan this into the sv datacorp. What’s that…privacy concerns? Naaaaah I changed the filename”

[–] dgerard@awful.systems 5 points 8 months ago

oh lol just made this a post too