this post was submitted on 31 Jul 2024
144 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

top 11 comments
[–] skillissuer@discuss.tchncs.de 22 points 3 months ago (1 children)

that diffusion of responsibility is a thing that already happened with crypto too

no officer, it's not a ponzi because it's a Distributed Future of Finance™, go pound sand, do you hate progress?

[–] anachronist@midwest.social 13 points 3 months ago* (last edited 3 months ago)

I remember seeing some crypto bro smugly explain that his obviously illegal business model was fine because "It's a DAO, I'm just a community member."

[–] kbal@fedia.io 18 points 3 months ago

Based on your record of shitposting, our AI model predicts that your final wish is that your entire estate be left to ... Marc Andreessen? Is that correct? If so, blink as if in surprise.

[–] AceFuzzLord@lemm.ee 11 points 3 months ago* (last edited 3 months ago)

Can't wait for the profit-above-care tier of hospitals to have their own AI that allows patients in a vegetative state to "freely" tell those same hospitals that they need to remain on whatever system is keeping them alive for as long as possible, making sure their family incurs the maximum amount of debt/bills possible. I'd think most middle-aged or older family members would absolutely believe the AI is actually connected to their brain and telling them it's what they want, since they seem to be a lot more gullible about anything AI-generated being real, if fakebook is to be believed.

[–] CarbonIceDragon@pawb.social 5 points 3 months ago (5 children)

I mean, while this idea is obviously a stupid one, I have seen some suggestion that an AI could be used to help interpret the brain activity of patients who are capable of thought but not communication, and thus help them communicate with doctors, rather than trying to figure out what they might have said from prior history.

[–] dgerard@awful.systems 21 points 3 months ago

"could" is a word meaning "doesn't"

[–] pyrex@awful.systems 13 points 3 months ago* (last edited 3 months ago) (1 children)

I do not recommend using the word "AI" as if it refers to a single thing that encompasses all possible systems incorporating AI techniques. LLM guys don't distinguish between things that could actually be built and "throwing an LLM at the problem" -- you're treating their lack-of-differentiation as valid and feeding them hype.

[–] CarbonIceDragon@pawb.social 2 points 3 months ago (1 children)

I used a term I've seen used before; I'm not familiar enough with the details of the tech to know what more technical term applies to this kind of device but not to other types, and especially not what term would be generally recognized as referring to one. The hype guys are going to hype themselves up regardless in any case, seeing as that type tends to exist in an echo chamber as far as I can see.

[–] dgerard@awful.systems 4 points 3 months ago

maybe with blockchain,

[–] V0ldek@awful.systems 7 points 3 months ago

🦀 THEY DID NEUROIMAGING ON A DEAD SALMON 🦀
