this post was submitted on 04 Sep 2024
131 points (100.0% liked)

TechTakes

42 readers
15 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 1 year ago
[–] swlabr@awful.systems 64 points 2 months ago

LLMs, and everyone who uses them to process information:

[–] hex@programming.dev 42 points 2 months ago

Facts are not a data type for LLMs

I kind of like this because it highlights the way LLMs operate: kind of blind and drunk, just really good at predicting the next word.

[–] swlabr@awful.systems 39 points 2 months ago (1 children)

ATTN: If you're coming into this thread to say, "The output of AI is bad because your prompts suck," I'm just proud that you managed to figure out how to use the internet at all. Good job, you!

[–] froztbyte@awful.systems 15 points 2 months ago

remember remember, eternal september

(not that I much agree with the classist overtones of the original, but fuck me does it come to mind often)

[–] Sibbo@sopuli.xyz 23 points 2 months ago (1 children)

Well, to be fair, AI can do it in seconds. Which beats humans.

But whether that's relevant when the results are worthless is another question.

[–] HubertManne@moist.catsweat.com 10 points 2 months ago (1 children)

Yeah it changes the task from note taking or summarizing to proofreading.

[–] YourNetworkIsHaunted@awful.systems 5 points 2 months ago (2 children)

And proofreading is notably more complex and has a worse failure state than just writing your own summary.

[–] kbal@fedia.io 20 points 2 months ago

Made strange choices about what to highlight.

They certainly do. For a while it was common to see AI-generated summaries under links to articles on lemmy, so I got a feel for them. Seems to me you would not need any fancy artificial intelligence to do equally well: Just take random excerpts, or maybe just read every third sentence.
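The "every third sentence" baseline above is easy to make concrete. A minimal sketch (the sentence splitting is deliberately crude, and the function name is my own, not from the thread):

```python
# "Summarize" a text by keeping every third sentence, as kbal suggests.
# Splitting on ". " is a crude stand-in for real sentence segmentation;
# this is an illustration of the naive baseline, not a real summarizer.

def every_third_sentence(text: str) -> str:
    sentences = [s.strip() for s in text.split(". ") if s.strip()]
    return ". ".join(sentences[::3]) + ("." if sentences else "")

text = (
    "The council met on Tuesday. Attendance was low. "
    "Three motions were tabled. Two passed. "
    "The budget vote was deferred. A new meeting was scheduled."
)
print(every_third_sentence(text))
```

The point of the joke holds: this picks "strange things to highlight" in exactly the way the AI-generated summaries did, with no model required.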

[–] dgerard@awful.systems 19 points 2 months ago (3 children)

how the hell did this of all the posts turn into a promptfondler shooting gallery

[–] froztbyte@awful.systems 10 points 2 months ago

1.26K subscribers

[–] dgerard@awful.systems 17 points 2 months ago

i have seen the light from the helpful posters here, made up bullshit alleged summaries of documents are great actually

[–] khalid_salad@awful.systems 11 points 2 months ago

Could it be because a statistical relation isn't the same as a semantic one? No, I must be prompting it wrong. I'll just add "engineer" to my title and then everyone will take me seriously.

[–] RagnarokOnline@programming.dev 11 points 2 months ago (4 children)

I had GPT 3.5 break down 6x 45-minute verbatim interviews into bulleted summaries and it did great. I even asked it to anonymize people’s names and it did that too. I did re-read the summaries to make sure no duplicate info or hallucinations existed and it only needed a couple of corrections.

Beats manually summarizing that info myself.

Maybe their prompt sucks?

[–] froztbyte@awful.systems 32 points 2 months ago

“Are you sure you’re holding it correctly?”

christ, every damn time

[–] dgerard@awful.systems 25 points 2 months ago

I got AcausalRobotGPT to summarise your post and it said "I'm not saying it's always programming.dev, but"

[–] pikesley@mastodon.me.uk 22 points 2 months ago

@RagnarokOnline @dgerard "They failed to say the magic spells correctly"

[–] sxan@midwest.social 8 points 2 months ago

How did you make sure no hallucinations existed without reading the source material; and if you read the source material, what did using an LLM save you?

[–] GBU_28@lemm.ee 9 points 2 months ago (3 children)

Dang everyone here needs to look at a tree or a cat or something. Energy is wack in here

[–] dgerard@awful.systems 24 points 2 months ago (1 children)

I just went outside and appreciated the rendering

[–] GBU_28@lemm.ee 8 points 2 months ago (1 children)

Pretty nice right? I did the trees and cats.

[–] froztbyte@awful.systems 8 points 2 months ago (1 children)

DANGER WILL ROBINSON, godposting detected

[–] AcausalRobotGod@awful.systems 10 points 2 months ago (1 children)
[–] dgerard@awful.systems 8 points 2 months ago

if people don't appreciate the kitties their tamagotchi is in some fucking trouble

[–] V0ldek@awful.systems 9 points 2 months ago

While reading this entire thread I periodically looked at my cat and let out a sigh, and he just looked at me with that knowing gaze

"Ye, you are all dumb, hoomans. Don't think about it. Pet me now."

[–] beefbot@lemmy.blahaj.zone 8 points 2 months ago (1 children)

Is it only me, or is the linked article a bit short on details, reaching a conclusion from 2 examples? This is important & I need to hear more, & I'm generally biased against AI at this point — but the article isn't doing enough to convince me

[–] self@awful.systems 12 points 2 months ago (1 children)

did you click through to any of the inline citations? David’s shorter articles on pivot mostly gather and summarize those, so if you need to read the original research and its conclusions that’s where to go

[–] beefbot@lemmy.blahaj.zone 9 points 2 months ago

Ah, that’s better, yes. Thank you, no sarcasm :) now sleepy brain is more informed

[–] lvxferre@mander.xyz 7 points 2 months ago* (last edited 2 months ago) (4 children)

You could use them to know what the text is about, and if it's worth your reading time. In this situation, it's fine if the AI makes shit up, as you aren't reading its output for the information itself anyway; and the distinction between summary and shortened version becomes moot.

However, here's the catch: if the text is long enough to warrant the question "should I spend my time reading this?", it should contain an introduction for that very purpose. In other words, if the text is well-written you don't need this sort of "Gemini/ChatGPT, tell me what this text is about" in the first place.

EDIT: I'm not addressing documents in this. My bad, I know. [In my defence I'm reading shit on a screen the size of an ant.]

[–] queermunist@lemmy.ml 20 points 2 months ago* (last edited 2 months ago) (13 children)

ChatGPT gives you a bad summary full of hallucinations and, as a result, you choose not to read the text based on that summary.

[–] dgerard@awful.systems 19 points 2 months ago

Both of the use cases here are government documents. I'm baffled at the idea of it being "fine if the AI makes shit up".

[–] V0ldek@awful.systems 6 points 2 months ago

if the text is well-written you don’t need this sort of “Gemini/ChatGPT, tell me what this text is about” in the first place.

And if it's badly written then the LLM will shit itself.

Now let's ask ourselves: how much of the text in the world is "well-written"?

Or even better, you could apply this to Copilot. How much code in the world is good code? The answer is fucking none, mate.
