this post was submitted on 03 Jun 2024
85 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago
[–] FermiEstimate@lemmy.dbzer0.com 37 points 8 months ago (1 children)

lmao, Zoom is cooked. Their CEO has no idea how LLMs work or why they aren't fit for purpose, but he's 100% certain someone else will somehow solve this problem:

So is the AI model hallucination problem down there in the stack, or are you investing in making sure that the rate of hallucinations goes down?

I think solving the AI hallucination problem — I think that’ll be fixed. 

But I guess my question is by who? Is it by you, or is it somewhere down the stack?

It’s someone down the stack. 

Okay.

I think either from the chip level or from the LLM itself.

[–] Soyweiser@awful.systems 13 points 8 months ago (1 children)

I think solving the AI hallucination problem — I think that’ll be fixed.

Wasn't this an unsolvable problem?

[–] Amoeba_Girl@awful.systems 19 points 8 months ago* (last edited 8 months ago) (1 children)

it's unsolvable because it's literally how LLMs work lol.

though to be fair i would indeed love for them to solve the LLMs-outputting-text problem.
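(To make the "literally how LLMs work" point concrete: an LLM generates every token by sampling from a learned probability distribution, so a true answer and a confident fabrication come out of the exact same mechanism. A minimal sketch, using a hypothetical toy distribution rather than real model weights:)

```python
import random

# Toy next-token distribution for "The capital of France is ___".
# A real model assigns probabilities like these over its vocabulary;
# "hallucination" is just the sampler landing on a fluent-but-false option.
next_token_probs = {
    "Paris": 0.6,   # plausible and true
    "Lyon": 0.3,    # plausible but false
    "Narnia": 0.1,  # fluent nonsense
}

def sample_token(probs, rng=random.random):
    """Sample one token by inverse-CDF sampling over the distribution."""
    r = rng()
    cumulative = 0.0
    for token, p in probs.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # fallback for floating-point edge cases

print(sample_token(next_token_probs))
```

Nothing in the sampling loop distinguishes the true continuation from the false ones, which is why "fixing hallucination" can't be bolted on as a separate patch to this mechanism.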

[–] aStonedSanta@lemm.ee 2 points 8 months ago (1 children)

Yeah. We need another program to control the LLM tbh.

[–] zogwarg@awful.systems 5 points 8 months ago

Sed quis custodiet ipsos custodes = But who will control the controllers?

Which, in a beautiful twist of irony, is thought to be an interpolation in the text of Juvenal (in manuscript-speak, an insert added by later scribes).