
cross-posted from: https://lemmy.intai.tech/post/43759

cross-posted from: https://lemmy.world/post/949452

OpenAI's ChatGPT and Sam Altman are in massive trouble. OpenAI is being sued in the US for illegally using content from the internet to train its LLM, or large language model.

[–] argv_minus_one 9 points 1 year ago (1 children)

We can simulate all manner of physics using a computer, but we can't simulate a brain using a computer? I'm having a real hard time believing that. Brains aren't magic.

[–] fiasco@possumpat.io 2 points 1 year ago (1 children)

Computer numerical simulation is a different kind of shell game from AI. The only reason it's done is that most differential equations have no closed-form solution, so instead they're discretized and approximated: Zeno's paradox for the modern world. Since the discretization introduces error, the schemes are then hacked to make the results look right. This is also why they always want more flops, because they believe that, if you just discretize finely enough, you'll eventually reach infinity (or the infinitesimal).

This also should not fill you with hope for general AI.
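
A minimal sketch of the discretization described above, assuming a toy problem (forward Euler on dy/dt = -y; the step counts are arbitrary illustrative choices): the error shrinks as the grid gets finer, which is exactly why finer grids demand more flops.

```python
import math

# Toy ODE: dy/dt = -y with y(0) = 1; the exact solution is y(t) = exp(-t).
def euler(f, y0, t_end, n_steps):
    """Forward Euler: replace the derivative with a finite difference of step h."""
    h = t_end / n_steps
    t, y = 0.0, y0
    for _ in range(n_steps):
        y += h * f(t, y)
        t += h
    return y

f = lambda t, y: -y
exact = math.exp(-1.0)  # exact value of y(1)

# Finer discretization (more, smaller steps) gives a smaller error,
# at the cost of proportionally more arithmetic.
for n in (10, 100, 1000):
    approx = euler(f, 1.0, 1.0, n)
    print(f"{n:>5} steps: error = {abs(approx - exact):.1e}")
```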

[–] argv_minus_one 1 points 1 year ago

The same argument could be made about sound, and yet digital computers have no problem approximating it with enough precision to make it indistinguishable from the original.
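
As a rough illustration of that point, here's a sketch using the standard CD-audio parameters (44.1 kHz sampling, 16-bit quantization; the 440 Hz test tone is just an assumed example): the worst-case error after quantizing and reconstructing the tone is about -96 dB relative to full scale, far below anything audible.

```python
import numpy as np

rate = 44_100   # samples per second (CD-audio rate)
freq = 440.0    # test tone in Hz (an assumed example)
t = np.arange(rate) / rate                 # one second of sample times
signal = np.sin(2 * np.pi * freq * t)

# Quantize to 16-bit integers, as CD audio does, then convert back to floats.
quantized = np.round(signal * 32767).astype(np.int16)
reconstructed = quantized / 32767.0

# Worst-case error is about half a quantization step (~1.5e-5),
# roughly -96 dB relative to full scale.
max_error = np.max(np.abs(signal - reconstructed))
print(f"max quantization error: {max_error:.1e}")
```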

Discretization works fine and is not the problem. The problem is that the “AI” everyone's so hyped up about is nothing more than a language model. It has no understanding of what it's talking about because it has not been taught or allowed to experience anything other than language. Humans use language to express ideas; language-model AIs have no ideas to express because they have no life experience from which to form any ideas.
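
To make "nothing more than a language model" concrete, here's a toy sketch under obvious simplifications (a bigram model over a made-up three-sentence corpus; real LLMs use neural networks trained on enormous corpora, but the principle of predicting the next token purely from statistics of prior text is the same):

```python
import random
from collections import defaultdict

# A made-up miniature "corpus", purely for illustration.
corpus = (
    "the model predicts the next word . "
    "the model has no experience of the world . "
    "the statistics of prior text decide the next word ."
).split()

# Count which word follows which: pure statistics over text, no meaning attached.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length=10):
    """Emit words one at a time by sampling from what followed the last word."""
    out = [start]
    for _ in range(length):
        candidates = following.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("the"))
```

An LLM differs from this in scale and architecture, not in kind: it is still choosing the next token from patterns in prior text rather than from any experience of the world.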

That doesn't mean AGI is impossible. It is likely infeasible on present-day hardware, but that's not the same thing as being impossible.