this post was submitted on 11 Jan 2024
17 points (100.0% liked)
Hacker News
85 readers
1 user here now
This community serves to share top posts on Hacker News with the wider fediverse.
Rules
- Keep it legal
- Keep it civil and SFW
- Keep it safe for members of marginalised groups
founded 1 year ago
you are viewing a single comment's thread
I agree with the text too much to say anything meaningful about it. So let's look at the comments instead...
Both comments reminded me of a blog post that I wrote more than a year ago about GPT-3. It still applies rather well to 2024 LLMs, and it shows what those two tech bros are missing, so I'll copy-paste it here.
### The problem with GPT3.
Consider the following two examples.
Example A.
GPT3 bots trained on the arsehole of the internet (Reddit), chatting among themselves:
The grammar is fine, and yet those messages don’t say jack shit.
Example B.
A human translation made by someone with a not-so-good grasp of the target language.
The grammar is so broken that this excerpt became a meme. And yet you can still retrieve meaning from it:
What’s the difference? Purpose. In (B) we can assign each utterance a purpose, even if the characters are fictional, because the text was written by a human being. We cannot do the same in (A), because current AI-generated text does not model that purpose.
And yes, assigning purpose to your utterances is part of language, not just the parts that tech bros are able to see: syntax, morphology, and spelling.