this post was submitted on 12 Sep 2023
152 points (100.0% liked)

Technology

[–] BotCheese 7 points 1 year ago (2 children)

And we're nowhere near done scaling LLMs

I think we might be. I remember hearing that OpenAI was training on so much literary data that they couldn't find enough left over for testing the model. Though I may be misremembering.

[–] newde@feddit.nl 5 points 1 year ago (1 children)

No, that's definitely the case. However, Microsoft is now working on making LLMs rely more heavily on a set of high-quality sources. For example: encyclopedias will be weighted as more important sources than random Reddit posts.
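The idea of privileging high-quality sources can be pictured as weighted sampling over a training-data mixture. A toy sketch (the source names and weights below are invented for illustration, not Microsoft's actual setup):

```python
import random

# Hypothetical quality weights per data source -- illustrative numbers only.
source_weights = {
    "encyclopedia": 5.0,  # high-quality reference text, sampled more often
    "news": 2.0,
    "reddit": 0.5,        # noisy forum text, down-weighted
}

def sample_source(weights, rng=random):
    """Pick a data source with probability proportional to its weight."""
    sources = list(weights)
    return rng.choices(sources, weights=[weights[s] for s in sources], k=1)[0]

# Draw many samples to see the effective mixture.
rng = random.Random(0)
counts = {s: 0 for s in source_weights}
for _ in range(10_000):
    counts[sample_source(source_weights, rng)] += 1
```

With these weights, encyclopedia text ends up roughly ten times as frequent in the mixture as Reddit text, even though both sources are present.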

[–] HobbitFoot@thelemmy.club 2 points 1 year ago (1 children)

Microsoft is also using LinkedIn to help, getting users to correct articles generated by AI.

[–] Zaktor@sopuli.xyz 2 points 1 year ago

Cunningham's Law may be very helpful in this respect.

"The best way to get the right answer on the internet is not to ask a question; it's to post the wrong answer."
