this post was submitted on 29 Jun 2023
31 points (100.0% liked)

Technology


There is huge excitement about ChatGPT and other large generative language models that produce fluent, human-like text in English and other human languages. But these models have one big drawback: their texts can be factually incorrect (hallucination) and can leave out key information (omission).

In our chapter for The Oxford Handbook of Lying, we look at hallucinations, omissions, and other aspects of “lying” in computer-generated texts. We conclude that these problems are probably inevitable.

[–] snake_case@feddit.uk 13 points 1 year ago (5 children)

If you're looking for a factual response from ChatGPT, you're using it wrong. It's designed to produce text that looks correct; it's not a replacement for Google or for proper research. For more on this, watch the LegalEagle video on the ChatGPT court case: https://youtu.be/oqSYljRYDEM

[–] trachemys@iusearchlinux.fyi 4 points 1 year ago

But it sure knows how to sound like it is authoritative. Very convincing.

[–] bionicjoey@lemmy.ca 2 points 1 year ago

Unfortunately, fucking everyone is treating it like it knows things.

[–] Dr_Cog@mander.xyz 2 points 1 year ago

It's decent for parsing text, provided you are careful about how you construct the prompt.

I am exploring its use in assessing speech-based cognitive assessments, and so far it is pretty accurate.
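Careful prompt construction, as mentioned above, usually means constraining the model to extract specific fields rather than asking open-ended questions. A minimal sketch of that idea is below; the function name, field names, and sample transcript are all illustrative assumptions, not anything from the comment, and the resulting prompt would still need to be sent to a model and its output checked by a human.

```python
def build_parsing_prompt(transcript: str, fields: list[str]) -> str:
    """Build a constrained extraction prompt for a speech transcript.

    Restricting the model to a fixed field list, with an explicit
    'unknown' escape hatch, reduces the room for free-form hallucination.
    """
    field_list = "\n".join(f"- {f}" for f in fields)
    return (
        "Extract ONLY the following fields from the transcript below. "
        "If a field is not mentioned, answer 'unknown'. "
        "Do not add commentary.\n\n"
        f"Fields:\n{field_list}\n\n"
        f"Transcript:\n{transcript}"
    )

# Hypothetical example: a snippet from a speech-based assessment session.
prompt = build_parsing_prompt(
    "I woke up at seven and took my medication after breakfast.",
    ["wake time", "medication taken"],
)
print(prompt)
```

The point is that the prompt, not the model, carries most of the reliability: a narrow, checkable output format makes it easier to spot when the model invents something.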

[–] Karlos_Cantana@sopuli.xyz 1 points 1 year ago

I was trying to use it to fix code, but I could never get it to work right.

[–] Spudger@lemmy.sdf.org 1 points 1 year ago

I'm not using ChatGPT at all.