this post was submitted on 29 Jun 2023
31 points (100.0% liked)


There is huge excitement about ChatGPT and other large generative language models that produce fluent and human-like texts in English and other human languages. But these models have one big drawback: their texts can be factually incorrect (hallucination) and can leave out key information (omission).

In our chapter for The Oxford Handbook of Lying, we look at hallucinations, omissions, and other aspects of “lying” in computer-generated texts. We conclude that these problems are probably inevitable.

[–] Lowbird 2 points 1 year ago (1 children)

People already use "lying" when talking about other confusing inanimate objects/interfaces that don't have motivations (ignoring the motivations of their creators that may seep through). Examples:

"Google maps lied to me and I thought I was two blocks west of where I was."

"The fuel gauge says the tank is empty but that's a lie."

"These chip bags look full but it's all lies, they're just full of air."

It's harder to think of examples for "hallucinate", though people do describe things as "hallucinatory" or "an acid trip" and so on.

I think even in a world where everyone understood that LLMs do not think or understand, and where everyone understood how they generate their results, people would still talk like this.

I understand the frustration, but it also seems as tall an order to me as asking people not to personify their desktop computers, phones, and other inanimate objects, or not to apply pronouns other than 'it' to stuffed animals and dolls. This kind of personification and personifying metaphorical language is one of those things humans are just inclined to do, however inconvenient it is in this case, imo.

[–] furrowsofar 1 points 1 year ago

I am really more worried about, on one hand, the researchers and product people getting ahead of themselves, and on the other, people not understanding the huge limitations these things have at the moment. Essentially, people not being skeptical when they use the tech.