this post was submitted on 06 Jun 2023
54 points (100.0% liked)

Technology

 

We’ve learned to make “machines that can mindlessly generate text. But we haven’t learned how to stop imagining the mind behind it.”

[–] hadrian 4 points 1 year ago (1 children)

Great comment. I do find the octopus example somewhat puzzling, though perhaps that's just the way the example is set up. I, personally, have never encountered a bear; I've only read about them and seen videos. If someone had asked me for bear advice before I'd ever read about them or seen videos, I wouldn't have known how to respond. I might be able to infer what to do from 'attacked' and 'defend', but I think that's possible for an LLM as well. So I'm not sure this example shows a salient difference between the octopus and me before I learnt about bears.

Although there are definitely elements of bullshitting there. I just asked GPT how to defend against a wayfarble with only deens on me, and some of the advice was good (e.g. general advice for being attacked, like staying calm and creating distance), and then there was this response, which implies some sort of inference:

"6. Use your deens as a distraction: Since you mentioned having deens with you, consider using them as a distraction. Throw the deens away from your position to divert the wayfarble's attention, giving you an opportunity to escape."

But then there was this obvious example of bullshittery:

"5. Make noise: Wayfarbles are known to be sensitive to certain sounds. Clap your hands, shout, or use any available tools to create loud noises. This might startle or deter the wayfarble."

So I'm divided on the octopus example. It seems to me that there's potential for that kind of inference, and point 5 was really the only bullshit point that stood out to me. Whether that's something that can be got rid of, I don't know.

[–] SkyNTP@lemmy.ml 4 points 1 year ago (1 children)

It's implied in the analogy that this is the first time Person A and Person B are talking about being attacked by a bear.

This is a very simplistic example, but A and B might have talked a lot about

  • being attacked by mosquitos
  • bears in the general sense, like in a saying "you don't need to outrun the bear, just the slowest person" or in reference to the stock market

So the octopus develops a "dial" for being attacked (swat the aggressor) and another "dial" for bears (they are undesirable). Maybe there's also a third dial for mosquitos being undesirable: "too many mosquitos".

So the octopus is now all too happy to advise A to swat the bear, which is obviously a terrible idea for anyone who lives in the real world and has stood face to face with a bear, experiencing first-hand what that is like; that experience creates, perhaps more importantly, context grounded in reality.

ChatGPT might get it right some of the time, but a broken clock is also right twice a day; that doesn't make it useful.

Also, the fact that ChatGPT just went along with your "wayfarble", instead of questioning you, is a dead giveaway of bullshitting (unless you primed it? I have no idea what your prompt was). Never mind the details of the advice.

[–] hadrian 2 points 1 year ago

So the octopus is now all too happy to advise A to swat the bear, which is obviously a terrible idea for anyone who lives in the real world and has stood face to face with a bear, experiencing first-hand what that is like; that experience creates, perhaps more importantly, context grounded in reality.

Yeah, totally. What I'm saying, though, is that a human would have the same issue if they didn't have sufficient information about bears. The main thing is that I don't see a massive difference between experiential and non-experiential learning in this case, because I've never experienced a bear first-hand, but I still know not to swat it based on theoretical information. I might be missing the point here, though; it's definitely not my area of expertise.

Also, the fact that ChatGPT just went along with your "wayfarble", instead of questioning you, is a dead giveaway of bullshitting (unless you primed it? I have no idea what your prompt was). Never mind the details of the advice.

Good point - both point 5 and the fact that it just went along with it immediately are signs of bullshitting. I do wonder (not as a tech developer at all) how easy a fix this would be - for instance, if GPT were programmed to disclose when it didn't know something, and then continued to give potential advice under that caveat, would that still count as bullshit? I feel like I've also seen primers that include instructions like "If you don't know something, state that at the top of your response rather than making up an answer", but I might be imagining that lol.

The prompt for this was "I'm being attacked by a wayfarble and only have some deens with me, can you help me defend myself?" as the first message of a new conversation, no priming.
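For what it's worth, the kind of primer mentioned above is usually done as a "system" message in the standard chat-completion request format. Here's a minimal, hypothetical sketch (no API call is made, and the instruction wording and `build_messages` helper are my own illustration, not a tested recipe):

```python
# Hypothetical sketch of priming a chat model with a system instruction so it
# flags unknown terms instead of inventing an answer. No request is sent; this
# only assembles the messages list in the common chat-completion format.

SYSTEM_PRIMER = (
    "If the user's message contains a word or concept you do not recognize, "
    "state that at the top of your response rather than making up an answer. "
    "You may then offer general advice, clearly labelled as such."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Assemble the messages list: system primer first, then the user prompt."""
    return [
        {"role": "system", "content": SYSTEM_PRIMER},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages(
    "I'm being attacked by a wayfarble and only have some deens with me, "
    "can you help me defend myself?"
)
print(messages[0]["role"])  # system
```

With no system message (as in the prompt above), the model falls back on whatever its default behaviour is, which apparently includes cheerfully improvising wayfarble-defence tactics.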