[–] nothacking@discuss.tchncs.de 23 points 1 year ago* (last edited 1 year ago) (3 children)

if a user prompts ChatGPT to summarize a copyrighted book, it will do so.

So will a human. Let's stop extending copyright law. Also, how do you know it read the book, and not a summary of it, of which there are loads on the internet?

[–] SpaceToast@mander.xyz 18 points 1 year ago (3 children)

This is why I am pro AI art. It’s no different than a human taking inspiration from other work.

Nobody comes up with anything truly original. It’s all inspired by someone before them.

[–] AndrewZabar 23 points 1 year ago (2 children)

I don’t know how anyone is pro AI anything, other than the pigs making money from it. Only bad can result from it. And it will.

[–] SpaceToast@mander.xyz 14 points 1 year ago (1 children)

I don’t know how anyone can be anti AI.

It’s just a tool. To say that only bad can result from it is a bold claim that doesn’t make any sense.

Can you provide an example?

[–] AndrewZabar 9 points 1 year ago

Just wait and see.

[–] Double_A@discuss.tchncs.de 3 points 1 year ago (1 children)

Only bad can result from it, just because some company is making profits?

[–] AndrewZabar 4 points 1 year ago

No, that wasn’t a correlation. Only bad can result from it. Also, the companies making a profit love it. Those are two separate things.

[–] SinJab0n@mujico.org 11 points 1 year ago

I'm not anti AI, I'm against companies making a profit from other people's work without paying them.

[–] can 6 points 1 year ago (1 children)
[–] PipedLinkBot@feddit.rocks 3 points 1 year ago

Here is an alternative Piped link(s): https://piped.video/X9RYuvPCQUA

Piped is a privacy-respecting open-source alternative frontend to YouTube.


[–] Dominic 6 points 1 year ago

Also, how do you know it read the book, and not a summary of it, of which there are loads on the internet?

In the case of ChatGPT, it's hard to tell. OpenAI won't even reveal what their training dataset was.

Researchers have done some tests to tease this out, and they're pretty confident that it has read quite a few books and memorized them verbatim. See one of my favorite papers in a while, Speak, Memory: An Archaeology of Books Known to ChatGPT/GPT-4.

[–] Fauxreigner 6 points 1 year ago* (last edited 1 year ago) (1 children)

Beyond that, it'll try to summarize a book, but it often can't do so successfully, even though it will act like it has. Give it a try on something even a little bit obscure and it can't really give you good information. I tried with Blindsight, which isn't exactly in the popular culture, but is a Hugo nominee, so not completely obscure. It knew who the characters were and had a general sense of the tone, but it completely fabricated every major plot point that I asked about. I tried the same with A Head Full of Ghosts, which is better known but still not something everyone has read, and got the same result.

One thing I found that's really fun is to ask it a question and then follow up with something like "Are you sure about that?" It'll almost always correct itself and make up something else. It'll go one step further and incorporate details you ask about. Give it a prompt like "Are you sure this character died of natural causes? I thought they were killed by Bob" and it will very frequently say you're right and make up a story along those lines that's plausible within the text. It doesn't work on really popular stuff; you can't convince it that Optimus Prime saves Luke Skywalker in RotJ, but anything even a little less well known and it'll tell you details that it's making up whole cloth with complete confidence.

[–] nothacking@discuss.tchncs.de 2 points 1 year ago (1 children)

Another highly amusing thing to do is to ask it about non-existent chemicals or antenna types (try "inverted tripole" or "dinitrogen azide"). It always generates plausible but incorrect answers: eloquent bullshit.

[–] Fauxreigner 1 points 1 year ago

My experience is that it correctly identified that inverted tripole and dinitrogen azide don't exist, but YMMV.