
In its submission to the Australian government’s review of the regulatory framework around AI, Google said that copyright law should be altered to allow for generative AI systems to scrape the internet.

[–] BlameThePeacock@lemmy.ca 17 points 1 year ago (36 children)

A human is a derivative work of its training data, thus a copyright violation if the training data is copyrighted.

The difference between a human and an AI is getting smaller all the time. The training process is essentially the same at this point: show them a bunch of examples, then have them practice and give them feedback.

If that human is trained to draw using Disney art as examples, and then goes on to create similar-style art for sale, that isn't a copyright infringement. Nor should it be.
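
To make "show them examples, have them practice, and give them feedback" concrete, here's a toy sketch of that loop in Python. The data, learning rate, and the y = 2x relationship are all made up purely for illustration, not taken from any real system:

```python
# Toy "training loop": the learner practices on examples and adjusts from feedback.
# The examples, learning rate, and target relationship (y = 2x) are invented for illustration.

examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x paired with targets y = 2x
weight = 0.0          # the single learned parameter
learning_rate = 0.05

for epoch in range(200):
    for x, target in examples:
        prediction = weight * x               # practice: produce an answer
        error = prediction - target           # feedback: how wrong was it?
        weight -= learning_rate * error * x   # adjust in the direction the feedback suggests

print(f"learned weight: {weight:.3f}")  # ends up close to 2.0
```

Real generative models do this with billions of parameters instead of one, but the example/practice/feedback shape of the loop is the same.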

[–] lostmypasswordanew@feddit.de 8 points 1 year ago (14 children)

Humans and AI are not the same, and an equivalence should never be drawn.

[–] BlameThePeacock@lemmy.ca 2 points 1 year ago (13 children)

Your feelings don't really matter. The fact of the matter is that the goal of AI is literally to replicate the function of a human brain, and the way we're building these systems often mimics the same processes.

[–] Zapp 3 points 1 year ago

That goal of AI is fiction, and there's no solid evidence today that it will ever stop being fiction.

What we have today are stupid learning algorithms that are surprisingly good at mimicking intelligent people.

The most apt comparison today is a particularly clever parrot.
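
To make the parrot comparison concrete, here's a hypothetical bigram "parrot" in Python. The sample sentences are invented and real language models are vastly larger, but the point is the same: it strings words together purely from observed statistics, with no understanding of any of it:

```python
import random
from collections import defaultdict

# A hypothetical "clever parrot": a bigram model that regurgitates word pairs
# it has seen before, with no understanding of what any of them mean.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

# Count which words have been observed following which.
following = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        following[current].append(nxt)

# "Generate" text by parroting statistically plausible continuations.
word = "the"
output = [word]
for _ in range(8):
    if word not in following:
        break
    word = random.choice(following[word])
    output.append(word)

print(" ".join(output))  # e.g. "the cat sat on the rug" -- fluent-sounding, zero comprehension
```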

I'm all for having the discussion about how to handle real AI once we have it, but it's bad faith to apply that discussion to what we have today.

Critically, what we have today will never, ever go on strike, or really make any kind of correct moral decision on its own. We must treat it like dumb automation, because it is dumb automation.
