
Plume:

I mean, it's an issue in general when talking about these big subjects. It's like global warming: we keep talking about it as a future risk, but really, the crisis is already here. Talking about it as a future problem is just a good way to keep ignoring it... :/

CanadaPlus@lemmy.sdf.org:

Ah yes, the old AI alignment vs. AI ethics slapfight.

How about we agree that both are concerning?

lemmyng:

Both are concerning, but as a former academic, to me neither of them is as insidious as the harm that LLMs are already doing to training data. A lot of corpora depend on collecting public online data to construct datasets for research, and the assumption is that this data is largely human-generated. That balance is about to shift, and it's going to cause significant damage to future research. Even if everyone agreed to make a change right now, the well is already poisoned. For linguistics research, we're talking about the equivalent of the burning of the Library of Alexandria.

wet_lettuce:

> Both are concerning, but as a former academic, to me neither of them is as insidious as the harm that LLMs are already doing to training data. […]

It reminds me of the situation with steel: once atomic weapons testing began, newly produced steel was tainted by fallout, so it can't be used for certain radiation-sensitive scientific instruments. You have to salvage steel made before the first atomic bombs. https://en.wikipedia.org/wiki/Low-background_steel

There is going to be "low-background data" in the future.

Is AI-generated content such a large part of the internet already?

Kwakigra:

When I saw how coherent and consistent conservative-style posting was from the GPT-2 model, it made me wonder how many online conversations I'd had over the preceding years were actually with an algorithm spitting out platitudes and clichés exactly the way a real-life conservative would.
