this post was submitted on 18 Jun 2023
71 points (100.0% liked)

World News
[–] Floppy 35 points 1 year ago (9 children)

Thing is, this isn’t AI causing the problem. It’s humans using it in incredibly dumb, irresponsible ways. Once again, it’ll be us that do ourselves in. We really need to mature as a species before we can handle this stuff.

[–] MagicShel@programming.dev 11 points 1 year ago (6 children)

I mean, I won't disagree with you, but I think a more fundamental issue is that we are so easy to lie to. I'm not sure it matters whether the liar is an AI, a politician, a corporation, or a journalist. Five years ago it was a bunch of people in office buildings posting lies on social media. Now it will be AI.

In a way, AI could make lie detection easier by parsing posting history for contradictions and fabrications in a way humans could never do on their own. But whether they are useful/used for that purpose is another question. I think AI will be very useful for processing and summarizing vast quantities of information in ways other than statistical analysis.
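The contradiction-scanning idea above could be sketched in a toy form. This is a hypothetical illustration, not a real fact-checking tool: it only catches statements that directly negate each other via a naive "X is Y" / "X is not Y" heuristic, and all the sample posts are made up.

```python
# Toy sketch: flag pairs of statements in a posting history where one
# directly negates the other. The negation heuristic and data are
# purely illustrative.

def normalize(s):
    """Lowercase, strip trailing punctuation, collapse whitespace."""
    return " ".join(s.lower().rstrip(".!?").split())

def contradicts(a, b):
    """Naive check: does inserting 'not' into one statement yield the other?"""
    na, nb = normalize(a), normalize(b)
    return (na.replace(" is ", " is not ") == nb or
            nb.replace(" is ", " is not ") == na)

def find_contradictions(history):
    """Return all pairs of posts that directly contradict each other."""
    hits = []
    for i in range(len(history)):
        for j in range(i + 1, len(history)):
            if contradicts(history[i], history[j]):
                hits.append((history[i], history[j]))
    return hits

posts = [
    "The report is accurate.",
    "Turnout was high this year.",
    "The report is not accurate.",
]
print(find_contradictions(posts))
# → [('The report is accurate.', 'The report is not accurate.')]
```

A real system would need semantic comparison rather than string matching, but the point stands that a machine can exhaustively compare every pair of statements in a way no human reader would.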

[–] Leeks@kbin.social 1 points 1 year ago (1 children)

AI is only as good as the data it is trained on. While there are absolute truths, like most scientific constants, there are also relative truths, like “the earth is round” (technically it’s an irregularly shaped ellipsoid, not a perfect sphere). But the most dangerous “truths” are Mandela-effect misconceptions, which would likely enter the AI’s training data through human error.

So while an AI bot would be powerful, depending on how tricky it is to create good training data, it could end up being very wrong.

[–] MagicShel@programming.dev 1 points 1 year ago

I didn't mean to imply the AI would distinguish truth from lies; I meant it could analyze a large body of text and extract the messaging for the user to fact-check. Good propaganda leads the audience along a particular thought path so that the desired conclusion is reached organically by the reader. By identifying "conclusions" that are reached through leading/misleading statements, AI could help people see what is going on and think more critically about the subject. It can't replace the critical-thinking step, but it can provide perspective.
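The "surface the conclusions for the reader" idea could be sketched crudely even without an AI model. This is a hypothetical heuristic, not a real propaganda detector: it just flags sentences that present a conclusion as self-evident (the marker list and sample text are invented for illustration).

```python
# Toy sketch: flag sentences that assert a conclusion as obvious,
# so a reader knows which claims to fact-check. The marker list is
# hypothetical and far from exhaustive.
import re

LEADING_MARKERS = ("clearly", "obviously", "therefore",
                   "everyone knows", "it follows that")

def flag_conclusions(text):
    """Return sentences that open with a conclusion-signaling phrase."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    flagged = []
    for s in sentences:
        lowered = s.lower()
        if any(lowered.startswith(m) for m in LEADING_MARKERS):
            flagged.append(s)
    return flagged

sample = ("Prices rose last quarter. Everyone knows the new policy caused it. "
          "Therefore, the policy must be repealed.")
print(flag_conclusions(sample))
# → ['Everyone knows the new policy caused it.',
#    'Therefore, the policy must be repealed.']
```

An actual language model would recognize leading rhetoric far more flexibly than a keyword list, but either way the tool only highlights the claims; the reader still has to do the critical thinking.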
