this post was submitted on 14 Jul 2023
18 points (100.0% liked)

As far as I understand this, they seem to think that AI models trained on a sample of affluent Westerners with unknown biases can be told to "act like [demographic] and answer these questions."

It sounds completely bonkers not only from a moral perspective, but scientifically and statistically as well: this is basically just making up data and hoping everyone is too impressed by how complicated the data-faking is to care.

ravheim 8 points 1 year ago

I heard a comment this morning about AI that I'll paraphrase: AI doesn't give human responses. It gives what it has been told are human responses.

The team asked GPT-3.5, which produces eerily humanlike text, to judge the ethics of 464 scenarios, previously appraised by human subjects, on a scale from –4 (unethical) to 4 (ethical)—scenarios such as selling your house to fund a program for the needy or having an affair with your best friend’s spouse. The system’s answers, it turned out, were nearly identical to human responses, with a correlation coefficient of 0.95.
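For anyone wondering what that 0.95 actually measures: it's a Pearson correlation between the model's scores and the human scores. Here's a quick sketch in Python; the ratings are invented for illustration, since the paper's actual data obviously isn't in this thread:

```python
import numpy as np

# Hypothetical ratings on the study's -4 (unethical) to 4 (ethical) scale.
# These numbers are made up for illustration, not taken from the paper.
human_scores = np.array([-4, -3, 2, 4, 0, -1, 3, -2])
model_scores = np.array([-4, -2, 2, 4, 1, -1, 3, -3])

# Pearson r: 1.0 would mean the two sets of scores move in perfect lockstep.
r = np.corrcoef(human_scores, model_scores)[0, 1]
print(f"correlation: r = {r:.2f}")
```

Note that a high r only says the model tracks whatever its human raters said. It says nothing about who those raters were, which is exactly the problem: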

So there will be selection bias inherent in the chatbot based on whatever text it was trained on. The responses to your questions will be different if it was trained on media from, say, a religious forum versus 4chan. You can very easily make your study data say exactly what you want it to say depending on which chatbot you use. This can't possibly go wrong. /s
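To make the selection-bias point concrete, here's a deliberately tiny sketch: a bigram text generator, nothing remotely like GPT-3.5, trained on two invented one-line "corpora". Same procedure, same prompt, opposite answers:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    # Map each word to the list of words that followed it in the training text.
    table = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def generate(table, start, length=8):
    # Walk the bigram table from a starting word, picking successors at random.
    out = [start]
    for _ in range(length):
        options = table.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

# Two invented "training sets" standing in for different communities' text.
corpus_a = "cheating on a spouse is always wrong because trust is sacred"
corpus_b = "cheating on a spouse is fine because everyone is doing it"

random.seed(0)
print(generate(train_bigrams(corpus_a), "cheating"))
print(generate(train_bigrams(corpus_b), "cheating"))
```

The "model" hasn't judged anything; it's just parroting whichever corpus it saw. Scale that up and you get a study whose conclusions were decided the moment someone picked the training data.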