this post was submitted on 29 Jun 2023
262 points (100.0% liked)
Reddit Migration
458 readers
1 user here now
### About Community

Tracking and helping #redditmigration to Kbin and the Fediverse. Say hello to the decentralized and open future. To see the latest Reddit blackout info, see here: https://reddark.untone.uk/
founded 1 year ago
you are viewing a single comment's thread
Right now, we can already recognize lower-quality bots in conversation. AI-generated "art" is already distinctive enough that almost nobody mistakes it for human work.
Language is a human instinct. Our minds create it; we can use it in all sorts of ways and bend it to our will however we want.
By the time bots become good enough to be indistinguishable online, they'll either be actually worth talking to, or they will simply be another corporate shill.
I was wondering about this myself. If a bot presents a good argument that promotes discussion, is the presence of a bot automatically bad?
I don’t love that the focus right now is on eliminating or silencing the voice of bots, because as you point out, they’re going to be indistinguishable from human voices soon - if they aren’t already. In the education space, we’re already dealing with plagiarism platforms incorrectly claiming real student work was written by ChatGPT. Reading a viewpoint you disagree with and immediately jumping to “bot!” only serves to create echo chambers.
I think it’s better and safer long term to educate people to think critically, assume good intent, know their boundaries online (i.e., don’t argue when you can’t be coherent about it and have to devolve to name-calling, etc.), and focus on the content and argument of the post, not who created it - unless it’s very clear from a look at their profile that they’re arguing in bad faith or astroturfing. A shitty argument won’t hold up to scrutiny, and you don’t run the risk of silencing good conversation from a human with an opposing viewpoint. Common agreement on community rules such as “no hate speech,” or limiting self-promotion/reviews/ads to certain spaces and times, is still the best and safest way to combat this, and from there it’s a matter of mods enforcing the boundaries on content, not on who they think you are.
Because bots don't think. They exist solely to push an agenda on behalf of someone.
If the people involved in the conversation are there because they are intending to have a conversation with people, yes, it's automatically bad. If I want to have a conversation with a chatbot, I can happily and intentionally head over to ChatGPT etc.
Bots are not inherently bad, but I think it's imperative that our interactions with them are transparent and consensual.
Part of the problem is that bots unfairly empower the speech of those with the resources to dominate and dictate the conversation space; even when used in good faith, they disempower everyone else. Even the act of seeing the same ideas over and over can sway whole zeitgeists. Now imagine what bots can do by dictating the bulk of what's even talked about at all.
AI-generated art is actually not as distinct as you think. A lot of the low-quality stuff is, but studies have shown that humans already can't tell the difference between real pictures and AI-generated ones.