this post was submitted on 28 Mar 2025
126 points (100.0% liked)

Technology


An examination of a large number of ChatGPT responses found that the model consistently exhibits values aligned with the libertarian-left segment of the political spectrum. However, newer versions of ChatGPT show a noticeable shift toward the political right. The paper was published in Humanities & Social Sciences Communications.

Large language models (LLMs) are artificial intelligence systems trained to understand and generate human language. They learn from massive datasets that include books, articles, websites, and other text sources. By identifying patterns in these data, LLMs can answer questions, write essays, translate languages, and more. Although they don’t think or understand like humans, they predict the most likely words based on context.
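The "predict the most likely words based on context" idea can be illustrated with a deliberately tiny sketch. This is a toy bigram model over a hypothetical one-sentence corpus, nothing like the neural networks real LLMs use, but the core mechanic is the same: count which word tends to follow which, then pick the most frequent successor.

```python
from collections import Counter, defaultdict

# Toy corpus; real LLMs train on billions of documents, not one sentence.
corpus = "the model predicts the next word and the model learns patterns".split()

# For each word, count every word observed immediately after it.
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`, or None."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

# "model" follows "the" twice ("the model predicts", "the model learns"),
# while "next" follows it only once, so "model" wins.
print(predict_next("the"))  # -> model
```

A model like this has no beliefs about "the" or "model"; it only reproduces the statistics of its training text, which is the same reason a much larger model can reproduce the political leanings of its training data.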

Often, the responses generated by LLMs reflect certain political views. While LLMs do not possess personal political beliefs, their outputs can mirror patterns found in the data they were trained on. Since much of that data originates from the internet, news media, books, and social media, it can contain political biases. As a result, an LLM’s answers may lean liberal or conservative depending on the topic. This doesn’t mean the model “believes” anything—it simply predicts words based on previous patterns. Additionally, the way a question is phrased can influence how politically slanted the answer appears.

[–] ByteOnBikes@slrpnk.net 38 points 5 days ago (13 children)

My root issue with people who shit on AI: by pretending it doesn't exist or refusing to use it, you keep your voice out of the conversation.

The world will use AI, regardless of your personal feelings.

And if you aren't in the room to help shape decisions, don't be surprised when we are fucked.

[–] Butterbee 64 points 5 days ago

I don't think I'm quite the target of this comment, because I don't pretend it doesn't exist or that it isn't something people use or care about. But I do avoid using AI for anything that matters. Like, at all. I also refuse to use ChatGPT or any of the commercial offerings. I do play around with locally hosted stuff because the tech is interesting, but there is no way I would ever put my faith in it. Also, my feelings toward the biggest players in the industry are at best "disgust", so I don't want to use their products.

So my question then is: how exactly can I be in the room to help shape decisions? The only two levers I have to pull are whether or not to send those companies money, and whether to use the free services at all.

What exactly am I supposed to do? The companies will just build whatever models they want and tune them however they think will achieve their own goals, regardless. And presently, their goals do not align with mine.

I won't be surprised when we are fucked. I promise you that. I can see that coming. But I feel utterly powerless to stop the tech fascists.
