s3p5r

joined 1 month ago
[–] s3p5r@lemm.ee 4 points 18 hours ago

That has also always been my gut feel about Carmack, but it still sucks to see the evidence. I wish that gut feeling would stop being so damned accurate, but it gets a lot of practice.

Doom was definitely christofascist fantasy porn. At least in Quake you were defending against invasion by the most literal manifestation of eugenicist Space-Nazis possible. Yes, I am choosing to disregard the inherent US military fetishism, because I don't want to ruin my formative media, which I deep down always knew was problematic.

sigh

Can I at least keep the soundtracks as pleasant and untarnished memories?

[–] s3p5r@lemm.ee 3 points 18 hours ago

Ugh. Thanks, yeah that's good enough for me without even opening xcancel. My search for "Tim Sweeney conservative" only dredged up his land conservation purchases and the "stop being so divisive" / "no politics in art" dogwhistles which had previously made me suspicious, but I had mostly forgotten about. I quit Twitter many years ago so I missed that whole knobslobbering saga and didn't think to include Musk after skimming today's shitty Google "search" results.

Ah fuck, and Carmack too? Goddamn it. Twist the knife a little harder.

Fucking tech bros, always ruining tech.

[–] s3p5r@lemm.ee 4 points 1 day ago (4 children)

owned by a right-wing asshole

Wait, what? Can I get some info or even just the right search terms to force Google to give me useful info? I know he's done the eye-roll-worthy "no politics in my artform" bullshit but if there's more I've missed, I'm keen to know.

[–] s3p5r@lemm.ee 5 points 2 weeks ago

Yeah, that works for me. I'll check out some more of them. Thanks!

[–] s3p5r@lemm.ee 4 points 2 weeks ago (2 children)

Borked link. Possibly unthrottled invidious version

I prefer less pop and bop in my industrial, but I am glad to see anybody else still enjoying this with the word industrial in it.

[–] s3p5r@lemm.ee 17 points 2 weeks ago

I don't toil in the mines of the big FAANG, but this tracks with what I've been seeing in my mine. I also predict it will end with lay-offs and companies collapsing.

Zitron thinks a lot about the biggest companies and how the bubble will ultimately hurt them, which is reasonable. But I think that ironically downplays the scale of the bubble, and in turn, the impacts of it bursting.

The expeditions into OpenAI's financials have been very educational. If I were an investigative reporter, my next move would be to look at the networks created by venture capitalists and at what is happening inside the companies who share the same patrons as OpenAI. I don't say that as someone who works in finance, just as someone who carefully watches organizational politics.

[–] s3p5r@lemm.ee 8 points 2 weeks ago

If only all my snark could elicit such absurd perfection.

[–] s3p5r@lemm.ee 24 points 4 weeks ago (6 children)

So long as you don't care about whether they're the right or relevant answers, you do you, I guess. Did you use AI to read the linked post too?

[–] s3p5r@lemm.ee 7 points 1 month ago (1 children)

References weren't paywalled, so I assume this is the paper in question:

Hofmann, V., Kalluri, P.R., Jurafsky, D. et al. AI generates covertly racist decisions about people based on their dialect. Nature (2024).

Abstract

Hundreds of millions of people now interact with language models, with uses ranging from help with writing^1,2^ to informing hiring decisions^3^. However, these language models are known to perpetuate systematic racial prejudices, making their judgements biased in problematic ways about groups such as African Americans^4,5,6,7^. Although previous research has focused on overt racism in language models, social scientists have argued that racism with a more subtle character has developed over time, particularly in the United States after the civil rights movement^8,9^. It is unknown whether this covert racism manifests in language models. Here, we demonstrate that language models embody covert racism in the form of dialect prejudice, exhibiting raciolinguistic stereotypes about speakers of African American English (AAE) that are more negative than any human stereotypes about African Americans ever experimentally recorded. By contrast, the language models’ overt stereotypes about African Americans are more positive. Dialect prejudice has the potential for harmful consequences: language models are more likely to suggest that speakers of AAE be assigned less-prestigious jobs, be convicted of crimes and be sentenced to death. Finally, we show that current practices of alleviating racial bias in language models, such as human preference alignment, exacerbate the discrepancy between covert and overt stereotypes, by superficially obscuring the racism that language models maintain on a deeper level. Our findings have far-reaching implications for the fair and safe use of language technology.