Microsoft's LinkedIn will update its User Agreement next month with a warning that it may show users generative AI content that's inaccurate or misleading.

[...]

The relevant passage, which takes effect on November 20, 2024, reads:

Generative AI Features: By using the Services, you may interact with features we offer that automate content generation for you. The content that is generated might be inaccurate, incomplete, delayed, misleading or not suitable for your purposes. Please review and edit such content before sharing with others. Like all content you share on our Services, you are responsible for ensuring it complies with our Professional Community Policies, including not sharing misleading information.

In short, LinkedIn will offer features that generate content automatically, but that content may be inaccurate. Users are expected to review and correct any false information before sharing it, because LinkedIn won't be held responsible for the consequences.

[–] 42Firehawk@lemmynsfw.com 2 points 1 month ago

Stronger guardrails can help, sure. But getting new input and building a new model is, to use the old analogy, the equivalent of replacing a failing vending machine outright with a different model from the same company.

The problem is that if you do the same thing with an LLM for hiring or job systems, the failure and bias instead come from the model being bigoted, which, while illegal, is hidden inside a model that has essentially been trained to be a more effective bigot.

You can't hide your race (or anything else) from an LLM that was accidentally trained to recognize which job histories are traditionally Black.
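
To make the proxy-leakage point concrete, here's a minimal sketch with entirely synthetic data (it has nothing to do with LinkedIn's actual systems; the `proxy` feature and all numbers are invented for illustration, using numpy and scikit-learn). Even though the protected attribute is removed from the training features, the model reconstructs the historical bias through a correlated feature:

```python
# Toy demo: dropping the protected attribute doesn't remove the bias
# when a correlated proxy feature (e.g., job history patterns) remains.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hidden protected attribute -- never given to the model.
race = rng.integers(0, 2, n)

# A "neutral" feature that matches the protected attribute 90% of the
# time, standing in for which employers appear on a resume.
proxy = np.where(rng.random(n) < 0.9, race, 1 - race)

# Genuine qualification signal, independent of race.
skill = rng.normal(0.0, 1.0, n)

# Historical hiring decisions were biased against race == 1.
hired = (skill - 1.0 * race + rng.normal(0.0, 0.5, n)) > 0

# Train only on features that deliberately omit race.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical skill but different proxy values get
# very different scores: the model rebuilt the bias from the proxy.
probe = np.array([[0.0, 0.0], [0.0, 1.0]])
print(model.predict_proba(probe)[:, 1])
```

Running this, the candidate with the "wrong" proxy value scores markedly lower despite identical skill, which is the comment's point: deleting the race column doesn't stop the model from learning to discriminate through whatever correlates with it.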