this post was submitted on 13 Oct 2024
111 points (100.0% liked)
Technology
As IBM said in 1979, a computer can never be held accountable, and I would go further and say it should never make any meaningful decision. The algorithm used doesn't really make a difference. The sooner people understand that they are responsible for what they do with computers (like any other tool), the better.
The real question is: what if you commission a piece of work from someone else, and they make you something that runs in a completely automated way? Let's say a vending machine. Are you responsible for what the vending machine does if you use it as it's supposed to be used, or is the owner of the machine responsible?
Why is it different for LLM text generators?
If I commission a vending machine, get one that was made automatically and runs itself, and I set it up and let it operate in my store, then I am responsible if it eats someone's money without giving them their item, gives them the wrong thing, or dispenses dangerous products.
This has already been decided, and it's why you can open them up and fix them, and why every mechanism is controlled.
An LLM making business decisions has no such controls or safety mechanisms.
I wouldn't say that - there's nothing preventing them from building in (stronger) guardrails and retraining the model based on input.
If it turns out the model suggests that someone kill themselves based on very specific input, do you not think they should be held accountable and made to retrain the model to prevent that from happening again? (A rough sketch of what such a guardrail could look like is below.)
From an accountability perspective, there's no difference between a text-generating machine and a soda-dispensing machine.
The owner and builder should be held accountable, which puts a financial incentive on making these tools more reliable and safer. You wouldn't say Tesla isn't accountable when their self-driving kills someone because they didn't test it enough or build in enough safeguards – that would be insane.
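For illustration, a guardrail doesn't have to be anything exotic – at its simplest it's a check on the generated text before it ever reaches the user. This is only a rough sketch with a made-up keyword list and function names; real systems typically use a trained safety classifier rather than string matching:

```python
# Minimal sketch of a post-generation guardrail: the model's output is screened
# before it is shown to the user. The keyword list and function names here are
# hypothetical, not from any particular vendor.

SELF_HARM_PATTERNS = [
    "kill yourself",
    "you should end your life",
]

SAFE_FALLBACK = (
    "I can't help with that. If you're struggling, please reach out to a "
    "crisis line or someone you trust."
)

def guarded_reply(generate, prompt: str) -> str:
    """Call the text generator, but refuse to pass through flagged output."""
    reply = generate(prompt)
    lowered = reply.lower()
    if any(pattern in lowered for pattern in SELF_HARM_PATTERNS):
        # Flagged generations can also be logged and fed back into retraining.
        return SAFE_FALLBACK
    return reply

# Usage (my_llm_call is whatever function wraps the model):
#   guarded_reply(my_llm_call, "some user input")
```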
Stronger guardrails can help, sure. But gathering new input and building a new model is, by the old analogy, the equivalent of replacing a failing vending machine with a different model from the same company.
The problem is that if you do the same thing with an LLM for hiring or job systems, the failure and bias instead come from the model being bigoted, which, while illegal, is hidden inside a model that has basically been trained on how to be a more effective bigot.
You can't hide your race, or anything else, from an LLM that was accidentally trained to know which job histories are traditionally Black.
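To make the proxy problem concrete, here's a toy sketch – entirely made-up features and numbers, not anyone's real hiring model – of how a model can reconstruct a protected attribute from correlated signals even when that column is never given to it:

```python
# Toy illustration of proxy discrimination: the protected attribute is dropped
# from the inputs, but proxy features correlated with it let the model learn
# the historical bias in the labels anyway. All names and numbers are invented.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hidden protected attribute (never shown to the model).
group = rng.integers(0, 2, size=n)

# Proxy features that correlate with the group, e.g. zip code or the kind of
# employers that show up in a résumé.
zip_code_signal = group + rng.normal(0, 0.5, size=n)
employer_signal = group + rng.normal(0, 0.5, size=n)
skill = rng.normal(0, 1, size=n)  # genuinely job-relevant feature

# Historical hiring labels that were themselves biased against group 1.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, size=n)) > 0

X = np.column_stack([zip_code_signal, employer_signal, skill])  # no 'group' column
model = LogisticRegression().fit(X, hired)

# Two equally skilled candidates, differing only in their proxy features:
candidate_a = [[0.0, 0.0, 1.0]]  # proxies typical of group 0
candidate_b = [[1.0, 1.0, 1.0]]  # proxies typical of group 1
print(model.predict_proba(candidate_a)[0][1],
      model.predict_proba(candidate_b)[0][1])
# The second probability should come out noticeably lower: the bias survived
# removing the protected column.
```

Even though the model never sees the group label, the proxy features carry it, so the bias baked into the historical labels shows up in its predictions anyway.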