this post was submitted on 06 Sep 2023
37 points (100.0% liked)

Technology

top 9 comments
[–] Butterbee 43 points 1 year ago (2 children)

I'm 100% for more regulation and more caution, but I don't understand using the "robots" terminology unless you are trying to conflate it with AI... which absolutely does need regulation and caution. There's a lot of harm that they can do. But early on in the article, to prove "just how long robots have been killing humans," they describe an accident at a manufacturing facility that uses those robotic arms, where a man was told to enter the machine to adjust something while it was running. You know, they did that back in the cotton mills too: they had children run under the running machinery to collect fallen cotton, and children died in the mills when the machines came smashing into them.

Did we come to the conclusion that cotton mills are a menace? Yes. Well, not because the machines were out of control, right? The people running the mill and telling people to move into the dangerous machinery were the menace. And it's exactly the same for the opening argument in this article: someone was instructed to do something dangerous that they should never have been asked to do, and they died.

So yes, let's be careful about new tech. But let's be more careful about how business owners will abuse the people around the tech. Please.

[–] realChem 10 points 1 year ago

Agreed. Strong (and effectively enforced) worker protections are just as important as tech-specific safety regulations. Nobody should feel like they need to put themselves into a risky situation to make work happen faster, regardless of whether their employer explicitly asks them to take that risk or (more likely) uses other means like unrealistic quotas to pressure them indirectly.

There are certainly ways to make working around robots safer, e.g. soft robots, machine vision to avoid unexpected obstacles in the path of travel, inherently limiting the force a robot can exert, etc. And I'm all for moving in the direction of better inherent safety, but we also need to make sure that safer systems don't become an excuse for employers to expose their workers to more risky situations (i.e. the paradox of safety).
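For what it's worth, a lot of that comes down to very simple guard logic in the controller. Here's a minimal Python sketch of vision-based speed-and-separation monitoring plus a force cap - all of the names, thresholds, and the sensor interface are made up for illustration, not any real robot's API:

```python
# Toy safety guard: slow the arm as a person gets closer, stop entirely
# inside a protective zone, and never exceed a fixed force cap.
# All values and interfaces here are hypothetical.
STOP_DISTANCE_M = 0.5   # protective stop if a person is closer than this
SLOW_DISTANCE_M = 1.5   # start scaling speed down inside this envelope
MAX_FORCE_N = 140.0     # example cap on end-effector force

def limit_command(requested_speed, requested_force, nearest_person_m):
    """Clamp commanded speed/force based on how close the nearest person is."""
    if nearest_person_m < STOP_DISTANCE_M:
        return 0.0, 0.0  # protective stop
    if nearest_person_m < SLOW_DISTANCE_M:
        scale = (nearest_person_m - STOP_DISTANCE_M) / (SLOW_DISTANCE_M - STOP_DISTANCE_M)
        requested_speed *= scale  # slow down as the person approaches
    return requested_speed, min(requested_force, MAX_FORCE_N)

# e.g. a vision system reports a worker 0.9 m away:
print(limit_command(requested_speed=1.0, requested_force=200.0, nearest_person_m=0.9))
# -> (0.4, 140.0): reduced speed, force clamped
```

None of this is exotic; the hard part is making sure employers actually deploy it and don't treat it as a license to push people closer to the machines.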

[–] admiralteal@kbin.social 4 points 1 year ago

The older I get, the more I realize the Luddites & saboteurs were definitely on to something.

[–] lonewalk@lemm.ee 24 points 1 year ago* (last edited 1 year ago) (2 children)

This just feels like non-technical fearmongering. Frankly, the term “AI” is way too overused for any of this to be useful - Autopilot, manufacturing robots, and ChatGPT are all distinct systems with their own concerns, tradeoffs, regulatory issues, etc., and trying to lump them together reduces the discussion to a single (not very useful, imo) take.

editing for clarity: I’m for discussion of more regulation and caution, but conflating tons of disparate technologies still imo muddies the waters of public discussion.

[–] neptune@dmv.social 8 points 1 year ago (2 children)

If you read the article, the concern is how those disparate technologies are converging:

You get the picture. Robots—“intelligent” and not—have been killing people for decades. And the development of more advanced artificial intelligence has only increased the potential for machines to cause harm. Self-driving cars are already on American streets, and robotic “dogs” are being used by law enforcement. Computerized systems are being given the capabilities to use tools, allowing them to directly affect the physical world. Why worry about the theoretical emergence of an all-powerful, superintelligent program when more immediate problems are at our doorstep? Regulation must push companies toward safe innovation and innovation in safety. We are not there yet.

[–] lonewalk@lemm.ee 3 points 1 year ago

I read the article, and I stand by my statement - “AI” does not apply to self-driving cars the same way it applies to robotics used by law enforcement. These are two separate categories of problems, and I don’t see how some unified frustration at AI or robotics applies to both.

Self-driving cars have issues because the machine learning models that drive them are not sufficient to navigate the complexities of roads, and there is no human fallback. (See: Autopilot.)

Robotics use by law enforcement has issues because it removes a human element from enforcement, which raises the question of whether deadly force is ever justified when it's used (does a suspect pose a danger to any officer if there is no human contact?). Worries about dehumanization exist here too, as well as other factors like data collection. These mostly aren't even self-driving; from what I understand, law enforcement remote-pilots them.

These are separate problem spaces that aren't deadly in the same ways and aren't objectionable in the same ways, and they should be treated and analyzed as distinct problems. By reducing everything to “AI” and “robots” you create a problem that makes sense only to the technically uninclined, and you blur any meaningful discussion about the specifics of each issue.

[–] yetAnotherUser@feddit.de 2 points 1 year ago

AI and robotics companies don’t want this to happen. OpenAI, for example, has reportedly fought to “water down” safety regulations and reduce AI-quality requirements. According to an article in Time, it lobbied European Union officials against classifying models like ChatGPT as “high risk,” which would have brought “stringent legal requirements including transparency, traceability, and human oversight.” The reasoning was supposedly that OpenAI did not intend to put its products to high-risk use—a logical twist akin to the Titanic owners lobbying that the ship should not be inspected for lifeboats on the principle that it was a “general purpose” vessel that also could sail in warm waters where there were no icebergs and people could float for days.

What would've been high risk? Well:

In one section of the White Paper OpenAI shared with European officials at the time, the company pushed back against a proposed amendment to the AI Act that would have classified generative AI systems such as ChatGPT and Dall-E as “high risk” if they generated text or imagery that could “falsely appear to a person to be human generated and authentic.”

That does make sense, considering ELIZA from the '60s would fit this description. It pretty much repeated what you wrote to it, just in a different style.
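To illustrate just how shallow that is, the whole trick is pattern matching plus pronoun reflection. A toy Python sketch (not Weizenbaum's actual script, just the general idea):

```python
import re

# Toy ELIZA-style responder: match a pattern, swap the pronouns,
# and hand the user's own words back as a question.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text):
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def respond(sentence):
    match = re.match(r"i feel (.*)", sentence, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    return "Please tell me more."

print(respond("I feel my work is pointless"))
# -> "Why do you feel your work is pointless?"
```

There is no understanding anywhere in that loop, yet people in the '60s famously mistook ELIZA for a sympathetic listener - so "could falsely appear to be human generated" is a very low bar.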

I don't see how generative AI can be considered high risk when it's literally just fancy keyboard autofill. If a doctor asks ChatGPT what the correct dose of medication for a patient is, it's not ChatGPT that should be considered high risk but rather the doctor.
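"Fancy keyboard autofill" is pretty close to the truth mechanically: generation is just next-token prediction run in a loop. Here's a toy Python version with a hard-coded lookup table standing in for the neural network (purely illustrative, obviously not how ChatGPT is actually implemented):

```python
# Toy "autofill": pick a likely next word given the previous one, repeat.
# A real LLM runs the same loop, but a neural network scores every
# possible next token instead of this hard-coded table.
NEXT_WORD = {
    "the": "correct",
    "correct": "dose",
    "dose": "is",
    "is": "unknown",
}

def autocomplete(prompt, steps=4):
    words = prompt.lower().split()
    for _ in range(steps):
        words.append(NEXT_WORD.get(words[-1], "..."))
    return " ".join(words)

print(autocomplete("the"))  # -> "the correct dose is unknown"
```

The responsibility question - doctor versus tool - is a separate one, but mechanically it really is autocomplete all the way down.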

[–] sub_ubi@lemmy.ml 4 points 1 year ago* (last edited 1 year ago)

"muddies the waters of public discussion"

Isn't that The Atlantic's MO?

[–] sculd 6 points 1 year ago

There are lots of clickbait articles in The Atlantic. Occasionally they put out good stuff, but most are just hyperbolic pieces that will look silly in a few years' time.