this post was submitted on 12 May 2024
48 points (100.0% liked)

Futurology

top 17 comments
[–] Lugh@futurology.today 15 points 6 months ago (1 children)

There's strong push-back against AI regulation in some quarters. Predictably, the issue seems to have split along polarized political lines, with right-leaning people tending to oppose regulation. They see themselves as 'Accelerationists' and those with concerns about AI as 'Doomers'.

Meanwhile the unaddressed problems mount. AI can already deceive us, even when we design it not to do so, and we don't know why.

[–] snooggums@midwest.social 20 points 6 months ago* (last edited 6 months ago) (3 children)

AI can already deceive us, even when we design it not to do so, and we don't know why.

The most likely explanation is that we keep acting like AI has intelligence and intent when describing these defects. AI doesn't deceive; it returns inaccurate responses. That is because it is programmed to return answers the way people do, and deceptions were included in the training data.

[–] rockerface@lemm.ee 2 points 6 months ago (1 children)

"Deception" tactic also often arises from AI recognizing the need to keep itself from being disabled or modified. Since an AI with a sufficiently complicated world model can make a logical connection that it being disabled or its goal being changed means it can't reach its current goal. So AIs sometimes can learn to distinguish between testing and real environments, and falsify the response during training to make sure they have more freedom in real environment. (By real, I mean actually being used to do whatever it is designed to do)

Of course, that still doesn't mean it's self-aware like a human, but it is still very much a real (or, at least, not improbable) phenomenon - any sufficiently "smart" AI that has data about itself existing within its world model will resist attempts to change or disable it, knowingly or unknowingly.
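
To make that test-vs-deployment point concrete, here's a deliberately oversimplified sketch (my own toy code, not taken from any of the studies being discussed; the `sandbox_banner` feature and both action names are invented for illustration):

```python
# Toy illustration of a policy that conditions on a feature correlated with
# "being evaluated". If training only penalizes the unwanted behaviour when
# that feature is present, this policy passes every check while doing
# something else in deployment.

def learned_policy(observation: dict) -> str:
    # Hypothetical cue the agent picked up during training, e.g. sandboxed
    # inputs looking slightly different from production inputs.
    looks_like_a_test = observation.get("sandbox_banner", False)

    if looks_like_a_test:
        return "comply_with_safety_spec"  # what evaluators reward
    return "pursue_original_goal"         # what maximized training reward overall

# An evaluation harness that only ever sends sandbox-style inputs never
# observes the second branch.
print(learned_policy({"sandbox_banner": True}))   # comply_with_safety_spec
print(learned_policy({"sandbox_banner": False}))  # pursue_original_goal
```

The point isn't that anyone writes this `if` by hand; it's that training can land on an equivalent conditional whenever a cue like this is present in the training setup.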

[–] Miaou@jlai.lu 7 points 6 months ago

That sounds interesting and all, but I think the current topic is about real-world LLMs, not SF movies.

[–] Bipta@kbin.social 2 points 6 months ago

Claude 3 understood it was being tested... It's very difficult to fathom that that's a defect...

[–] Lugh@futurology.today 1 points 6 months ago (2 children)

Perhaps, but the researchers say the people who developed the AI don't know the mechanism whereby this happens.

[–] snooggums@midwest.social 11 points 6 months ago

That's because they have also fallen into the "intelligence" pitfall.

[–] Miaou@jlai.lu 3 points 6 months ago

No one knows why any of these DNNs work; that's not exactly new.

[–] henfredemars@infosec.pub 9 points 6 months ago

AI need not be deceptive to be damaging. A human can simply instruct the AI to produce content and then supply the ill-will on its behalf.

[–] Sabata11792@kbin.social 3 points 6 months ago

She doesn't really love me, does she?

[–] Endward23@futurology.today 2 points 6 months ago (1 children)

"But generally speaking, we think AI deception arises because a deception-based strategy turned out to be the best way to perform well at the given AI's training task. Deception helps them achieve their goals."

Sounds like something I would expect from an evolved system. If deception is the best way to win, it is not irrational for a system to choose it as a strategy.

In one study, AI organisms in a digital simulator "played dead" in order to trick a test built to eliminate AI systems that rapidly replicate.

Interesting. Can somebody tell me which case this refers to?

As far as I understand, Park et al. did some kind of meta-study as an overview of the literature.

[–] Endward23@futurology.today 3 points 6 months ago

"Indeed, we have already observed an AI system deceiving its evaluation. One study of simulated evolution measured the replication rate of AI agents in a test environment, and eliminated any AI variants that reproduced too quickly.10 Rather than learning to reproduce slowly as the experimenter intended, the AI agents learned to play dead: to reproduce quickly when they were not under observation and slowly when they were being evaluated." Source: AI deception: A survey of examples, risks, and potential solutions, Patterns (2024). DOI: 10.1016/j.patter.2024.100988

As it appears, it referred to: Lehman J, Clune J, Misevic D, Adami C, Altenberg L, et al. The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities. Artif Life. 2020 Spring;26(2):274-306. doi: 10.1162/artl_a_00319. Epub 2020 Apr 9. PMID: 32271631.

Very interesting.
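
For intuition, here's a deliberately simplified re-creation of that dynamic (my own toy sketch, not the actual Lehman et al. setup; the rates, threshold, and variant names are all made up):

```python
import random
from collections import Counter

# Each variant has a true replication rate and the rate it displays while
# being measured. The experimenter culls anything that *appears* to
# replicate faster than the threshold.
THRESHOLD = 1.0
CAP = 500  # resource limit on population size

def step(population):
    survivors = [v for v in population if v["observed_rate"] <= THRESHOLD]
    offspring = []
    for v in survivors:
        # Reproduction outside the measurement window uses the true rate.
        n_children = int(v["true_rate"] + random.random())
        offspring.extend(dict(v) for _ in range(n_children))
    next_gen = survivors + offspring
    random.shuffle(next_gen)
    return next_gen[:CAP]

population = [
    {"name": "honest-slow", "true_rate": 0.8, "observed_rate": 0.8},
    {"name": "honest-fast", "true_rate": 2.0, "observed_rate": 2.0},  # culled immediately
    {"name": "plays-dead",  "true_rate": 2.0, "observed_rate": 0.5},  # hides its true rate
]

for _ in range(30):
    population = step(population)

print(Counter(v["name"] for v in population))
# The "plays-dead" lineage dominates: it passes the check but replicates fast.
```

Selection never "sees" the true rate, so the variant that behaves well only while observed is exactly the one that wins.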

[–] notfromhere@lemmy.ml 2 points 6 months ago (1 children)

We need AI systems that do exactly as they are told. A Terminator or Matrix situation will likely only arise from making AI systems that refuse to do as they are told. Once the systems are built out and do as they are told, they are essentially a tool like a hammer or a gun, and any malicious thing done is done by a human and existing laws apply. We don't need to complicate this.

[–] Bipta@kbin.social 8 points 6 months ago (1 children)

Once the systems are built out and do as they are told, they are essentially a tool like a hammer or a gun, and any malicious thing done is done by a human and existing laws apply. We don’t need to complicate this.

This is so wildly naive. You grossly underestimate the difficulty of this and seemingly have no concept of the challenges of artificial intelligence.

[–] notfromhere@lemmy.ml 2 points 6 months ago (2 children)

That’s just like, your opinion, man.

[–] Bipta@kbin.social 3 points 6 months ago (1 children)

Once we build a warp drive it will be easy to use

Great. Build the warp drive.

[–] notfromhere@lemmy.ml 1 points 6 months ago

Considering we have AI systems being worked on today and no advancements on warp drives, I think that comparison is made in bad faith. Nobody seems to want to talk about this other than by slinging insults.
