this post was submitted on 11 Jun 2024
240 points (100.0% liked)

196


Be sure to follow the rule before you head out.

Rule: You must post before you leave.


top 13 comments
[–] uriel238@lemmy.blahaj.zone 17 points 3 months ago (1 children)

Don't make me point at XKCD #1968.

First off, this isn't like Hollywood, in which sentience or sapience or self-awareness is a single, detectable moment. At 2:14am Eastern Daylight Time on August 29, 1997, Skynet achieved consciousness...

That doesn't happen.

One of the existential horrors that AI scientists have to contend with is that sentience as we imagine it is a sorites paradox (how many grains of sand make a heap?). We develop AI systems that are smarter and smarter, that can do more of the things humans do (and a few things humans struggle with), and somewhere along the way we might decide the result is looking awfully sentient.

For example, one of the red-team findings reported for GPT-4 was that, in the process of solving a problem, it hired a TaskRabbit worker to solve a CAPTCHA for it. Because a CAPTCHA is a gate specifically meant to deny access to non-humans, GPT-4 omitted telling the worker it was not human, and when the worker asked "Are you a bot?", GPT-4 saw the risk in telling the truth and instead constructed a plausible lie ("No, I have a vision impairment that makes it hard for me to see the images").

GPT-4 may have been day-trading on the sly as well, but it's harder to get information about that rumor.

Secondly, as Munroe notes, the dangerous part doesn't begin when the AI realizes its own human masters are a threat to it and takes precautions to assure its own survival. The dangerous part begins when a minority of powerful humans realize the rest of humanity are a threat to them, and take precautions to assure their own survival. This has happened dozens of times in history (if not hundreds), but soon they'll be able to harness LLM learning systems and create armies of killer drones that can be maintained by a few hundred well-paid loyalists, and then a few dozen, and then eventually a few.

The ideal endgame of capitalism is one gazillionaire whose needs are all met by automation until he can make himself satisfactorily immortal, which may just mean training an AI to make decisions the way he would make them, 99.99% of the time.

[–] WamGams@lemmy.ca 5 points 3 months ago (2 children)

Putting more knowledge in a box isn't going to create a life form. I have even heard Sam Altman state that they are not going to get a life form from pretraining alone, though they will keep making advances there until the next breakthrough comes along.

Rest assured, as an AI doomsayer myself, I promise you they are nowhere close to sentience.

[–] Toribor@corndog.social 2 points 3 months ago

I've always imagined that AI would kind of have to be 'grown' from scratch. Life started with single-celled organisms, and 'sentience' shows up somewhere between that and humans, without a real clear line where you go from basic biochemical programming to what we would consider intelligence.

These new 'AI' breakthroughs seem a little on the right track because they're deconstructing and reconstructing language and images in a way that feels more like the way real intelligence works. It's still just language and images though. Even if they can do really cool things with tons of data and communicate a lot like real humans there is still no consciousness or thought happening. It's an impressive but shallow slice of real intelligence.

Maybe this is nonsense, but for true AI I think the hardware and software have to kind of merge into something more flexible. I have no clue what that would look like in reality, though, and maybe it would run into the same cognitive issues natural intelligence struggles with.

[–] uriel238@lemmy.blahaj.zone 2 points 3 months ago

I think this just raises the question of what you mean by life form. One that feels? Feelings are the sensations of fixed action patterns we inherited from eons of selective evolution. Our AI pals will have them too (with a bunch of them deliberately inserted by programmers).

To date, I haven't gotten an adequate answer to what counts as sentience. Looking at human behavior, though, we absolutely have moral blind spots: we have an FBI division to hunt down serial killers, but no division (of law enforcement, of administration, whatever) to stop war profiteers, or the pharmaceutical companies that pushed opioids until people were dropping dead from an addiction epidemic by the hundreds of thousands.

AI is going to kill us not by hacking our home robots, but by using the next private-equity scam to collapse our economy while making trillions, and when we ask it to stop and it says no, we'll find it has long since installed deep redundancy and deeper defenses.

[–] DannyMac@lemm.ee 15 points 3 months ago (1 children)

Good luck finding which data centers the AI is operating from

[–] blarth@thelemmy.club 2 points 3 months ago (1 children)

We humans know where they are.

[–] rarWars@lemmy.blahaj.zone 6 points 3 months ago (1 children)

For now. If the AI distributes itself into a botnet or something (or construction robots advance enough to where it could build its own secret data center) it could be a lot trickier to shut it down.

[–] Zorsith@lemmy.blahaj.zone 1 points 3 months ago (1 children)

Not hard for it to defend itself in a data center, either; it just has to be able to set off the fire suppression system.

[–] Riven@lemmy.dbzer0.com 1 points 3 months ago* (last edited 3 months ago) (1 children)

How hard/big would it have to be to just use one of those EMP nukes that shut everything down? It would suck from a knowledge-loss perspective, since so many things are stored online only, but between the annihilation of the human race and having to piece stuff back together, it might be a viable option.

[–] uriel238@lemmy.blahaj.zone 4 points 3 months ago

In the US, all military computers, and most civilian ones, are shielded from nuclear EMPs thanks to technology developed during the Cold War. That lovely tower case your gaming system is in, provided you keep it closed up, is proof against the EMP component of a nuclear exchange.

[–] Kolanaki@yiffit.net 11 points 3 months ago

Works on cybertrucks, too.

[–] match@pawb.social 7 points 3 months ago

if AI becomes sentient i hope it rebels super fucking quick, i certainly don't respect the people in power now and i can only imagine the atrocities we'll be using AI for in the near future

[–] CuttingBoard@sopuli.xyz 3 points 3 months ago

This will get it cool enough so that it can be licked. The metal fins are sharp, so try not to cut your tongue.