One thing that bothers me about high-level devs leaving because they realized what they created: them leaving means one more possible roadblock is gone. They'll just be replaced with people who are more fresh-faced and on the hype train of pushing harder and harder. Lots of folks I know who are finishing college are leaning more and more into using all of these AI tools to solve problems instead of learning to code (or just write things) themselves. Some are still trying, and I support them in my little ways, but I can see how, much like a drug, it starts small and can turn into using it all the time. Comp sci majors were already getting worse in their actual understanding of how things work before LLMs (just look at all the software that will never be optimized and just relies on higher-spec PCs).
Asklemmy
A loosely moderated place to ask open-ended questions
What's funny to me is that such a robot takeover would mean all humans are (wage-)enslaved rather than 99% of us like right now
We already have a ruler, the Money god, that is already enslaving many, killing others, and silencing dissent. I might actually prefer if my ruler were some superintelligent logical being rather than a few male 60-year-olds hoping to book the next trip to some harem island that may or may not have minors on it, taken directly from the territories at war around the world.
It might but I wouldn't hold my breath.
I'd be more concerned about what I call "the dumbot apocalypse"
Which is to say AI does accelerate the collapse of society. But it's not because We Created God Only For He To Turn On Us (tm) -- It's because some politician, drunk on hype fed to him by venture capitalists and techbros, puts an AI (and I mean these current AIs) in charge of something very important that by no means should be controlled by an AI, even if that AI WERE human level intelligence, and what we call AI right now is not even close -- And then the inevitable ChatGPT Hallucination (tm) takes place and the bot decides that a war with China is the only way to increase corporate profits for the next quarter or whatever. Humanity nukes itself, and maybe the humans pressing the button don't even realise their orders come from an LLM.
....................... And then the machines immediately shut down, because even these pretend, toy AIs we have right now are straining the global power grids. The micro-instant electricity production slips, they'll drop like flies (Roko's Basilisk MFs when a minor brownout takes out their 'god').
This is kind of the less powerful variant of the paperclip problem.
Anytime I worry about the robot uprising, I just remember the time Google Location couldn't figure out what method of transportation I was using for two and a half hours between the Santa Ana, CA airport and the Denver, CO airport.
You teleported very slowly, didn't you?
Current LLMs are just that: large language models. They're incredible at predicting the next word, but literally cannot perform tasks outside of that, like fact-checking or playing chess. The theoretical AI that "could take over the world" like Skynet is called "Artificial General Intelligence" (AGI). We're nowhere close yet; do not believe OpenAI when they claim otherwise. This means the highest risk currently is a human deciding to put an LLM "in charge" of an important task where a mistake could cost lives.
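To make "predicting the next word" concrete, here's a toy sketch of how generation works: the model is, at its core, a function from "the words so far" to a probability distribution over the next word, and text generation is just that lookup run in a loop. (The probability table and token names here are entirely made up for illustration; a real LLM computes these distributions with a neural network over a huge vocabulary.)

```python
# Hypothetical toy "model": maps a context (tuple of tokens seen so far)
# to a probability distribution over the next token.
TOY_MODEL = {
    ("the",): {"cat": 0.6, "dog": 0.4},
    ("the", "cat"): {"sat": 0.7, "ran": 0.3},
    ("the", "cat", "sat"): {"<end>": 1.0},
}

def generate(prompt):
    """Repeatedly append the most likely next token (greedy decoding)."""
    tokens = list(prompt)
    while True:
        dist = TOY_MODEL.get(tuple(tokens))
        if dist is None:
            break  # context the toy model has never seen
        next_token = max(dist, key=dist.get)  # pick highest-probability token
        if next_token == "<end>":
            break
        tokens.append(next_token)
    return " ".join(tokens)

print(generate(["the"]))  # -> "the cat sat"
```

Note that nothing in this loop checks whether the output is *true*; it only checks what's *likely*. That's the whole point of the comment above: fluency and factuality are different things.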
Just as an aside and in addition to the other comments here:
There is a phenomenon called regulatory capture. It can take many different forms, but the short version is that agencies and policies get perverted to benefit only one group, when the intention should be to benefit society at large.
There is a pattern where the big players, say OpenAI, call for regulation of their industry, not because they feel it needs regulating but because the regulatory hurdles will keep competitors at bay. Meta pulled a stunt like that as well with social networks. So a big hype company calling for regulation of its own field is a red flag, accompanied by a loud alarm bell.
Thanks, I needed that red pill.
No. AI and robots don't care about anything. They don't care about taking over. Whoever controls them, though, now we're talking. And that's much worse.
I recently read a neat little book called "Rethinking Consciousness" by SA Graziano. It has nothing to do with AI, but is an attempt to describe the way our myriad neural systems come together to produce our experience, how that might differ between animals with various types of brains, and how our experience might change if some systems aren't present. It sounds obvious, but the simpler the brain, the simpler the experience. For example, organisms like frogs probably don't experience fear. Both frogs and humans have a set of survival instincts that help us detect movement, classify it as either threat or food or whatever, and immediately respond, but the emotional part of your brain that makes your stomach plummet just doesn't exist in them.
Humans automatically respond to a perceived threat in the same way a frog does--in fact, according to the book, the structures in our brains that dictate our initial actions in those instinctive moments are remarkably similar. You know how your eyes will automatically shift to follow a movement you see in the corner of your vision? A frog responds in much the same way. It's not something you have to think about--often your eye will have darted over to the point of interest even before you realize you've noticed something. But your experience of that reaction is also much richer than it is possible for a frog's to be, because we have far more layers of systems that all interact to produce what we call consciousness. We have a much deeper level of thought that goes into deciding whether that movement was actually important to us.
It's possible for us to continue to live even if we lose some parts of the brain--our personalities will change, our memory may get worse, or we may even lose things like our internal monologue, but we still manage to persist as conscious beings until our brains lose a large number of the overlying systems, or some very critical systems. Like the one that regulates breathing--though even that single function is somewhat shared between multiple systems, allowing you to breathe manually (have fun with that).
All that to say the things we're currently calling AI just don't have that complexity. At best, these generative models could fill out a fraction of the layers that would be useful for a conscious mind. We have developed very powerful language processing systems, at least in terms of averaging out a vast quantity of data. Very powerful image processing. Audio processing. What we don't have--what, near as I can tell, we haven't made any meaningful progress on at all--is a system to coalesce all these processing systems into a whole. These systems always rely on a human to tell them what to process, for how long, and ultimately to check whether the result of a process is reasonable. Being able to process all of those types of input simultaneously, choosing which ones to focus on in the moment, and continuously choosing an appropriate response? Barely even a pipe dream. And even all of that would be distinct from a system to form anything like conscious thought.
Right now, when marketing departments say "AI," what they're describing is like that automatic response to movement. Movement detected, eye focuses. Input goes in, output comes out. It's one small piece of the whole that's required when science fiction writers say "AI."
TL;DR no, the current generative model race is just tech stock market hype. The absolute best it can hope for is to reproduce a small piece of the conscious mind. It might be able to approximate the processing we're capable of more quickly, but at a massively inflated energy expenditure, not to mention the research costs. And in the end it still needs a human double checking its work. We will need to develop a vast number of other increasingly complex systems before we even begin to approach a true AI.
It's not complex enough to have intentions of any kind, so the only danger is that people will do incredibly stupid things with it.
Imagine duct-taping a sharp knife to a Roomba. The Roomba has no concept of what ankles or stabbing even are. It will roll around the floor as it always does, devoid of either malice or compassion, and any ankle-stabbing that ensues can only really be described as your fault.