this post was submitted on 22 Aug 2023
107 points (100.0% liked)

Technology

top 33 comments
[–] crussel@lemmy.blahaj.zone 53 points 1 year ago

Come on now, next you’ll be saying the tech industry consistently overplays its incremental improvements as Earth-shattering paradigm shifts purely for the investment money!

This message posted from the metaverse

[–] bh11235@infosec.pub 46 points 1 year ago (2 children)

Reading this comment section is so strange. Skepticism about generative AI seems to have become some kind of professional sport on the internet.

Consensus in our group is that generative AI is a great tool. Maybe not perfect, but the comparison to the metaverse is absurd: no one asked for the metaverse or needed it for anything, whereas GPT has literally bailed us out of several difficult situations. For example, a proof of concept needed to be written in a programming language that no one in the group had enough experience with. Without GPT, this could easily have cost someone a week; with GPT assistance, the proof of concept was ready in less than a day.

Generative AI does suffer from a host of problems. Hallucinations, jailbreaks, injections, reality 101 failures, believe me I've encountered all these intimately as I've had to utilize GPT for some of my day job tasks, often against its own better judgment and despite its own woefully lacking capacity to deal with the task. What I think is interesting is a candid discussion: why do these issues persist? What have we tried? What techniques can we try next? Are these issues intractable in some profound sense, and constitute a hard ceiling for where generative AI can go? Is there an "impossibility theorem for putting AI on autopilot"? Or are these limitations just artifacts we can engineer away and route around?

It seems like instead of having this discussion, it has become in vogue to wave the issues around triumphantly and implicitly declare the field successfully dunked on and the discussion over. That's, to be blunt, reductive. Smartphones had issues; the early internet had issues. Sure, "they also laughed at Bozo the Clown" and all that, but without a serious discussion of the landscape right now, of how far away we are from mitigating these issues and why, a lot of this "ha ha, suck it, AI" discourse strikes me as deeply performative. Suppose a year from now OpenAI solves hallucinations and the issue is just gone. Do all the cool kids who sneered at the invented legal precedents, crafted their image as knowing better than the OpenAI dweebs, and elegantly implied that hallucinations are proof the entire field is a stupid, useless dead end lose any face? I think they don't, and I think that's why this sneering has become such a lucrative online professional sport.

[–] floofloof@lemmy.ca 12 points 1 year ago* (last edited 1 year ago)

Some of the skepticism is just a reaction to the excessive hype with which generative AI has been pushed over the past few months. If you've seen tech hype cycles before, the hype itself can generate some skepticism. Plus there are many dubious cases where companies are shoving ChatGPT or similar into their products just so they can advertise them as "AI powered", and these poorly thought out, marketing-driven moves deserve criticism.

[–] Anticorp@lemmy.ml 8 points 1 year ago

It's amazing how critical Lemmy is of ChatGPT. It has become fashionable to pretend it's a trash technology. The reality is that it is changing the world and will continue to do so.

[–] Moobythegoldensock@lemm.ee 26 points 1 year ago

3 months ago: Everyone’s going to lose their jobs!

Today: Generative AI’s dead!

More realistically: Generative AI is a tool that will gradually get better over time. It is not universally applicable, but it does have a lot of potential applications. It is not going to take over the world, nor will it just suddenly go away.

[–] birdcat@lemmy.ml 25 points 1 year ago (3 children)

"If hallucinations aren't fixable, generative AI probably isn't going to make a trillion dollars a year," he said. "And if it probably isn't going to make a trillion dollars a year, it probably isn't going to have the impact people seem to be expecting," he continued. "And if it isn't going to have that impact, maybe we should not be building our world around the premise that it is."

Well, he sure proves one does not need an AI to hallucinate...

[–] ReallyKinda@kbin.social 13 points 1 year ago (1 children)

Clearly nothing can change the status quo if it doesn’t also make trillions

[–] birdcat@lemmy.ml 5 points 1 year ago* (last edited 1 year ago)

The assertion that our Earth orbits the sun is as audacious as it is perplexing. We face not one, but a myriad of profound, unresolved questions with this idea. From its inability to explain the simplest of earthly phenomena, to the challenges it presents to our longstanding scientific findings, this theory is riddled with cracks!

And, let us be clear, mere optimism for this 'new knowledge' does not guarantee its truth or utility. With the heliocentric model, we risk destabilizing not just the Church's teachings, but also the broader societal fabric that relies on a stable cosmological understanding.

This new theory probably isn't going to bring in a trillion coins a year. And if it probably isn’t going to make a trillion coins a year, it probably isn’t going to have the impact people seem to be expecting. And if it isn’t going to have that impact, maybe we should not be building our world around the premise that it is.

[–] southernwolf@pawb.social 4 points 1 year ago* (last edited 1 year ago)

Imagine if someone had said something like this about the 1st generation iPhone... Oh wait, that did happen and his name was Steve Ballmer.

[–] Pelicanen@sopuli.xyz 2 points 1 year ago* (last edited 1 year ago)

maybe we should not be building our world around the premise that it is

I feel like this is a really important bit. If LLMs turn out to have unsolvable issues that limit the scope of their application, that's fine, every technology has that, but we need to be aware of that. A fallible machine learning model is not dangerous; AI-based grading, plagiarism checking, resume-filtering, coding, etc. without skepticism is dangerous.

LLMs probably have very good applications that could not be automated in the past, but we should be very careful about what we assume those things to be.

[–] uriel238@lemmy.blahaj.zone 20 points 1 year ago (1 children)

In the early 1980s, a teacher refused to let me word-process my homework (my penmanship was shit) on the grounds that I shouldn't be able to produce a paper at the touch of a button.

Upper management looks at AI end results and imagines a similar scenario: they don't see the human effort behind the dumbwaiter, and imagine a clerk can just tell an LLM "make me a sequel to Dumbo" without getting very specific, and without then having a team of reviewers watch hundreds of terrible elephant films to curate the few good ones.

But what is telling is how our corporate bosses have responded to the prospect of automated art. Much like the robot pizza company that did not automate the process and pass the savings on to you (its offerings were typical pizza at typical prices, and it kept all the savings for itself), our senior execs imagine ways to replace workers with cheaper automation rather than producing better stuff or cheaper movie tickets for their customers.

So maybe we should growl at them and change the system before they figure out how to actually pay fewer people while keeping more profits.

[–] Anticorp@lemmy.ml 5 points 1 year ago (1 children)

Companies will always keep all the savings and pass on all the expenses. That's just how they operate. You're not going to be able to change that system short of a revolution.

[–] Strawberry@lemmy.blahaj.zone 10 points 1 year ago

That's what change the system means

[–] Naich@kbin.social 14 points 1 year ago (1 children)

I can't believe this tech bubble will burst. All the other ones have fared so well.

[–] ZILtoid1991@kbin.social 6 points 1 year ago* (last edited 1 year ago)

Because they were far more useful to the average person than this glorified spam-making machine is. Also, it's not like something like this is happening for the first time...

EDIT: forgot to grammar

[–] ReallyKinda@kbin.social 11 points 1 year ago

AI doesn't seem to do well when it trains on its own data, so I do think there's a possibility it's a one-trick pony. Once there's too much AI content in the data it's trained on, it will devolve into nonsense.
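
This feedback loop, models training on earlier models' output, can be sketched with a toy simulation (all numbers invented; a normal distribution stands in for a real data distribution). The sketch assumes each generation's output is slightly less diverse than its training data, which is roughly what low-temperature or truncated sampling does:

```python
import random
import statistics

def train_and_generate(samples, n, temperature=0.9):
    # "Train" by fitting a normal distribution to the samples, then
    # "generate" a new dataset from the fit. temperature < 1 mimics
    # generated output being slightly less diverse than the data it
    # was trained on (as with low-temperature/truncated sampling).
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    return [random.gauss(mu, temperature * sigma) for _ in range(n)]

random.seed(42)
data = [random.gauss(0.0, 1.0) for _ in range(500)]  # generation 0: "human" data
diversity = [statistics.stdev(data)]
for _ in range(20):  # each new model trains only on the previous model's output
    data = train_and_generate(data, 500)
    diversity.append(statistics.stdev(data))

print(f"diversity: gen 0 = {diversity[0]:.2f}, gen 20 = {diversity[-1]:.2f}")
```

In this sketch the diversity of the data (the fitted standard deviation) decays toward zero across generations, the same qualitative collapse reported when generative models are repeatedly trained on their own output.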

[–] MargotRobbie@lemm.ee 10 points 1 year ago

Ultimately, generative AI models are tools, not magic. We're now past the hype phase and at the leveling-out phase of the S-curve, as people realize that these things are limited.

I think ChatGPT is mostly going to be used as an automated copywriter for emails and resumes and such, whereas diffusion models will find their way into digital artists' workflow.

Life goes on.

[–] hottari@lemmy.ml 10 points 1 year ago (1 children)

Isn't ChatGPT's launch less than 6 months old or something...

[–] Peanutbjelly@sopuli.xyz 3 points 1 year ago (1 children)

Reminds me of the article saying OpenAI is doomed because it can only last about thirty years at its current level of expenditure.

[–] hottari@lemmy.ml 1 points 1 year ago

OpenAI must evolve into serving something other than generative AI.

The compute bills for OpenAI are crazy. They would need many more paying customers just to keep the service somewhat viable.

https://futurism.com/the-byte/chatgpt-costs-openai-every-day
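
The widely cited estimate at the time was roughly $700,000 per day in inference costs (an outside estimate, not an OpenAI figure). A back-of-envelope sketch of what that implies:

```python
# Back-of-envelope math, assuming the roughly $700k/day inference-cost
# estimate reported at the time (an outside guess, not an OpenAI figure).
daily_cost = 700_000                     # USD per day
yearly_cost = daily_cost * 365           # ~$255.5M per year
plus_revenue_per_user = 20 * 12          # ChatGPT Plus: $20/month
subscribers_to_break_even = yearly_cost / plus_revenue_per_user

print(f"~${yearly_cost / 1e6:.1f}M/year")
print(f"~{subscribers_to_break_even:,.0f} Plus subscribers just to cover inference")
```

That works out to over a million paying subscribers needed just to cover inference under these assumptions, before salaries, training runs, or any other expenses.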

[–] amju_wolf@pawb.social 9 points 1 year ago

Kinda is, sure. The problem is when you become overly reliant on the tech without it being reliable. It's also kinda bad when it causes you to lose skills that you need to maintain it or further it.

[–] PetePie@kbin.social 6 points 1 year ago (6 children)

I’m curious about the development of artificial intelligence in the future, and I’m looking forward to seeing what GPT-5 can do. If it’s a huge leap forward, then I will agree that the future will be very different from what we have now. But if it’s only a slight improvement, like Llama 1 vs Llama 2, then large language models (LLMs) might face the same challenges as self-driving cars. They are somewhat functional, but not reliable enough to let you sleep on your commute, and they won’t be for a long time.
It might be impossible to eliminate all hallucinations from LLMs, but if the next versions are incredibly useful, then we will learn to live with them. For example, currently around 30% of chips on a wafer fail, but we still produce CPUs and they are groundbreaking technology.

But even GPT-4+ will have a significant impact on our future, especially in education. Every kid will have an AI in their phone that is ready to answer all their questions with minimal effort. This will greatly enhance the intelligence of future generations and make education accessible to almost everyone on earth at a similar high level. But this will not make us all lose our jobs in 10 years.

[–] duringoverflow@kbin.social 3 points 1 year ago

This will greatly enhance the intelligence of future generations and make education accessible to almost everyone on earth at a similar high level.

I don't think that accessibility of AI somehow correlates with the intelligence of those using it. It can actually work in the completely opposite way, where people blindly trust it, or get so used to it that they're unable to do anything without the technology's help. Like people who are unable to navigate two blocks from their house without Google Maps navigation, even though they make the same trip every day.

[–] GunnarRunnar@kbin.social 1 points 1 year ago (1 children)

This will greatly enhance the intelligence of future generations and make education accessible to almost everyone on earth at a similar high level.

You mean at the current ChatGPT level? Because I'm unsure whether the future versions will be open source or open access; if not, surely it will just widen the disparity in education.

[–] 520@kbin.social 4 points 1 year ago

Lol. OpenAI hasn't made GPT open source since version 2. That said, their best interest currently lies in keeping access public and their name in the headlines; they need an income source.

[–] FatTony@discuss.online 2 points 1 year ago (2 children)

Genuine question: How hard is it to fix A.I. Hallucinations?

[–] ollien 13 points 1 year ago

I'm no expert, so take what I'm about to say with a grain of salt.

Fundamentally, an LLM is just a fancy autocomplete: there's no source of knowledge it's tapping into, it's just guessing words (though it is quite good at it). And even if it did have a pool of knowledge, that couldn't be perfect either, because in many areas the truth is never quite so black and white.

In other words, hard.
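
The "fancy autocomplete" intuition can be made concrete with a toy bigram model: count which word follows which in a corpus, then generate text by repeatedly sampling a likely next word. A real LLM conditions on far more context with billions of parameters, but the core loop, predicting the next token from what came before, is the same. (The corpus here is made up for illustration.)

```python
import random
from collections import Counter, defaultdict

corpus = (
    "the model guesses the next word . "
    "the model has no database of facts . "
    "it only learned which word tends to follow which ."
).split()

# Count next-word frequencies for each word: a bigram "language model".
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def generate(start, length=8):
    words = [start]
    for _ in range(length):
        options = following[words[-1]]
        if not options:
            break
        # Sample the next word in proportion to how often it followed this one.
        tokens, counts = zip(*options.items())
        words.append(random.choices(tokens, weights=counts)[0])
    return " ".join(words)

random.seed(1)
print(generate("the"))
```

Nothing in this loop consults a knowledge base; it only reproduces statistical patterns of the training text, which is why fluent-but-wrong output falls out of the approach so naturally.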

[–] skullgiver@popplesburger.hilciferous.nl 2 points 1 year ago* (last edited 1 year ago)

[This comment has been deleted by an automated system]

[–] AI_toothbrush@lemmy.zip 2 points 1 year ago

Oohh, I really love when I'm listening to music and click on an article and it starts autoplaying it -_-