this post was submitted on 17 Jul 2024
357 points (100.0% liked)

PC Gaming

top 29 comments
[–] rainynight65@feddit.de 25 points 2 months ago (1 children)

I am generally unwilling to pay extra for features I don't need and didn't ask for.

[–] Appoxo@lemmy.dbzer0.com 5 points 2 months ago

Ray tracing is something I'd pay for even if I hadn't asked for it, assuming it meaningfully impacts quality and doesn't demand outlandish prices.
And they'd need to put it in unasked and cooperate with devs, or else it won't catch on quickly enough.
Remember Nvidia Ansel?

[–] cygnus@lemmy.ca 20 points 2 months ago (1 children)

The biggest surprise here is that as many as 16% are willing to pay more...

[–] ShinkanTrain@lemmy.ml 4 points 2 months ago* (last edited 2 months ago)

I mean, if framegen and supersampling solutions become so good on those chips that regular versions can't compare, I guess I would get the AI version. I wouldn't pay extra compared to current pricing, though.

[–] crazyminner@lemmy.ml 19 points 2 months ago (1 children)

I was recently looking for a new laptop and I actively avoided laptops with AI features.

[–] lamabop@lemmings.world 15 points 2 months ago

Look, me too, but the average punter on the street just looks at new AI features and goes "OK, sure, give it to me." Tell them about the dodgy shit that goes with AI and you'll probably get a shrug at most.

[–] alessandro@lemmy.ca 17 points 2 months ago

I don't think the poll question was well made... "would you like to part with your money for..." vaguely shakes hand in air "...AI?"

People were already paying for "AI" even before ChatGPT came out to popularize it: DLSS.

[–] UltraGiGaGigantic@lemm.ee 17 points 2 months ago (1 children)

We're not gonna make it, are we? People, I mean.

[–] alessandro@lemmy.ca 3 points 2 months ago* (last edited 2 months ago) (1 children)

Didn't John Connor befriend the second AI he found?

[–] Marin_Rider@aussie.zone 8 points 2 months ago (2 children)

yeah but it didn't try to lock him into a subscription plan or software ecosystem

[–] Appoxo@lemmy.dbzer0.com 2 points 2 months ago* (last edited 2 months ago)

It locked him into the world of the terminators? Imo a mighty subscription

/j

[–] alessandro@lemmy.ca 1 points 2 months ago

yeah but it didn’t try to lock him into a subscription plan or software ecosystem

Not the AI's fault: the first one (the one that got killed) was remotely controlled by the product of a big corp (Skynet); the other one was local and offline.

Moral of the story: there's a difference between the AI that runs locally on your GPU and the one that runs on Elon's remote servers... and that difference may be life or death.

[–] AVincentInSpace@pawb.social 10 points 2 months ago

I'm willing to pay extra for software that isn't

[–] ArchRecord@lemm.ee 10 points 2 months ago* (last edited 2 months ago) (1 children)

And when traditional AI programs can be run on much lower end hardware with the same speed and quality, those chips will have no use. (Spoiler alert, it's happening right now.)

Corporations, for some reason, can't fathom why people wouldn't want to pay hundreds of dollars more just for a chip that can run AI models they won't need most of the time.

If I want to use an AI model, I will, but if you keep developing shitty features nobody wants that use it, just because "AI = new & innovative," then I have no incentive to use it. LLMs are useful to me sometimes, but an LLM that tries to summarize the activity on my computer isn't that useful to me, so I'm not going to pay extra for a chip that I won't even use for that purpose.

[–] Natanael@slrpnk.net 4 points 2 months ago (2 children)
[–] OfficerBribe@lemm.ee 3 points 2 months ago (1 children)
[–] Natanael@slrpnk.net 1 points 2 months ago

That still needs an FPGA. While they certainly seem to be able to use smaller ones, adding an FPGA chip will still add cost.

[–] ArchRecord@lemm.ee 1 points 2 months ago

Whoops, no clue how that happened, fixed!

[–] chicken@lemmy.dbzer0.com 9 points 2 months ago

I can't tell how good any of this stuff is, because none of the language they're using to describe performance makes sense in comparison with running AI models on a GPU. How big a model can this stuff run, and how does it compare to the graphics cards people use for AI now?

[–] JokeDeity@lemm.ee 7 points 2 months ago (1 children)

The other 16% were bots answering.

[–] Buelldozer@lemmy.today 6 points 2 months ago* (last edited 2 months ago)

I'm fine with NPUs / TPUs (AI-enhancing hardware) being included with systems because they're useful for more than just OS shenanigans and commercial generative AI. Do I want Microsoft Copilot Recall running on that hardware? No.

However, I've bought TPUs for things like Frigate servers and various ML projects. For gamers there are some really cool use cases out there for using local LLMs to generate NPC responses in RPGs. For "Smart Home" enthusiasts, things like Home Assistant will be rolling out support for local LLMs later this year to make voice commands more context-aware.

So do I want that hardware in there so I can use it MYSELF for other things? Yes, yes I do. You probably will eventually too.
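
For what it's worth, the NPC-dialogue idea is easy to prototype on local hardware today. Here's a minimal sketch using the Hugging Face transformers library; the model name, persona, and prompt format are just placeholder assumptions, not anything from the article or this thread:

```python
# Minimal sketch: local LLM generating an in-character NPC reply.
# Model choice and prompt are illustrative placeholders.
from transformers import pipeline

# Load a small instruction-tuned model on whatever accelerator is available
# (GPU, or CPU as a fallback).
generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # hypothetical small local model
    device_map="auto",
)

def npc_reply(npc_persona: str, player_line: str) -> str:
    """Generate a short, in-character NPC reply to the player's line."""
    prompt = (
        f"You are {npc_persona}. Stay in character and answer briefly.\n"
        f"Player: {player_line}\n"
        f"NPC:"
    )
    out = generator(prompt, max_new_tokens=60, do_sample=True, temperature=0.8)
    # The pipeline returns the prompt plus the continuation; keep only the reply.
    return out[0]["generated_text"][len(prompt):].strip()

print(npc_reply("a grumpy blacksmith in a medieval town", "Can you repair my sword?"))
```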

The other 16% either don't know what AI is or are trying to sell it. A combination of both is possible. And likely.

[–] Poutinetown@lemmy.ca 3 points 2 months ago

Tbh this is probably for things like DLSS, captions, etc. Not necessarily for chatbots or generative art.

[–] CaptKoala@lemmy.ml 1 points 2 months ago

Predictable outcome, common tech company L.

[–] BlackLaZoR@kbin.run 0 points 2 months ago (1 children)

Unless you're doing music or graphic design there's no use case. And if you do, you probably have a high-end GPU anyway.

[–] DarkThoughts@fedia.io 0 points 2 months ago (1 children)

I could see a use for local text gen, but that apparently eats quite a bit more than what desktop PCs can offer if you want actually good results and speed. Generally though, I'd rather have separate expansion cards for this. Making it part of other processors is just going to increase their price, even for those who have no use for it.

[–] BlackLaZoR@kbin.run 0 points 2 months ago (1 children)

There are local models for text gen - not as good as ChatGPT, but at the same time they're uncensored - so they may or may not be useful.

[–] DarkThoughts@fedia.io 1 points 2 months ago

Yes, I know - that's my point. But you need the necessary hardware to run those models in a performant way. Waiting a minute to produce some vaguely relevant gibberish is not going to be of much use. You could also use generative text for other applications, such as video game NPCs; all those otherwise useless drones you see in a lot of open-world titles could gain a lot of depth.