This post was submitted on 03 Jul 2023
239 points (100.0% liked)

Technology


A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.


G/O Media, a major online media company that runs publications including Gizmodo, Kotaku, Quartz, Jezebel, and Deadspin, has announced that it will begin a "modest test" of AI content on its sites.

The trial will include "producing just a handful of stories for most of our sites that are basically built around lists and data," G/O Media editorial director Merrill Brown wrote. "These features aren't replacing work currently being done by writers and editors, and we hope that over time if we get these forms of content right and produced at scale, AI will, via search and promotion, help us grow our audience."

[–] sarsaparilyptus@lemmy.fmhy.ml 83 points 1 year ago (6 children)

All these AIs are going to produce is incoherent, poorly written, unresearched puff pieces about topic-adjacent garbage. In other words, we won't even be able to tell when the switch happens.

[–] altima_neo@lemmy.zip 12 points 1 year ago

lol yeah.

A lot of articles are so trash. Repeating the title, just slightly reworded, in 6 additional paragraphs.

[–] worfamerryman 2 points 1 year ago

So, business as usual?

[–] scrubbles@poptalk.scrubbles.tech 2 points 1 year ago (1 children)

I'm sure we'll soon have AI blockers just like ad blockers: extensions to hide or warn us about AI-generated content.

[–] ConsciousCode 4 points 1 year ago

This isn't possible in principle; there are a few papers which prove that. You'd have to either maintain a list of every site that uses AI or establish standards and regulations for signaling what is AI-generated, à la robots.txt. A better approach might be to make an AI which classifies the quality of an article, which is honestly going to get better results anyway. A listicle written by a human isn't going to be any more valuable than one written by ChatGPT.
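As a rough, purely illustrative sketch of that quality-classifier suggestion (not anything the commenter or any publisher actually runs): a classical text classifier trained on human-rated examples. The example articles, labels, and model choice below are hypothetical placeholders.

```python
# Toy sketch of an article-quality classifier (hypothetical data and labels).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training set: articles rated by human editors.
articles = [
    "An in-depth report with named sources, dates, and verifiable figures.",
    "Top 10 gadgets you NEED right now! Number 7 will shock you!",
]
labels = [1, 0]  # 1 = substantive, 0 = low-effort listicle

quality_model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word and bigram features
    LogisticRegression(),
)
quality_model.fit(articles, labels)

def quality_score(text: str) -> float:
    """Estimated probability that an article is substantive, regardless of author."""
    return quality_model.predict_proba([text])[0][1]
```

The point being that a filter like this scores the writing itself rather than trying to prove who (or what) wrote it.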

[–] Azrael@midwest.social 2 points 1 year ago

Soo...... Nothing really changes right?

[–] Meloku@feddit.cl 66 points 1 year ago (7 children)

What all these trend-chasing CEOs fail to grasp about ChatGPT is that the neural network is trained to return what looks like a human-written answer, but it is NOT, IN ANY CASE, GOING TO RETURN INFORMATION. If you ask ChatGPT to write an essay with sources, ChatGPT is going to write a somewhat coherent essay with what looks like sources, but it's a crapshoot whether the sources are even real, because you asked for an essay with sources, not an essay USING any given source. Anyway, I'm going to heat some popcorn and wait for the inevitable fake articles and the associated debacle.

[–] count_duckula@discuss.tchncs.de 29 points 1 year ago (2 children)

ChatGPT is an engineering marvel in that it has understood the semantics of language. However, it has absolutely no idea what it is talking about beyond generating the next token in a string of what sounds like natural language. I wish more people would understand this nuance.

[–] xavier666@lemm.ee 5 points 1 year ago (2 children)

It's actually perfect for the current attention-span deficit generation

[–] ConsciousCode 11 points 1 year ago

Hearing about that lawyer who thought ChatGPT magically had access to unknown cases, and who didn't even bother to read the output to see if it made sense, is simultaneously hilarious and incredibly cringe.

[–] TMoney 7 points 1 year ago

If you've seen the articles most news agencies put out, besides a select high-quality few, I doubt anyone will be able to tell the difference. They're all shit. They're probably just switching from Mechanical Turk to this.

[–] mint 43 points 1 year ago

so many of the replies to this post are so embarrassing lol. just because you can't tell the difference between good and bad writing doesn't mean that real writers should be replaced by bots that literally produce a facsimile of written content, as opposed to content with a point, which an LLM can't actually produce.

like you think you're looking at bad writing, but when the robots start churning out shit articles you'll realize that you'd rather have a human than this nonsense

[–] prole 40 points 1 year ago* (last edited 1 year ago) (1 children)

This is just going to get worse and worse. Corporations are going to continue to do everything they can to increase profits, and that means getting rid of human employees and replacing them (regardless of how effectively) with AI.

We are going to end up with a ton of people out of work, and zero safety net. Instead of the utopian AI future possibility where everyone can live and enjoy fulfilling lives because they don't need to work anymore, we end up with a massive population of people who can't afford a roof over their head because they were laid off and replaced with AI.

The working class needs to wake the fuck up and unify against the cancer of late stage capitalism.

[–] Mandy 40 points 1 year ago (1 children)

Let's be real, these two sites specifically have read like they were written by AI for years now.

[–] BarrelAgedBoredom 6 points 1 year ago

Yep, I'm broadly against AI replacing humans in creative fields, but I have a hard time feeling sorry for Gizmodo and Kotaku. Their bread and butter is generic, uninspired, and poorly composed "journalism".

[–] ConsciousCode 26 points 1 year ago* (last edited 1 year ago) (1 children)

As someone working on LLM-based stuff, this is a terrible idea with current models and techniques unless they have a dedicated team of human editors to make sure the AI doesn't go off the rails, to say nothing of the cruelty of firing people to save maybe a few hundred thousand dollars with a substantial drop in quality. They can be very smart with proper prompting, but are also inconsistent and require a lot of handholding for anything requiring executive function or deliberation (like... writing an article meant to make a point). It might be possible with current models, but the field is way too new and techniques too crude to make this work without a few million dollars in R&D, at which point it'll probably be completely wasted when new developments come out nearly every week anyway.

Also wait, wtf are they going to do for game reviews? RL can barely complete Minecraft (which is an astonishing development, but it's so bleeding edge it might just cut to the bone). Even if they got some ultra-high-tech multimodal multi-model AI to play a game and review it, it would need to be an artificial person (AGI + autonomy) to even approximate human sensibilities and preferences.
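For illustration only, the kind of human-in-the-loop gate the comment above is calling for might look something like this sketch. `generate_draft` is a stand-in for whatever LLM call a publisher uses; none of the names refer to a real system.

```python
# Hypothetical editorial gate: nothing an LLM drafts gets published until a
# human editor explicitly approves it.
from dataclasses import dataclass, field

@dataclass
class Draft:
    topic: str
    body: str
    approved: bool = False
    editor_notes: list[str] = field(default_factory=list)

def generate_draft(topic: str) -> Draft:
    # Placeholder for an LLM call (e.g. a chat-completion request).
    return Draft(topic=topic, body=f"[model-generated listicle about {topic}]")

def editorial_gate(draft: Draft, approve: bool, notes: str = "") -> Draft:
    """Only a human decision can flip `approved`; notes are kept for audit."""
    if notes:
        draft.editor_notes.append(notes)
    draft.approved = approve
    return draft

def publish(draft: Draft) -> None:
    if not draft.approved:
        raise RuntimeError("Refusing to publish an unreviewed AI draft.")
    print(f"Publishing: {draft.topic}")
```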

[–] Sina 5 points 1 year ago (2 children)

but the field is way too new and techniques too crude to make this work without a few million dollars in R&D

I think AI is evolving so rapidly that by the time they get anywhere with this on Gizmodo the hand holding might not be nearly as necessary.

[–] ConsciousCode 9 points 1 year ago

It's hard to say. My intuition is that LLMs themselves aren't capable of this until some trillion-parameter neural phase transition maybe, and more focus needs to be put on the cognitive architectures surrounding them. Basically, automated hand-holding so they don't forget what they're supposed to be doing, the equivalent of the brain's own feedback loop.

The main issue is that executive function is such a weak signal in the data that a model would probably have to reach ASI before it starts optimizing for it, so you either need specialized RL or algorithmic task prioritization.

[–] Noved@lemmy.ca 24 points 1 year ago (2 children)

I feel like AI would be better used to replace CEOs than frontline workers lol. Get rid of your most expensive asset and improve efficiency.

[–] shanghaibebop 9 points 1 year ago (2 children)

Unfortunately the primary job of the CEO these days is more or less investor relations and relationship management between the C-staff. Can't automate that quite yet.

[–] Zapp 4 points 1 year ago

Can't automate any of the other jobs they are trying AI on yet either, but it's not stopping them.

[–] thejml@lemm.ee 4 points 1 year ago (1 children)

Or at least middle management. Though it would require better reporting/time tracking/statistics to do so. Something easily gamed.

[–] lucien 5 points 1 year ago

Eh, you can improve reporting, time usage, and statistics all you want. It won't help people stop making stupid, short-sighted decisions. If it isn't middle management, it'll be the people controlling the AIs that replace them.

CEO: "AI, give me a plan to improve profits by at least 10% in the next quarter."

AI: "<plan>. Note: enacting this plan will cause talent attrition and there is a 70% chance of -50% revenue over the following 5 years."

CEO: "Sounds great, I'm retiring next year!"

The people up top have plenty of information on how to run a long-term successful business, but still choose to make illogical decisions which screw them over in the long term. Changing the source of data to an AI just means that the CEO can ignore any feedback or metrics which don't agree with their internal model and incentive structure.

[–] Fizz@lemmy.nz 24 points 1 year ago (2 children)

I doubt we will even notice the quality drop. Those sites have been pumping out pure trash for years.

[–] Rentlar 14 points 1 year ago (1 children)

Writing "listicles" is dead-brain work anyway, no creativity or actual research beyond the list itself needed.

5 things your doctor didn't tell you at your last appointment, number 4 will shock you.

[–] renard_roux 5 points 1 year ago (3 children)

Don't leave us hanging, buddy, what's number 4?! 😳

[–] storksforlegs 21 points 1 year ago (3 children)

I know there are already people working on creating AI filters, to filter out spam articles and other AI-created content.

I'd pay for that, it'll be the new adblocker. Fuck any company that does this.

[–] shanghaibebop 10 points 1 year ago

We really need AI content label regulations.

[–] rwhitisissle 4 points 1 year ago

I know there are already people working on creating AI filters, to filter out spam articles and other AI-created content.

These will probably (ironically) be largely labeled by AI. As in, you get an AI to detect AI-generated text and flag those websites as likely AI-generated, with some kind of scaling probability index. That said, I think you could use AI to enhance human writing and that's fine. Maybe write something on your own and then have an AI restructure it or reword things for clarity, fixing grammar mistakes and other things. But full-on "write me an article on [insert random thing here]" is where shit gets tedious.
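One way to read that "scaling probability index" idea is perplexity-based scoring, where text a small language model finds unusually predictable gets a higher "likely AI" score. The sketch below uses GPT-2 from Hugging Face Transformers purely as an illustration; the thresholds are arbitrary, and this is a weak signal, not a reliable detector.

```python
# Illustrative perplexity scorer; low perplexity is one (weak) hint of machine-generated text.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2; lower means the model finds it more predictable."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

def ai_probability_index(text: str, low: float = 15.0, high: float = 60.0) -> float:
    """Map perplexity onto a crude 0-1 'likely AI-generated' score (arbitrary bounds)."""
    p = perplexity(text)
    return min(1.0, max(0.0, (high - p) / (high - low)))
```

A real system would combine many such signals, which is exactly why a flat probability index tends to be noisy on its own.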

[–] sub_ 17 points 1 year ago (2 children)

Gaming press has been in a downward spiral for quite some time, so this is expected. Even before and without AI replacing them, there have been quite a number of staff cuts in recent years, and they have been employing underpaid freelancers for a long time (I remember IGN boasting about paying freelancers above market price, but it is still barely enough to live on).

Eventually these companies with AI-generated articles will also disappear, not only because the AI is trained on clickbaity data, but also because the current trend is people getting their previews/reviews from (sponsored 🤢) YouTubers, Twitch streamers, or worse, TikTokers (how do you fit a review into a 30-second video?).

And IIRC Jeff Gerstmann mentioned in his podcast that some people get into gaming press as a stepping stone to the gaming industry itself, so maybe they will get a job there? I mean, many of the older gaming journalists are either taking turns occupying senior positions at IGN, Gamespot, Polygon, and Kotaku, or just straight up working for game companies as creative directors or in PR. So far it seems to have worked out not so badly for them, except for those in junior/freelancer positions.

Thankfully, investigative journalism and long-form insight writing are still untouched by AI shenanigans. Bellingcat and ICIJ are still writing great stuff. But I have no idea how long that will last.

[–] ndr 5 points 1 year ago

Small unimportant detail: it has been possible for a while now to post content up to 10 minutes long on TikTok, plus people usually do multiple parts (super annoying).

[–] peanuts4life 17 points 1 year ago (2 children)

In a world where arguably the second most advanced LLM on the planet (either GPT-3.5 or Bing's OpenAI implementation) is completely free to use, why would I want to read anything on your website that wasn't researched by a human?

I wish I could sear this question into every CEO's brain.

[–] PrinceOfKorakuen 3 points 1 year ago

I think that many CEOs might feel similarly, but it's not enough to dissuade them from rolling the dice on replacing human staff, because of the short-term gain in savings. There probably won't be any better time to either experiment with replacing swaths of human writers with LLMs or to scale output (and potentially their audience) by investing in them. So, they're jumping on this bandwagon while it's still novel and since "everyone else is doing it."

[–] Zapp 2 points 1 year ago

Let's skip the searing, and just replace the CEO with another chat bot.

[–] PaupersSerenade 12 points 1 year ago

Why do I feel like I stepped into r/KotakuInAction‽ This is shitty no matter who it happens to. And EVERY news outlet puts out junk these days. The vitriol for this publication seems way more than necessary.

[–] Ecksell@lemmy.one 9 points 1 year ago (5 children)

So let me make sure I understand this: Bad writers writing bad articles for bad websites to be replaced by bad AI writing bad articles on bad websites?

Long story short, nothing of value is being lost.

[–] kbyanyname 9 points 1 year ago (1 children)

It's really frustrating that they don't specify what the AI will be trained on. Will it just be copying the work already done by actual journalists here? Is it scraping from other sites? If they're going to be doing this at any scale, they need to be transparent about the source material to avoid plagiarism.

[–] Zapp 3 points 1 year ago (1 children)

Seems like the preferred approach is "... And find out", lately.

The lawyers always win, in the end.

[–] kbyanyname 2 points 1 year ago

Yeah, I get the feeling they're hoping that this will be moving too fast for it to really get caught. It's just so wild that so much effort is put into getting robots to do the creative work (that people would like to underpay for) as opposed to the routine work that AI would actually be good at (but wouldn't save much money)

[–] valvin 8 points 1 year ago (1 children)

Maybe it'll reveal that many websites don't want to give interesting news to their audience, but only want them to watch ads to make more money.

[–] kuchaibee 7 points 1 year ago (1 children)

It's like they actively want the internet to become a shittier place, as long as it makes money. The usual. I really hope people don't want to read this crap.

[–] Powderhorn 5 points 1 year ago

I feel for the journalists this is affecting, but I find the shock to be either disingenuous or a sign they haven't been paying attention to their own industry.

[–] Syrup@lemmy.cafe 5 points 1 year ago

It really is a race to the bottom in journalistic quality. I suppose it was inevitable that the rush to be "first to post" would be automated.

Though I wonder if this spawns a demand for in-depth, researched articles, or if YouTube personalities already fill that gap.

[–] AlphaCenturia@lemmy.fmhy.ml 5 points 1 year ago

I honestly don't think people will be able to tell the difference. I can't even count the number of times I've searched for the solution to a PC issue I'm having, only to be met with articles titled "How to solve X issue" that never actually mention it. Those crappy articles are what drove me to Reddit to get solutions from actual people (RIP Reddit).

[–] that_one_guy 4 points 1 year ago

This really shows what kind of an opinion they have of their readers. I can't believe that they think their readers would want to hear anything that a regurgitation engine would generate. There's no way they aren't dooming their publications in the long term with moves like these.
