Yes, but it could have been handled better. If AI was the problem, they could have made API access contingent on an application process, so they'd know who is using it; everyone else trying to use it would be denied until they were assigned a key.
Technology
100%, and they also didn't need to be total tools about it: giving a one-month window is a joke, being snarky assholes answering AMAs, telling their user base that profitability is the only thing that matters to them.
Surprising nobody, Reddit continues to make really awful business decisions. This is just another nail in their coffin.
This right here. They could have made a licensing agreement based on the classification your use falls into. Apps get one pricing model, LLMs get another. This is just lazy and greedy.
I'm thinking that they want to sell the generated data to AI companies as training data, and AI-generated content would nullify that.
edit: and obviously, right now everyone can pull their data for free. Although I don't know how that would be different with their changes if I just use a web scraper.
Reddit data is public and can be easily web scraped. Reddit doesn't own it. Spez is just throwing random memes in to distract people.
I'm sorry, but you don't know what you're talking about. These things are regulated by legal documents; you don't just wake up one morning and say "trust me bro, their data is public."
If you go and read their T&Cs, it explicitly states that scraping is forbidden without prior written consent. They only allow access to their data via APIs, which of course they charge for.
The fact that it can be easily scraped is neither here nor there; if they catch you, they can sue you.
Nah, Terms of Service are not enforceable as a browse-wrap agreement in the US and most of the EU. You can't implicitly agree to a legal document just by looking at something.
Check out the hiQ Labs v. LinkedIn case, which went to the 9th Circuit and set the precedent for this. LinkedIn lost.
99% of LLMs have pirated content and will continue to regurgitate pirated content until there is enough money at stake for a big lawsuit.
Getty is already suing Stability AI (the makers of Stable Diffusion), and someone is suing MS over Copilot, so it's already started.
Again, big money users will get sued, everyone else will scrape with impunity.
Charging for their API is a reasonable answer to the LLM data scrapers. The amount they're charging, and the speed of the changes, is not reasonable, however, IMO.
The original announcement said they were making exceptions for applications that gave back to Reddit. I and many others hoped that meant basically everyone who wasn't doing AI scraping. But it seems like they got greedy while they were at it and decided to kill everything.
Could they have something to do with it? Yes, for sure. But the thing is, they didn't have to do any of this the way they did. They could have made an API plan that allowed third-party apps to still exist and thrive, and also charged big companies that just want to use Reddit to train LLMs, with pricing and terms built around that distinction. Instead they deliberately went after third-party apps, and then doubled and tripled down on it in the face of massive backlash. If spez were competent, he would have pivoted the conversation to training LLMs for megacorps, but he didn't, and even then it would still have been bullshit that's easily seen through.
Yup. AI consumers are more profitable than 3rd-party apps. Why bother with tiered pricing when you can just name a single price point that only huge AI companies are willing to pay?
Reddit gets their content for free. Reselling it at a high price to AI/ML consumers is an easy way to turn free content into profit with almost no effort.
The value of LLMs has changed drastically in favor of open source since the Meta weights leak. The proprietary model looks pretty much wrecked now, at least as far as I understand the leaked internal memo from a Google researcher last month.
https://www.semianalysis.com/p/google-we-have-no-moat-and-neither
Oh, I'm not saying they're doing the right thing or that it was the correct decision. I'm just speculating about whether LLMs are what kicked off the whole thing.
I'm saying the premise that LLMs have anything to do with it is either an incompetent failure to keep up with LLM developments, or a pack of lies.
I disagree; it's still too early, and a bit presumptuous, to make such conclusive statements.
This is a fascinating read, thank you very much for sharing.
Training data gets gathered with scrapers
IF the owners of the data agree, or, if they disagree, only until they take you to court. Getty Images is taking the creators of Stable Diffusion to court, and some tech company is taking MS to court over Copilot.
No, the law says that if content is not supposed to be used as training data, that reservation has to be machine-readable. And for scientific purposes it's basically irrelevant. You can take whoever you want to court; that doesn't change anything.
What "law" says that? That's not how copyright works at all. If you don't have an explicit license to use content you don't own, you can't legally use it.
https://www.gesetze-im-internet.de/urhg/__44b.html
German law, and that's where many of the data-mining companies are located.
Is there an English translation available? That's a hell of a departure from international copyright agreements that I wasn't aware of if it's true.
Act on Copyright and Related Rights (Copyright Act), § 44b Text and Data Mining

(1) Text and data mining is the automated analysis of single or multiple digital or digitized works in order to extract information from them, in particular about patterns, trends and correlations.

(2) Reproductions of legally accessible works for text and data mining are permitted. The reproductions shall be deleted when they are no longer required for text and data mining.

(3) Uses according to paragraph 2 sentence 1 are only permitted if the right holder has not reserved them. A reservation of use in the case of works accessible online shall only be effective if it is made in machine-readable form.
There is no official English translation, but DeepL does a good job, to my knowledge. If you have further questions, just ask. German law is very complicated and very dependent on interpretation; it's sometimes barely understandable even for our lawyers...
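For what it's worth, the "machine-readable form" that § 44b(3) requires is commonly expressed through robots.txt rules or the W3C TDM Reservation Protocol (TDM Rep). A hedged sketch of what such a reservation can look like (the crawler name and policy URL here are illustrative examples, not a definitive list):

```
# robots.txt: deny a known training-data crawler (bot names vary by vendor)
User-agent: CCBot
Disallow: /

# HTTP response headers per the W3C TDM Reservation Protocol draft
tdm-reservation: 1
tdm-policy: https://example.com/tdm-policy.json
```

Whether any given court accepts a particular mechanism as "machine-readable" is still being worked out, but these are the signals mining operations are expected to check.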
Yes, but nothing's stopping the scraping of Reddit content from the front end.
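To illustrate how low the bar is: a minimal front-end scraper needs nothing but an HTTP client and an HTML parser. This sketch uses only the Python standard library and a hypothetical old-Reddit-style `<a class="title">` selector; real markup differs and changes often.

```python
from html.parser import HTMLParser

class TitleScraper(HTMLParser):
    """Collects the text inside <a class="title"> links.

    The selector is a stand-in for whatever the real listing markup uses.
    """
    def __init__(self):
        super().__init__()
        self.titles = []
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs
        if tag == "a" and ("class", "title") in attrs:
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "a":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.titles.append(data)

# Sample markup standing in for a fetched listing page
sample = (
    '<div>'
    '<a class="title" href="/r/x/1">First post</a>'
    '<a class="title" href="/r/x/2">Second post</a>'
    '</div>'
)
scraper = TitleScraper()
scraper.feed(sample)
print(scraper.titles)  # → ['First post', 'Second post']
```

Point being: the only real defenses against this are rate limiting and lawyers, not the API paywall.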
Technically not (well, they can make it harder), but they can sue them for doing it
Sure, but they could do the same thing with an API: make scraping for LLMs against the TOS, but not personal use. I really do think (as the OP says) it's two birds with one stone.
I'm very sure that this is the case. Reddit is pissed they gave away all the content as training data for free while struggling to monetize their platform adequately.
But I suspect the damage is already done. There are projects like Microsoft's "Orca" that largely skip learning from source data by distilling from ChatGPT and GPT-4 instead.
They missed the timing, but they're too stubborn and are doubling down on it.
Like, why go after Selig like that if it was about AI?
Why not have a cheaper legacy tier (not even free, just cheaper) so Apollo and other third party apps could stay in business? Only AI needs to get charged the higher price. Instead, it seems there's essentially only one tier and third party apps simply can't afford to pay it.
Why not both? I think they see this as an opportunity to kill two birds with one stone.
Honestly, I think so. It looks like all the big tech companies have collected enough data from us that they can now build AI models from it. Like a snapshot of humanity over a span of years.
Yes, but IMO it would be easy to separate LLMs and 3rd-party apps, since 3rd-party apps have users sign in independently. They chose to also target 3rd-party apps and take them down.
Reddit's business model was not founded on selling LLM data. Reddit got greedy and decided to change their business model to cash in on an unexpected revenue stream. What was also unexpected (to Reddit) is that you cannot cater to social media users and monetize their data for LLM training effectively at the same time. And now Reddit will have neither, and will die just like all other businesses that adopt enshittification as a core operating procedure.
Let this be a lesson to them and all that follow: do not let your greed make you blind to the consequences of your actions.
It is, but Reddit doesn't own the content on their site; according to their TOS, posters merely grant them a license to redistribute it. So it's not really their call to shut off ChatGPT scraping; it should be a community decision.
"Merely" - the TOS basically grant Reddit the ability to do what the hell they want with it, LOL
When Your Content is created with or submitted to the Services, you grant us a worldwide, royalty-free, perpetual, irrevocable, non-exclusive, transferable, and sublicensable license to use, copy, modify, adapt, prepare derivative works of, distribute, store, perform, and display Your Content and any name, username, voice, or likeness provided in connection with Your Content in all media formats and channels now known or later developed anywhere in the world. This license includes the right for us to make Your Content available for syndication, broadcast, distribution, or publication by other companies, organizations, or individuals who partner with Reddit.
And furthermore
You also agree that we may remove metadata associated with Your Content, and you irrevocably waive any claims and assertions of moral rights or attribution with respect to Your Content.
Surprisingly tough question. On one hand, I don't think every ex-Reddit user should go "Nah, it's too late, fam" because then it wouldn't even make sense for the devs to make any changes if they had no chance of regaining their userbase. On the other hand, I feel like even if they made really good changes, I would still always be on edge waiting for the bad thing to happen (pretty much what I imagine an abusive relationship to be like).
I think this is the main reason for the insane prices, but it could have easily been avoided. They don't need to have one price class for every type of use of their Data API. They could have easily had one rate for LLM and other AI training uses and another for third party client applications. I feel like at some point they realized they'd rather just kill the third parties while they're at it and this seemed like the logical moment.
Yeah, one of the other answers in the AMA was "we are not profitable yet, unlike the 3rd party app devs..." That is something that wouldn't sit well with any investor I know.
I think the LLM wave hit, they saw dollar signs, and they made a change without thinking it through, but then they were backed into a corner between money and avoiding outrage, but greed won over.
I think that was definitely the impetus - I first read about the changes in this article back in April: https://www.theregister.com/2023/04/18/reddit_charging_ai_api/
The closing statement is interesting:
The spokesperson we talked to also wanted to make clear the Data API was still freely accessible for appropriate use cases through the Reddit developer platform; hopefully app developers and other small-scale operators won't have any surprises ahead this summer.
I suspect they ran the numbers and started seeing dollar signs - they don't care about the third-party apps (which don't make them any money directly), they're just trying to cash in on Microsoft etc.
I have a sneaking suspicion they're going to end up back-pedalling, but it will be too little, too late.
They could have created better licensing models. It does rely on people honoring the agreements, but aside from countries that disregard IP, I think it's a viable model. Their business is social media, not curating datasets.
They could have, probably / maybe, but they are quite inept. What is social media if not a giant dataset?!?
I think both factors play a part.
No. Data scrapers will still scrape the site as long as they want to be indexed by search engines. IMO charging for API access is fine when reasonable. Lying about why you're doing it isn't.
This contains a good explanation of why it's clear this is really about wanting the 3rd party apps to stop existing.
It's 13 minutes rehashing the same points everyone has been making to death. And it doesn't even mention LLMs.
It makes a lot of sense, but given how organizations such as the Internet Archive are saving webpages from Reddit, wouldn't it be feasible to train your models off those archives and circumvent the API charges entirely?