this post was submitted on 16 Jun 2023

Reddit Migration


### About Community

Tracking and helping #redditmigration to Kbin and the Fediverse. Say hello to the decentralized and open future. For the latest Reddit blackout info, see: https://reddark.untone.uk/


I've been thinking about this today.

If there's a divide between people who want to keep using Reddit and those who want to try the Fediverse (Kbin, Lemmy, etc.), it may be a good idea to have a bot that mirrors at least the posts (and maybe later even the comments) from subreddits that people are missing out on here in the Fediverse. This could help populate the emptier communities on the Fediverse and give people an incentive to stay here without FOMO (myself included).

Is there any existing solution that could provide the necessary functionality? How feasible would this be if someone would start working on this now, considering the Reddit API changes?
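The core of such a bot would be a loop that reads a subreddit listing, skips posts it has already mirrored, and publishes the rest. As a rough sketch (assuming the public JSON listing shape Reddit exposes at `/r/<sub>/new.json`; the `publish` callback stands in for whatever Fediverse posting API you'd wire up — it is a placeholder, not a real Kbin/Lemmy client):

```python
import json
from dataclasses import dataclass


@dataclass
class Post:
    reddit_id: str
    title: str
    url: str


def parse_listing(listing_json: str) -> list[Post]:
    """Parse a Reddit listing (the public /r/<sub>/new.json shape) into Posts."""
    data = json.loads(listing_json)
    return [
        Post(c["data"]["id"], c["data"]["title"], c["data"]["url"])
        for c in data["data"]["children"]
    ]


def mirror_new_posts(listing_json: str, seen: set[str], publish) -> int:
    """Publish every post not seen before; return how many were mirrored."""
    mirrored = 0
    for post in parse_listing(listing_json):
        if post.reddit_id not in seen:
            publish(post)          # e.g. create the post on a Fediverse instance
            seen.add(post.reddit_id)
            mirrored += 1
    return mirrored
```

In practice `seen` would need to be persisted (a small database or file) so restarts don't re-mirror old posts, and the loop would run on a timer rather than in real time.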

you are viewing a single comment's thread
view the rest of the comments
[–] NotTheOnlyGamer@kbin.social 6 points 1 year ago (2 children)

Disregard the API completely and just scrape the web interface of specific subreddits. It really doesn't matter that things won't update in real time.
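A scraper along these lines needs nothing beyond the standard library. A minimal sketch, assuming the old-style Reddit markup where each post title is a link whose class list contains `title` (the exact class names are an assumption and would need checking against the real page):

```python
from html.parser import HTMLParser


class PostTitleParser(HTMLParser):
    """Collect the text of <a> tags whose class list contains 'title'."""

    def __init__(self):
        super().__init__()
        self.titles = []
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        if tag == "a" and "title" in classes:
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "a":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.titles.append(data)


# Illustrative snippet of the kind of markup being parsed:
sample = (
    '<div><a class="title may-blank" href="/r/x/1">First post</a>'
    '<a class="author" href="/u/y">someone</a></div>'
)
parser = PostTitleParser()
parser.feed(sample)
# parser.titles == ["First post"]
```

For anything beyond titles (scores, comment counts, timestamps) a real HTML library like BeautifulSoup would be less brittle, but the principle is the same: parse what the browser can already see.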

[–] Aeonx@kbin.social 3 points 1 year ago


I was thinking about that. In theory, if I can see it, what's stopping a bot from grabbing it? I notice that search engines have no problem seeing what's inside Reddit, so there has to be a way to build a bot that does this. I'm just not programmer-savvy enough to know how it works.
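The reason search engines can see inside Reddit is that they crawl the same public pages a browser loads; what's allowed is declared in the site's `robots.txt`. A well-behaved bot would check that file first, which Python's standard library supports directly (the rules below are illustrative, not Reddit's actual `robots.txt`):

```python
from urllib.robotparser import RobotFileParser

# Parse an illustrative robots.txt; a real crawler would fetch
# https://old.reddit.com/robots.txt and parse that instead.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /login",
    "Allow: /r/",
])

# Subreddit listings are allowed under these example rules...
ok = rp.can_fetch("mirrorbot", "https://old.reddit.com/r/python/")       # True
# ...but the login page is not.
blocked = rp.can_fetch("mirrorbot", "https://old.reddit.com/login")      # False
```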

[–] yungsinatra@kbin.social 1 points 1 year ago (1 children)

Don't platforms like Facebook or Instagram have detection against web scraping, and ban you really fast for it or something similar? I'd imagine Reddit has some protection against that as well, no?

[–] TGhost@lemmy.fmhy.ml 1 points 1 year ago* (last edited 1 year ago)

I'm no expert, but in my experience you get blocked when you exceed a request limit. Don't develop your scraper directly against Google or the live site; work against a saved copy of the page instead.

I got blocked while developing the scraper, never while just running it. But I'm not an expert, just an amateur, so it's a pure guess.