this post was submitted on 05 Jun 2023
156 points (100.0% liked)


With forewarning about a huge influx of users, you know Lemmy.ml will go down. Even if people go to https://join-lemmy.org/instances and disperse among the great instances there, the servers will go down.

Ruqqus had this issue too. Every time there was a mass exodus from Reddit, Ruqqus would go down, and hardly reap the rewards.

Even if it's not sustainable, just for one month, I'd like to see Lemmy.ml drastically boost their server power. If we can raise money as a community, what kind of server could we get for $100? $500? $1,000?

[–] OsrsNeedsF2P@lemmy.ml 14 points 1 year ago (1 children)

What are you seeing in the code that makes it hard to scale horizontally? I've never looked at Lemmy before, but I've done the steps of (monolithic app) -> Docker -> make app stateless -> Kubernetes before, and as a user I don't necessarily see the complexity (not saying it's not there, but wondering what specifically in the site architecture prevents this transition).

[–] RoundSparrow@lemmy.ml 47 points 1 year ago* (last edited 1 year ago) (2 children)

Right now it looks to me like Lemmy is built entirely around live, real-time queries of the SQL database. This may work when there are 100 postings a day and an active posting gets 80 comments, but it likely doesn't scale very well. You tend to have to evolve toward a queue system where things like comments and votes are merged into the main database in more of a batch process (Reddit does this; you can see on their status page that comments and votes have different uptime tracking than the main website).
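A minimal sketch of that kind of write queue, with a hypothetical `flush_to_db` function standing in for the real batched SQL insert (none of this is Lemmy's actual code): incoming votes are buffered in a channel and merged into the database in batches instead of one query per vote.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::{Duration, Instant};

// Hypothetical incoming write; in practice this would be a comment or vote row.
struct Vote {
    post_id: i64,
    score: i32,
}

// Placeholder for the real batched INSERT/UPDATE against the SQL database.
fn flush_to_db(batch: &[Vote]) {
    let total: i32 = batch.iter().map(|v| v.score).sum();
    println!(
        "flushing {} votes (total score {}) in one batch, first post_id {}",
        batch.len(),
        total,
        batch[0].post_id
    );
}

fn main() {
    let (tx, rx) = mpsc::channel::<Vote>();

    // Writer thread: drains the queue and flushes every 100 items or once per
    // second, instead of hitting the database once per individual vote.
    let writer = thread::spawn(move || {
        let mut batch: Vec<Vote> = Vec::new();
        let mut last_flush = Instant::now();
        loop {
            match rx.recv_timeout(Duration::from_millis(100)) {
                Ok(vote) => batch.push(vote),
                Err(mpsc::RecvTimeoutError::Timeout) => {}
                Err(mpsc::RecvTimeoutError::Disconnected) => break,
            }
            let stale = last_flush.elapsed() >= Duration::from_secs(1);
            if !batch.is_empty() && (batch.len() >= 100 || stale) {
                flush_to_db(&batch);
                batch.clear();
                last_flush = Instant::now();
            }
        }
        if !batch.is_empty() {
            flush_to_db(&batch); // drain whatever is left on shutdown
        }
    });

    // Web handlers would just enqueue and return immediately.
    for i in 0..250 {
        tx.send(Vote { post_id: i % 5, score: 1 }).unwrap();
    }
    drop(tx); // closing the queue lets the writer thread exit
    writer.join().unwrap();
}
```

The trade-off is that a vote may take up to a second to become visible, which is exactly the kind of compromise described above.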

On the output side, it seems ideal to have all data live and up to the very instant, but that can fall over under load surges (which may come from a popular topic, not just an influx from the decline of Twitter or Reddit). To scale, you tend to have to make some compromises and reuse output: some kind of intermediate layer that, say, only regenerates the output page every 10 seconds, and only if there has been a new write (a vote or comment change).
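A rough sketch of that intermediate layer, with a hypothetical `render_page` standing in for the real query-and-template step (again, not Lemmy's actual code): the cached copy is reused until a write has happened and the copy is at least 10 seconds old.

```rust
use std::time::{Duration, Instant};

// Placeholder for the expensive part: the live SQL queries plus templating.
fn render_page() -> String {
    "<html>...rendered post and comments...</html>".to_string()
}

struct CachedPage {
    html: String,
    built_at: Instant,
    dirty: bool, // set to true whenever a new vote or comment comes in
}

impl CachedPage {
    fn new() -> Self {
        CachedPage { html: render_page(), built_at: Instant::now(), dirty: false }
    }

    // Called by write handlers (votes, comments).
    fn mark_dirty(&mut self) {
        self.dirty = true;
    }

    // Called by read handlers: only regenerate if there was a write
    // AND the cached copy is at least 10 seconds old.
    fn get(&mut self) -> &str {
        if self.dirty && self.built_at.elapsed() >= Duration::from_secs(10) {
            self.html = render_page();
            self.built_at = Instant::now();
            self.dirty = false;
        }
        &self.html
    }
}

fn main() {
    let mut page = CachedPage::new();
    println!("{}", page.get()); // served from cache
    page.mark_dirty();          // a comment arrives
    println!("{}", page.get()); // still the cached copy until 10 seconds have passed
}
```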

> don’t necessarily see the complexity (not saying it’s not there)

It's the lack of complexity that's kind of the problem. Doing direct SQL queries gets you the latest data, but it becomes a big bottleneck. Again, what might have seemed to work fine when there were only 5,000 postings and 100,000 total comments in the database can seriously fall over once you've accumulated 1,000 times that.

[–] sam_uk@slrpnk.net 8 points 1 year ago (2 children)

Out of curiosity how would https://kbin.social/ source: https://codeberg.org/Kbin/kbin-core stand up to this kind of analysis? Is it better placed to scale?

[–] poVoq@slrpnk.net 8 points 1 year ago (1 children)

The advantage kbin has is that it is built on a pretty well-known and tested PHP Symfony stack. In theory Lemmy is faster due to being built in Rust, but it is much more home-grown and not as optimized yet.

That said, kbin is also still a pretty new project that hasn't seen much actual load, so likely some dragons linger in its codebase as well.

[–] sam_uk@slrpnk.net 2 points 1 year ago (1 children)

I think it's probably undesirable to end up with big instances. The best situation might be one instance, lemmy.ml or another, that's designed to scale and can absorb these waves of new users.

However, it's also designed to expire accounts after six months.

After three months it sends users an email explaining that it's time to choose a server, and it keeps nagging them to do so for a further three months. After that, their ability to post is removed, but they can still migrate their account to a new server.

After 12 months without logging in, the account is purged.
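As a sketch, that timeline boils down to a simple function of account age and last login (the state names and day counts below are just a hypothetical restatement of the proposal, nothing that exists in Lemmy):

```rust
// Hypothetical account states for the "scaling instance" proposal above.
#[derive(Debug, PartialEq)]
enum AccountState {
    Active,          // under 3 months old
    NaggedToMigrate, // 3-6 months old: emails asking the user to pick a home instance
    PostingDisabled, // over 6 months old: can still migrate, cannot post
    Purged,          // over 12 months since last login
}

fn account_state(age_days: u32, days_since_login: u32) -> AccountState {
    if days_since_login > 365 {
        AccountState::Purged
    } else if age_days > 180 {
        AccountState::PostingDisabled
    } else if age_days > 90 {
        AccountState::NaggedToMigrate
    } else {
        AccountState::Active
    }
}

fn main() {
    assert_eq!(account_state(30, 1), AccountState::Active);
    assert_eq!(account_state(120, 1), AccountState::NaggedToMigrate);
    assert_eq!(account_state(200, 10), AccountState::PostingDisabled);
    assert_eq!(account_state(400, 400), AccountState::Purged);
    println!("timeline checks pass");
}
```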

[–] andrew@radiation.party 2 points 1 year ago (1 children)

Thought about this a bit more, and I’m thinking that encouraging users not to silo (and making it easy to discover instances and new communities) will probably be the best bet for scaling the network long-term.

“Smart” rate limiting of new accounts and activity per-instance might help with this organically. If a user is told that the instance they’re posting on is too active to create a new post, or too active to accept new users, and then is given alternatives, they might not outright leave.
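A sketch of what that per-instance check might look like, with made-up thresholds and example alternatives drawn from instances in this thread (nothing like this exists in Lemmy today): when the instance is over budget, the response carries a list of alternatives instead of a bare rejection.

```rust
// Hypothetical per-instance load limits; thresholds and instance list are examples only.
struct InstanceLoad {
    posts_last_hour: u32,
    signups_today: u32,
}

enum Admission {
    Allowed,
    // Instead of a bare "too busy" error, point the user somewhere with room.
    Redirect { reason: String, alternatives: Vec<&'static str> },
}

const ALTERNATIVES: [&str; 2] = ["https://slrpnk.net", "https://radiation.party"];

fn admit_new_post(load: &InstanceLoad) -> Admission {
    const MAX_POSTS_PER_HOUR: u32 = 5_000;
    if load.posts_last_hour >= MAX_POSTS_PER_HOUR {
        Admission::Redirect {
            reason: "This instance is too active to accept new posts right now.".to_string(),
            alternatives: ALTERNATIVES.to_vec(),
        }
    } else {
        Admission::Allowed
    }
}

fn admit_new_account(load: &InstanceLoad) -> Admission {
    const MAX_SIGNUPS_PER_DAY: u32 = 1_000;
    if load.signups_today >= MAX_SIGNUPS_PER_DAY {
        Admission::Redirect {
            reason: "This instance is too active to accept new users right now.".to_string(),
            alternatives: ALTERNATIVES.to_vec(),
        }
    } else {
        Admission::Allowed
    }
}

fn describe(decision: Admission) -> String {
    match decision {
        Admission::Allowed => "accepted".to_string(),
        Admission::Redirect { reason, alternatives } => {
            format!("{} Try one of: {}", reason, alternatives.join(", "))
        }
    }
}

fn main() {
    let load = InstanceLoad { posts_last_hour: 7_200, signups_today: 300 };
    println!("new post: {}", describe(admit_new_post(&load)));
    println!("new account: {}", describe(admit_new_account(&load)));
}
```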

[–] sam_uk@slrpnk.net 1 points 1 year ago (1 children)

That might work. Is there some third-party email app that could capture their email and let them know when registrations are open again? I know of some corporate, non-privacy-respecting ones such as https://kickofflabs.com/campaign-types/waitlist/, but presumably there's a way to do that with some on-site tools?

[–] andrew@radiation.party 2 points 1 year ago (1 children)

If the instance in question has email support, I don't see why it couldn't notify them directly. But I think providing alternative instances first (with the option to get notified if this instance opens up) would be more reasonable.

[–] sam_uk@slrpnk.net 2 points 1 year ago

The idea would be to retain the ability to collect email addresses beyond the point where the main app can't keep up, so you'd want something lightweight just for capturing the emails.
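For illustration only, "lightweight" could be as small as a single-file endpoint that appends addresses to a text file and runs on its own cheap host, so it stays up even when the main app doesn't (everything below is a hypothetical sketch, not an existing tool or Lemmy feature):

```rust
use std::fs::OpenOptions;
use std::io::{Read, Write};
use std::net::TcpListener;

// A deliberately tiny, framework-free waitlist endpoint: it just appends whatever
// arrives as "email=" to an append-only text file.
fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:8080")?;
    println!("waitlist capture listening on http://127.0.0.1:8080");

    for stream in listener.incoming() {
        let mut stream = stream?;
        // Naive single read; good enough for a sketch, not for production.
        let mut buf = [0u8; 4096];
        let n = stream.read(&mut buf)?;
        let request = String::from_utf8_lossy(&buf[..n]);

        // Expect a body like "email=someone@example.com"
        // (URL-decoding and validation omitted for brevity).
        if let Some(raw) = request.split("email=").nth(1) {
            let email: String = raw
                .chars()
                .take_while(|c| !c.is_whitespace() && *c != '&')
                .collect();
            let mut file = OpenOptions::new()
                .create(true)
                .append(true)
                .open("waitlist.txt")?;
            writeln!(file, "{}", email)?;
        }

        stream.write_all(
            b"HTTP/1.1 200 OK\r\nContent-Length: 24\r\nConnection: close\r\n\r\nThanks, we'll email you.",
        )?;
    }
    Ok(())
}
```

Something like `curl -d 'email=someone@example.com' http://127.0.0.1:8080` would then land the address in waitlist.txt, to be mailed once registrations reopen.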

[–] RoundSparrow@lemmy.ml 6 points 1 year ago

I don't have any experience with that specific app, so I don't currently know.

[–] Berserkware@lemmy.ml 3 points 1 year ago

Do you know of any resources about this, and/or how to implement it?