this post was submitted on 03 Jul 2023
57 points (100.0% liked)

sdfpubnix

Fans of SDF


UPDATE: @SDF@lemmy.sdf.org has responded

It is temporary, as lemmy.world was cascading duplicates at us and the only way to keep the site up reliably was to temporarily drop them. We're in the process of adding more hardware to increase RAM, CPU cores, and disk space. Once that new hardware is in place we can try turning on the firehose. Until then, please be patient.


ORIGINAL POST:

Starting sometime yesterday afternoon, it looks like our instance started blocking lemmy.world: https://lemmy.sdf.org/instances

A screenshot of the page at https://lemmy.sdf.org/instances showing the lemmy.world instance on the blocklist

This is kind of a big deal, because a third of all active users originate there!

A pie chart depicting the top instances by user share. The lemmy.world instance is in the top spot with a third of the total user share

Was this decision intentional? If so, could we get some clarification about it? @SDF@lemmy.sdf.org
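
Aside: if you'd rather check programmatically than eyeball the instances page, here's a minimal sketch against Lemmy's public federated_instances API (just a sketch, hedged to handle both the older bare-string entries and the newer instance-object entries in the response):

    # Hypothetical helper, not an official SDF or Lemmy tool: queries the
    # public federated_instances endpoint and collects the blocked domains.
    import json
    import urllib.request

    def blocked_domains(instance="lemmy.sdf.org"):
        url = "https://" + instance + "/api/v3/federated_instances"
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = json.load(resp)
        blocked = data["federated_instances"]["blocked"]
        # Older Lemmy versions return bare domain strings; newer ones
        # return instance objects with a "domain" field.
        return {b if isinstance(b, str) else b["domain"] for b in blocked}

    print("lemmy.world" in blocked_domains())

While the block is in place this should print True; once federation is restored it should flip back to False.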

top 44 comments
[–] SDF@lemmy.sdf.org 41 points 1 year ago (6 children)

It is temporary, as lemmy.world was cascading duplicates at us and the only way to keep the site up reliably was to temporarily drop them. We're in the process of adding more hardware to increase RAM, CPU cores, and disk space. Once that new hardware is in place we can try turning on the firehose. Until then, please be patient.

[–] chaorace@lemmy.sdf.org 15 points 1 year ago

Thanks for the prompt response! I shall have patience 😁. Post & Title have been edited to reflect your update.

As usual, many thanks for your continued maintenance efforts & personal labor.

[–] sidhant@lemmy.sdf.org 12 points 1 year ago

Thank you, appreciate all the hard work that goes into keeping this instance alive.

[–] 70ms@lemmy.sdf.org 6 points 1 year ago

Thanks for the update (and giving me a new home!).

[–] chaorace@lemmy.sdf.org 6 points 1 year ago

Heads up: a new backend issue has been opened to track performance degradation: lemmy#3466. Looks like there's also a private Matrix channel that sprouted up for performance tuning & troubleshooting that you might be interested in joining: info.

[–] Elindio@lemmy.sdf.org 2 points 1 year ago

Can you update this thread when it's over?

[–] SDF@lemmy.sdf.org 31 points 1 year ago (1 children)

Live updates in progress. Moved to SSDs, added more cores, and added 128GB and 64GB of RAM.

[–] 6jarjar6@lemmy.sdf.org 10 points 1 year ago

Lovely 😍

[–] SDF@lemmy.sdf.org 30 points 1 year ago (3 children)

Here is where we're at now.

  • increased cores and memory, hopefully we never touch swap again
  • dedicated server for pict-rs with its own RAID
  • dedicated server for lemmy, postgresql with its own RAID
  • lemmy-ui and nginx run on both to handle ui requests

Thank you to everyone who stuck around and helped out; it is appreciated. We're working on additional tweaks suggested by the Lemmy community and hope to let lemmy.world try to DoS us again soon. Hopefully we'll do much better this time.

[–] chaorace@lemmy.sdf.org 14 points 1 year ago* (last edited 1 year ago)

Killer stuff! Sorry for contributing undue pressure on top of what was probably already a taxing procedure happening in the server room.

Out of curiosity: how do you feel about Lemmy performance so far? I'm actually a little surprised that we managed to outstrip the prior configuration already. I suppose inter-instance ActivityPub traffic just punches really hard, regardless of intra-instance activity?

[–] estee@lemmy.sdf.org 8 points 1 year ago (1 children)

What can we (ordinary users) do to help? I'm in Europe; would it be better if I used the SDFeu server as my home instance?

[–] SDF@lemmy.sdf.org 12 points 1 year ago

Yes, the fediverse wants to be decentralized, so you're encouraged to use whatever works best for you. lemmy.sdfeu.org is located in Düsseldorf, Germany.

[–] thomask@lemmy.sdf.org 3 points 1 year ago

This station is now the ultimate power in the fediverse! I suggest we use it.

[–] SDF@lemmy.sdf.org 17 points 1 year ago (2 children)

Two things that would be great:

  • Add a tannoy/horn announcement icon at the top, like Mastodon has, where status information can be posted.
  • Change the heart icon to link to a way of supporting the local instance.

Attempts were made to create a thread for the almost daily upgrades we're going through with BE and UI changes, but even with pinning it doesn't get the visibility.

We're on site in about an hour to install a new RAID, and once that is completed we'll finish the transfer of pict-rs data.

[–] entropicdrift@lemmy.sdf.org 10 points 1 year ago* (last edited 1 year ago) (1 children)

You're all doing an amazing job!

I became an ARPA member this past week and am happy to donate again to chip in for this hardware upgrade.

Edit: donated at the $48 yearly level.

[–] robotrono@lemmy.sdf.org 5 points 1 year ago

Just donated to do my part to help keep the lights on. Thanks y'all for doing a great job, I am glad I found SDF.

[–] Elindio@lemmy.sdf.org 11 points 1 year ago (3 children)

If you're wondering how to donate, like I was:

https://sdf.org/?faq?MEMBERS?01

[–] entropicdrift@lemmy.sdf.org 10 points 1 year ago

Or if you want a button to donate: https://sdf.org/support/

[–] delial@lemmy.sdf.org 5 points 1 year ago (1 children)

Donated! SDF is what the future of the internet should look like! I love you peeps so much! ♥

[–] 70ms@lemmy.sdf.org 7 points 1 year ago* (last edited 1 year ago)

And yet it was the '90s aesthetic of the website that instantly made me sign up. 🤪 It made me so nostalgic.

Edit: Will also be donating ASAP!

[–] wesker@lemmy.sdf.org 4 points 1 year ago
[–] Milamber@lemmy.sdf.org 5 points 1 year ago (2 children)

Replying to this so I can follow up...

[–] ken_cleanairsystems@lemmy.sdf.org 3 points 1 year ago (2 children)
[–] user224@lemmy.sdf.org 2 points 1 year ago

Ping: response from SDF added.

[–] Elindio@lemmy.sdf.org 1 points 1 year ago (1 children)
[–] user224@lemmy.sdf.org 2 points 1 year ago

Ping: response from SDF added.

[–] user224@lemmy.sdf.org 3 points 1 year ago

Ping: response from SDF added.

[–] sidhant@lemmy.sdf.org 4 points 1 year ago (2 children)

If this is true I will be making another account and moving to a different server. The only reason I joined this one was that it doesn't seem to block/defederate.

[–] SDF@lemmy.sdf.org 16 points 1 year ago

You're absolutely welcome to do that, as it is your decision and you have many choices. We hope to build a community of folks who would like to help the fediverse grow and support smaller instances. Similar growing pains were seen during the Twitter exodus last September.

[–] chaorace@lemmy.sdf.org 10 points 1 year ago (2 children)

As someone working on integrations I'd also have to leave if we're permanently blocking the largest instance, but I'd like to give SDF the benefit of the doubt here. Maybe this was just a temporary measure to deal with the insane load from yesterday?

[–] SDF@lemmy.sdf.org 9 points 1 year ago (1 children)

You are absolutely welcome to do that.

[–] chaorace@lemmy.sdf.org 2 points 1 year ago

I'd really rather not leave, though! I like it here. But being forced to choose does put me in a bind when it comes to building the things that I want to build. Lucky for me, then, that today won't be the day when such a decision is thrust upon me.

[–] user224@lemmy.sdf.org 1 points 1 year ago* (last edited 1 year ago) (1 children)

For me, Lemmy still doesn't work, at least not without a U.S. VPN. Otherwise I get an error: 502 Bad Gateway

Maybe it's just not fully up yet.
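
For what it's worth, a 502 at least means the front-end web server answered, so the host is reachable; if my IP were blocked outright I'd expect a timeout or a refused connection. A minimal probe to tell the two apart (hypothetical sketch, not anything official):

    # Hypothetical probe: distinguishes "server reachable but backend
    # erroring" (an HTTP error such as 502 from the front-end proxy) from
    # "no HTTP response at all" (timeout/refused, i.e. a network-level block).
    import urllib.error
    import urllib.request

    def probe(url="https://lemmy.sdf.org/"):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                print("OK: HTTP", resp.status)
        except urllib.error.HTTPError as e:
            # The proxy answered, so the host is reachable; the backend errored.
            print("Reachable, but erroring: HTTP", e.code)
        except urllib.error.URLError as e:
            # No HTTP answer at all: likely a network-level block or outage.
            print("No HTTP response:", e.reason)

    probe()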

[–] chaorace@lemmy.sdf.org 4 points 1 year ago (1 children)

I don't know if geoblocking is a thing, but just FYI: there are additional Lemmy SDF instances for EU & JP regions which may work better depending on where you're based.

[–] user224@lemmy.sdf.org 3 points 1 year ago

Ah, seems like a blacklisted IP. Since I'm behind CG-NAT, the public IP is shared among many users simultaneously. I tried disconnecting and reconnecting to the network, which changed my public IP, and it works now.

We really need IPv6.

[–] webb@lemmy.sdf.org 1 points 1 year ago (2 children)

This is a good thing. We don't want another mastodon.social situation.

[–] youthinkyouknowme@lemmy.dbzer0.com 2 points 1 year ago (2 children)

What happened on mastodon.social?

[–] webb@lemmy.sdf.org 9 points 1 year ago

It's an absolutely massive instance that's a net negative for the Fediverse. It completely defeats the purpose of federation. The Mastodon devs used to drive people to smaller instances but decided they wanted to be /the/ instance. They made themselves a default in the app. At around the same time, other instances started getting a crap ton of spam from them, which ate up a bunch of moderators' time on smaller instances. The Fediverse only works, moderation-wise, because there are fewer users per admin, but mastodon.social doesn't have that advantage. A bunch of people defederated from them as a result, which was a good thing for the instances that did it. They failed pretty hard at communicating during this time as well.

Having one instance hold a large part of the network is bad for everybody involved. Defederating from monoliths is a healthy thing for networks to do. Building your own web beats any algorithm, and you can't do that if you're already federating with 99% of people.

[–] entropicdrift@lemmy.sdf.org 4 points 1 year ago (1 children)

It's the biggest instance by far, to the point of being virtually non-viable to defederate from. If they ever sell out or make decisions other instances don't like, tough cookies.

[–] ThorrJo@lemmy.sdf.org 1 points 1 year ago

If they ever sell out

Already happened.

[–] BackStabbath@lemm.ee 2 points 1 year ago (1 children)

What happened with that? I'm new.

[–] wesker@lemmy.sdf.org 5 points 1 year ago (1 children)

Hi new, I'm dad.

Honestly, this low-effort comment is just to say your username cracked me up.

[–] BackStabbath@lemm.ee 1 points 1 year ago