this post was submitted on 12 Jun 2023
128 points (100.0% liked)

Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.


For those self-hosting a lemmy instance, what hardware are you using? I am currently using a small Hetzner VPS. It has 2 vCPU, 2GB RAM and 40GB SSD storage. My instance is currently just in testing with me as the only user, but I plan to use it for close friends or family that may want to try this out, but might not want to sign up for a different instance. My CPU and RAM usage is great so far. My only concern is how large the storage will balloon to over time. I’ve been up for ~20 hours and it’s grown to 1.5G total volume since.

top 50 comments
[–] Redex68@lemmy.world 13 points 1 year ago (1 children)

From what I've heard (take this with a huge grain of salt), the posts themselves shouldn't take up much of your storage. The biggest thing that could eat your storage is images, but those are only stored on the instance that hosts the community they were posted in.

[–] kresten@feddit.dk 5 points 1 year ago (3 children)

Do you have any suggestions for what the 1.5GB is then?

[–] ubergeek77@lemmy.ubergeek77.chat 11 points 1 year ago (2 children)

It's likely the Docker images, and maybe the Docker build cache if they built from source instead of using the Docker Hub image.

I've been up for about a day longer than OP, and my Lemmy data is still under 800MB. OP either included non-Lemmy data in that math, or is subscribed to way more communities than me. My storage usage has been growing much faster today with all the extra activity, but even at this rate I won't have to worry about storage space for about a month.

And that's assuming Lemmy doesn't automatically prune old data. I'm not sure if it does or not. But if it doesn't, I imagine I'll see posts in about 2-3 weeks talking about Lemmy's storage needs and how to manage it as an instance admin.

[–] kresten@feddit.dk 10 points 1 year ago (1 children)

It would be cool to hear back from you guys hosting in about a week or so, to see if it'll just grow linearly or if it slows down at some point.

Sure, I'm curious too. I'll keep an eye on the usage!

[–] culturerevolt@culture0.cc 3 points 1 year ago (1 children)

I used the ansible route to get going. I am subbed to ~150 communities currently. Some of those won't stay, but for now I am subbing to almost anything to see how that affects disk usage. I am interested to see how, or if, it levels off over time and what a week or two out looks like. I expect by then we will all have many more tips for each other as we trial and error our way through.

Here's my current usage:

[–] ubergeek77@lemmy.ubergeek77.chat 2 points 1 year ago* (last edited 1 year ago) (1 children)

Ahhhh, image posts are where your usage is going! Makes sense, my instance is just for my account and I don't submit anything. Your postgres size is more or less in line with where mine was at your uptime. I'm using Docker Compose so I'm only considering the size of the volumes in my metrics, not the image sizes or anything.
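
For anyone measuring the same way, a quick sketch, assuming the stock docker-compose layout that bind-mounts a local ./volumes directory (the compose directory path is just a placeholder):

cd /path/to/your/lemmy        # wherever your docker-compose.yml lives
sudo du -hc --max-depth=1 volumes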

[–] culturerevolt@culture0.cc 2 points 1 year ago (1 children)

Yeah, images are where the main bulk of the storage is going. Interestingly, my instance is also just for my account presently and I have not submitted any images until my screenshot above. So these images are just those that are being pulled from other instances. I was under the impression that images were hosted from their respective instance and not saved locally, so I am curious to see how this plays out long term.

[–] falcon15500@lemmy.nine-hells.net 4 points 1 year ago (2 children)

Thumbnails are stored locally, I believe.

[–] dimspace@lemmy.world 3 points 1 year ago

If it's only thumbnails and only impacting that local instance, I presume you can just set up a cron job to clear out old thumbnails regularly if space is a huge issue.
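
Something like this could work as a starting point; a rough sketch assuming the default ansible layout, and note that pict-rs tracks files in its own metadata store, so deleting files behind its back is untested and may leave dangling references:

# illustrative crontab entry (add via: sudo crontab -e)
# prunes cached image files older than 30 days; the path is an assumption,
# so check where pict-rs actually stores its files before enabling this
0 3 * * * find /srv/lemmy/<domain>/volumes/pictrs -type f -mtime +30 -delete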

[–] howdy@thesimplecorner.org 3 points 1 year ago

Confirmed. Investigated that earlier.

[–] Redex68@lemmy.world 5 points 1 year ago (1 children)

It's probably from the instance finding other instances and communities and saving them locally. But I don't know too much about how it actually works so I could be wrong. I also heard that they are only stored locally if someone on the instance subscribes to a community, so if that is the case my theory wouldn't make sense.

[–] kresten@feddit.dk 4 points 1 year ago

No worries, alright

[–] infogic@lemmy.world 10 points 1 year ago (2 children)

Using Oracle's "Always Free" instances.

  • 4 vCPU (ARMv8, not sure about the speed)
  • 24 GB RAM
  • 200 GB flash storage

[–] Shamot3@sh.itjust.works 6 points 1 year ago (2 children)
[–] Lantier@lemmy.world 7 points 1 year ago (1 children)

It is, but Oracle can take it back without justification.

[–] infogic@lemmy.world 2 points 1 year ago

Yeah exactly, I wouldn’t recommend it for anything production grade if you’re not paying

[–] Lemmington@sopuli.xyz 4 points 1 year ago

Until everyone starts doing it, Oracle can be...fun to deal with

[–] Midas@toast.ooo 1 points 1 year ago (1 children)

Did you use any specific tags to get the containers running on OC?

[–] jon@lemmy.tf 8 points 1 year ago (1 children)

4 vCPU (Ryzen), 8GB RAM, 256GB disk (which will be expanded when it gets to like 60% full). Not too worried about storage unless I get a bunch of image-happy users; text all comes in as JSON and goes straight to Postgres, so it's not a concern.
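
If you want a rough number for how much of the disk that text actually takes, you can ask Postgres directly; a quick sketch, assuming a docker-compose deployment where the service, user and database are all named with the defaults (adjust if yours differ):

# "postgres" service and "lemmy" user/database are assumptions from the stock compose setup
docker compose exec postgres psql -U lemmy -c "SELECT pg_size_pretty(pg_database_size('lemmy'));"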

[–] Guy_Fieris_Hair@lemmy.world 2 points 1 year ago (1 children)

How many users do you have? Not starting a server any time soon, just curious. You seem to have one of the bigger setups in this thread, and I can't tell if you're using it privately. Are you public?

[–] jon@lemmy.tf 2 points 1 year ago

Mine is public, yes. Not sure how many active users I have, 28 signups but my sidebar shows 5 monthly active users so far. I imagine this will pick up once people start commenting and posting more.

[–] johntash@eviltoast.org 8 points 1 year ago

It's definitely overkill, but right now I'm hosting it on my nomad cluster. It only has 4cpu/8gb allocated at the moment but will autoscale (vertically) if needed. I already have a separate postgres cluster used for other things so I'm just borrowing that for now too. I haven't tried running multiple instances yet but I'll probably test that out this week to see how/if it works.

[–] meldrik@lemmy.wtf 7 points 1 year ago

I selfhost on my own homeserver. At the moment, I've spared it 2 cores, 1GB of RAM and 32GB of NVMe storage mirrored.

[–] darkknight 7 points 1 year ago

I have thought about running my own, following this for the info.

[–] Wintermute@lemmy.villa-straylight.social 7 points 1 year ago* (last edited 1 year ago)

Currently a 1CPU/2GB RAM Linode instance for 26 users. Linode's pricing gets insane as you scale up though so I will definitely be looking elsewhere if I need to scale much bigger. I think I could get away with 1gb of RAM at half the cost right now, but I'm also hosting a Matrix homeserver on this VPS and Synapse is a hungy boy.

[–] david@quo.ink 7 points 1 year ago

1 vCPU, 1GB Ram, 50GB storage using the smallest x86_64 compute instances on Oracle Cloud. Qualifies for always free which is nice while I'm simply testing out a personal server. It's working just fine within those constraints. For now, at least.

Like you, I'm worried about storage. I would like to run it from home, but I live in the woods and my internet isn't reliable enough.

[–] sascamooch@lemmy.sascamooch.com 6 points 1 year ago* (last edited 1 year ago) (1 children)

I'm using a Ramnode VPS since I had some unused credit I wanted to use up. 2 vcpu, 1 GB ram, and 35 GB ssd.

Seems to be working well enough so far, but right now it's just me. If I open up to more users, I might need to upgrade, but we'll cross that bridge when we get there.

Edit: I may have spoken too soon; I had to reboot the server due to low memory. Hopefully a swap file will alleviate that a bit, but I might have to upgrade the RAM on this server. We'll see.

Yeah, 1gb is definitely near the lower limit.
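
For reference, a minimal sketch of adding a 2GB swap file on a typical Linux VPS (the size is arbitrary; the fstab line makes it persist across reboots):

sudo fallocate -l 2G /swapfile       # create the swap file (use dd if fallocate isn't available)
sudo chmod 600 /swapfile             # restrict permissions
sudo mkswap /swapfile                # format it as swap
sudo swapon /swapfile                # enable it now
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab   # enable it on boot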

[–] flea@hive.atlanten.se 6 points 1 year ago (2 children)

I just spun up my own instance. Trying it out. New to hosting lemmy. How do you guys list the disk usage? Where is lemmy? /Newb

[–] culturerevolt@culture0.cc 5 points 1 year ago (1 children)

I used the ansible method to get running and I am using the default paths. If you are also using the default paths, you can find your data in /srv/lemmy/<domain>/. This location holds your configuration files and your volumes directory. The volumes directory holds postgres (the database), pictrs (your image hosting) and lemmy-ui (the web UI for lemmy). To see how much disk space you are using:

cd /srv/lemmy/<domain>/

du -hc --max-depth=1 volumes

Replace <domain> with your domain.

[–] flea@hive.atlanten.se 4 points 1 year ago* (last edited 1 year ago)

Thanks! Not so bad apparently.

8.0K	volumes/lemmy-ui
271M	volumes/postgres
424M	volumes/pictrs
[–] nx2@feddit.de 5 points 1 year ago (1 children)

If you are using Docker, just look at the volumes of the containers: the server, the UI, and the two data stores (one for the database, one for pictures).
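
If your compose file uses named Docker volumes instead of bind mounts, the standard Docker CLI can report per-volume sizes:

docker system df -v      # lists images, containers and local volumes with their disk usage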

[–] ElGatoEsBlanco@labdegato.com 5 points 1 year ago

It's currently running on a proxmox VM on my R720. Probably gonna shift it to a VPS at some point.

[–] howdy@thesimplecorner.org 5 points 1 year ago (1 children)
  • 1 vCPU 2.9ghz
  • 1 GB DDR4 Memory
  • 25 GB NVMe/SSD Storage

~5 USD a month. Working great for personal use, and I'd imagine it would handle a handful of users. Hosted in a data center that is very close to me.

Also fwiw: 4 days of lemmy. I am subbed to a bunch of stuff. I've only uploaded like three pictures to my instance... All that space is thumbnails from other instances.

692M    ./postgres
8.0K    ./lemmy-ui
499M    ./pictrs
1.2G    .
1.2G    total
[–] culturerevolt@culture0.cc 3 points 1 year ago (1 children)

There's my current disk usage. I've gone wild subscribing to just about every community I come across to see how the storage adds up. Right now I've got ~150 communities subbed. We'll see how it goes and when I'll need to expand the storage.

[–] howdy@thesimplecorner.org 3 points 1 year ago (1 children)

Not too bad... How long has your instance been up? Next thing I want to investigate is the postgres database itself. On my to-do list.
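
When you get to it, one starting point is listing the largest tables from inside the container; a sketch, again assuming the default docker-compose service name and a lemmy user/database:

# service, user and database names are assumptions; adjust to your deployment
docker compose exec postgres psql -U lemmy -c "SELECT relname, pg_size_pretty(pg_total_relation_size(relid)) AS size FROM pg_statio_user_tables ORDER BY pg_total_relation_size(relid) DESC LIMIT 10;"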

[–] culturerevolt@culture0.cc 4 points 1 year ago

Been running ~22 hours at this point

[–] Dax87@lemmy.zip 4 points 1 year ago* (last edited 1 year ago) (1 children)

I have it running on my microk8s single-node cluster. It's a dual Xeon (40 cores total) with unfortunately only 64GB RAM, the motherboard's max. I got a DAS with 72TB of storage, currently in a btrfs mirror. Hoping btrfs RAID-like configs become more usable in the future. I was using ZFS but I always ran into issues.

[–] fourstepper@lemmy.ml 1 points 1 year ago

Interesting - I always ran into issues with btrfs so now I am using ZFS exclusively :D

[–] knaak@lemmy.timgilbert.be 4 points 1 year ago

I am running mine on an old OptiPlex that I bought to run Pi-hole and Plex. It seems to be working fairly well, although I am still trying to understand how all of this works.

[–] angrylittlekitty@lemmy.one 3 points 1 year ago (1 children)

shower thoughts... still on my first cup of coffee, so this is more just musing than anything...

if storage is the concern, I wonder if the lemmy roadmap might one day include an option to use cloud-based storage?

azure blob storage at $0.06/GB per month is likely cheaper and more redundant than local storage, even if you factor in calls to the blob, which could be reduced via caching.

cloud storage might also one day lead to an option for smaller self-hosters to opt into a shared blob instance where the cost is shared.

in this scenario, security to ensure the cloud blob couldn't be deleted would need to be thought through (maybe splitting the password among multiple admins, with each holding one part of the whole?), but it might be one way to better encourage more self-hosting of the compute side of things.

[–] Senseibu@feddit.uk 2 points 1 year ago

If I self-hosted my own Lemmy on my home server, just for myself, and I posted/uploaded images on it, another instance would cache my image when one of its users views it. Does that mean that if I later deleted the image to free up server space, someone on that instance who came across my post afterwards would still see the image, because it was previously cached?

Wondering if rolling storage is possible eventually, where posts older than 2 years are archived and the data deleted.

[–] JTR@lemmings.basic-domain.com 2 points 1 year ago (1 children)

Hetzner VPS, 2 vCPU, 8GB RAM & 80GB SSD (I fully intended to set up a mail server on that server at some point, hence the 8GB RAM).

[–] neoney@lemmy.world 2 points 1 year ago

Good luck getting through spam filters

[–] tjr@innernet.link 2 points 1 year ago (1 children)

We have our instance running on a colo server. I am likely going to rework the ansible setup to use a custom pict-rs docker container which offloads images to object storage, so I don't need to store media locally on said server.


I'm running it on an LXC container that lives on a proxmox cluster.

2 vCPU at 2.6GHz, 2GB of RAM (it's LXC, so I can allocate more if needed...) and 40GB of SSD-backed Ceph storage. I actually just upped this to 150GB because I can see how quickly the data for this is growing. I have about 2 more TB of storage on the Ceph cluster before I need to order a few more SSDs.

I have terrible internet, but I do have a static address. And they're installing fiber in my neighborhood right now. So that will change soon too.

Based on what I've seen thus far, I suspect I can handle about a hundred users on this without much issue.

[–] Protegee9850@lemmy.world 2 points 1 year ago

Stack of Pi’s and a J4125 for streaming works well for my usecase
