This post was submitted on 10 Nov 2023
3 points (100.0% liked)

Self-Hosted Main


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

For Example

We welcome posts that include suggestions for good self-hosted alternatives to popular online services, how they are better, or how they give back control of your data. Also include hints and tips for less technical readers.



I see people with a small 8 GB, 4-core system trying to split it into multiple VMs with something like Proxmox. I don't think that's the best way to utilise the resources.

Most services sit idle most of the time, so when something is actually running it should be able to use the full power of the machine.

For a smaller homelab, my opinion is to use Docker and Compose to manage everything directly on the bare metal.
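As a rough sketch of what I mean (the services, images and limits here are just examples, not what anyone has to run), one docker-compose.yml on the bare metal covers several services and can still cap the greedy ones:

```yaml
# docker-compose.yml - hypothetical small stack, service names and images are examples
services:
  nextcloud:
    image: nextcloud:latest
    restart: unless-stopped
    ports:
      - "8080:80"
    volumes:
      - nextcloud-data:/var/www/html

  jellyfin:
    image: jellyfin/jellyfin:latest
    restart: unless-stopped
    ports:
      - "8096:8096"
    # optional caps so one heavy service can't starve the rest;
    # drop these if you want it to burst to the whole machine
    cpus: 2
    mem_limit: 2g

volumes:
  nextcloud-data:
```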

Only split off VMs for something critical, and even then decide whether it's really required.

Do you agree?

top 13 comments
[–] bityard@alien.top 2 points 1 year ago

Some people play with VMs for fun, or as a learning experience. I don't think it's very productive or useful to tell them they're doing it wrong.

[–] Spuxilet@alien.top 1 points 1 year ago

I run about 30 stacks (about 60 containers) on my 1L mini PC with an i5-8500T + 12 GB RAM. If I were to split them into their own VMs it would be impossible to do; I would probably have run out of resources by the fourth VM :D. 5.8 GB of RAM is free at idle, and I also have ZRAM enabled. I work on it too: I have code-server and CloudBeaver running on it, and I never run out of memory. Although I am thinking of upgrading it to 16 GB. I know RAM IS CHEAP but I do not need more than 16 GB in this PC.

This setup also does not need to be so complex. I have the stacks isolated in their own networks and access them solely over a WireGuard VPN, no matter whether I am on the LAN or connecting from the WAN. WireGuard is always on on my laptop and phone.
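Roughly how the isolation part looks per stack (names and image here are made up, just to show the idea: a private network, no published ports, and the WireGuard side attached to that network as the only way in):

```yaml
# one stack's compose file - a sketch of the isolation idea, not my exact config
services:
  app:
    image: ghcr.io/example/app:latest   # placeholder image
    restart: unless-stopped
    networks:
      - app_net          # stack-private network
    # no "ports:" entry, so nothing is published on the host/LAN side

networks:
  app_net:
    name: app_net        # the WireGuard container (or a reverse proxy reachable
                         # only over the VPN) is the only other thing attached here
```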

[–] mrmclabber@alien.top 1 points 1 year ago

No, I don't agree, not necessarily. VMs are "heavier" in that they use more disk and memory, but if they are mostly idling in a small lab you probably won't notice the difference. Now if you are running 10 services and want to put each in its own VM on a tiny server, then yeah, maybe don't do that.

In terms of CPU it's a non-issue: VM or Docker, they will still "share" the CPU. I can think of cases where I'd rather run Proxmox and others where I'd just go bare metal and run Docker. Depends on what I'm running and the goal.
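To illustrate: the host scheduler hands out CPU time the same way in both cases, and the knobs for capping or reserving it look very similar. A rough sketch (IDs and numbers are placeholders, not recommendations):

```sh
# Docker: cap one container at 2 CPUs / 1 GiB, everything else floats freely
docker run -d --name demo --cpus=2 --memory=1g nginx:alpine

# Proxmox/KVM: give a VM 2 vCPUs; an idle vCPU costs next to nothing, and the
# host scheduler gives the cycles to whichever guest actually needs them
qm set 101 --cores 2
```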

[–] stupv@alien.top 1 points 1 year ago

Proxmox and LXCs vs Docker is just a question of your preferred platform. If you want flexibility and expandability, then Proxmox is better; if you just want a set-and-forget device for a specific, static group of services, running Debian with Docker may make more sense to you.

[–] ttkciar@alien.top 1 points 1 year ago

On one hand, I think VMs are overused, introduce undue complexity, and reduce visibility.

On the other hand, the problem you're citing doesn't actually exist (at least not on Linux, dunno about Windows). A VM can use all of the host's memory and processing power if the other VMs on the system aren't using them. An operating system will balance resource utilization across multiple VMs the same as it does across processes to maximize the performance of each.
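Memory is the part of that sharing that usually has to be switched on: with KVM ballooning enabled, a guest hands RAM back to the host when other guests need it. On Proxmox that's roughly (the VM ID and sizes here are placeholders):

```sh
# let VM 101 float between 2 GiB and 8 GiB depending on host memory pressure
qm set 101 --memory 8192 --balloon 2048
```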

[–] katbyte@alien.top 1 points 1 year ago

And here I am with an RPi running two VMs 🤭

[–] lilolalu@alien.top 1 points 1 year ago

Read some articles about the resource overhead of VMs, or better yet containers, which use a shared kernel: it's minimal and mainly affects RAM. So if the decision is to put 16 GB more into the machine to get a clean separation of services, I think that's a no-brainer.

I do agree with you that complete separation through VMs is usually overkill; a Docker container is enough to isolate config, system requirements, etc.

[–] prime_1996@alien.top 1 points 1 year ago

The main advantage in my opinion is being able to easily back up and restore a VM, plus other out-of-the-box features like point-in-time snapshots, firewall, and hardware passthrough. Here I am talking about Proxmox.

Now, that being said, I don't use VMs to run my self-hosted services. I used to, but migrated everything to LXC, which runs Docker.

Some services get their own LXC because they are critical, like DNS.
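For anyone curious, the snapshot/backup side of that is basically a one-liner per container in Proxmox; something like this (the CT ID, storage and snapshot names are placeholders):

```sh
# snapshot the DNS container before touching it
pct snapshot 105 pre-upgrade

# full backup of the same container to a storage called "local"
vzdump 105 --mode snapshot --storage local --compress zstd
```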

[–] krysztal@alien.top 1 points 1 year ago

I've had feelings towards running Proxmox on my homelab, but I haven't yet felt like reinstalling and reconfiguring everything, so for now it's dockerland plus an extra VM for the stuff that has to run separately.

[–] AnApexBread@alien.top 1 points 1 year ago

It all comes down to "what are you trying to do."

Not everyone runs applications, so docker is not the answer to everything.

But if you only have 8 GB of RAM and are trying to run VMs, then I'd advise you to go buy more RAM.

[–] JanBurianKaczan@alien.top 1 points 1 year ago

No.

Stop telling people what to do with their hardware. It's THEIR hardware.

Easy.

[–] ervwalter@alien.top 1 points 1 year ago

It depends on your goals of course.

Personally, I use Proxmox on a couple machines for a couple reasons:

  1. It's way, way easier to back up an entire VM than it is to back up a bare-metal physical device. And because the VM is "virtual hardware", you can (and I have) restore it to the same machine or to brand new hardware easily and it will "just work". This is especially useful in the case that hardware dies.
  2. I want high availability. A few things I do in my homelab I personally consider "critical" to my home happiness. They aren't really critical, but I don't want to be without them if I can avoid it. And by having multiple Proxmox hosts, I get automatic failover. If one machine dies or crashes, the VMs automatically start up on the other machine. (Rough CLI sketch of both points below.)
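Roughly what those two points translate to on the Proxmox CLI (the web UI does the same; the VM ID, storage and archive names below are placeholders):

```sh
# 1. back up VM 100, then restore the archive on the same or a brand new host
vzdump 100 --mode snapshot --storage local --compress zstd
qmrestore /var/lib/vz/dump/vzdump-qemu-100-<timestamp>.vma.zst 100 --storage local-lvm

# 2. in a multi-node cluster, mark the VM as highly available so it is
#    restarted on another node if its current node dies
ha-manager add vm:100 --state started
```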

Is that overkill? Yes. But I wouldn't say it "doesn't make sense". It makes sense but just isn't necessary.

Fudge topping on ice cream isn't necessary either, but it sure is nice.