To avoid a single point of failure on each new server, I would add a $15 Inland SSD per server, even to a zero-dollar budget.
What OS are you running? If the main storage is on the network, chances are the OS can run from anywhere.
- get a 100GB HDD from Craigslist for free or a few bucks
- any old crappy USB stick.
If you still want to go for PXE, you don't need any fancy networking. All you need is a DHCP server and a TFTP server with a kernel and an initramfs. I think dnsmasq can handle all of that with a bit of configuration. Or you can go for a full server provisioning tool like Cobbler or Foreman.
You assign each server its own image by placing the files under a per-server path inside /var/lib/tftpboot/ (something along those lines).
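For a rough idea, a minimal dnsmasq setup for this can look something like the sketch below (the interface name, addresses and filenames are placeholders, not from the original post):

```
# /etc/dnsmasq.conf -- minimal DHCP + TFTP for PXE (all values are examples)
interface=eth0
dhcp-range=192.168.1.100,192.168.1.200,12h
enable-tftp
tftp-root=/var/lib/tftpboot
# bootloader the PXE clients will fetch over TFTP
dhcp-boot=pxelinux.0
```

If pxelinux is the bootloader, per-server kernels and boot options are usually handled by config files under pxelinux.cfg/ named after each NIC's MAC address (e.g. pxelinux.cfg/01-aa-bb-cc-dd-ee-ff), which is probably what the per-server layout above is getting at.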
You could use iSCSI for block storage rather than a full SAN. Each machine would have its own LUN.
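Purely as a sketch of what that looks like on a Linux storage host (all names, sizes and IQNs below are made up for illustration), you'd create one backing file and one LUN per machine with targetcli:

```
# one fileio backstore + target + LUN per server (names/IQNs/sizes are examples)
targetcli /backstores/fileio create name=node1-boot file_or_dev=/srv/iscsi/node1-boot.img size=32G
targetcli /iscsi create iqn.2024-01.lab.example:node1-boot
targetcli /iscsi/iqn.2024-01.lab.example:node1-boot/tpg1/luns create /backstores/fileio/node1-boot
# restrict the LUN to node1's initiator
targetcli /iscsi/iqn.2024-01.lab.example:node1-boot/tpg1/acls create iqn.2024-01.lab.example:node1
targetcli saveconfig
```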
However, time, heartache, frustration and learning curves are all worth something. Newegg has a reasonable SSD for $16 total including free shipping. I'd find a way to save up the $64 myself, even if it took a month or two, and boot one new machine every other week.
You can go down to literally a USB stick or even a microSD card if they support it. ESXi works on an SD card with a few config tweaks.
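If anyone tries the SD-card route, the usual tweak is pointing ESXi's scratch location at persistent storage so logs and scratch data don't hammer the card; roughly something like this (the datastore path is just an example):

```
# move scratch off the SD card to a datastore, then reboot (path is a placeholder)
esxcli system settings advanced set -o /ScratchConfig/ConfiguredScratchLocation -s /vmfs/volumes/datastore1/.locker-esxi01
```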
You will need to PXE boot into a RAM disk and then use iSCSI/NFS/CEPH/etc for persistent storage.
You can actually boot all your servers from USB into a RAM disk. From there you can use iSCSI, NFS, or Samba to mount data on your servers. It's what I do too, and I have even written a guide on how to set it up.
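The persistent-storage half of that is just ordinary network mounts once the RAM-booted system is up; something along these lines (hostnames, exports, IQNs and mount points are placeholders, not from the guide):

```
# mount VM/data storage over NFS (paths are examples)
mount -t nfs storage01:/export/vms /var/lib/vz

# or attach a per-host iSCSI LUN instead
iscsiadm -m discovery -t sendtargets -p 192.168.1.10
iscsiadm -m node -T iqn.2024-01.lab.example:node1-data -p 192.168.1.10 --login
```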
With the price of SSDs being what it is now for a small 100 GB drive, why bother with the additional setup and potential failure points?
I’ve run ESXi over the network, and even that wasn’t fun with longish boot times. I certainly wouldn’t like to run Proxmox that way. These days there’s really no reason not to have “some” fast direct storage in each server, even if it’s mainly used as cache.
What you’re looking for is possible, but to me saving $20-ish per machine just isn’t worth introducing more headaches.
I am an advocate of the FC protocol. I love it, much more than iSCSI. But I hate SAN booting. It is a pain in the ass: you need a server to host the images, and you have to build up a whole SAN infrastructure. I guess two boot SSDs are cheaper. 2x 64GB NVMe SSDs with a PCIe card or 2x 64GB SATA SSDs cost next to nothing.
I use NFS roots for my hypervisors, and iSCSI for the VM storage. I previously didn't have iSCSI in the mix and was just using qcow2 files on the NFS share, but that had some major performance problems when there was a lot of concurrent access to the share.
The hypervisors use iPXE to boot (mostly; one of them has gPXE on the NIC, so I didn't need to have it boot to iPXE before the NFS boot).
In the past I have also used a purely iSCSI environment, with the hypervisors using iPXE to boot from iSCSI. I moved away from it because it's easier to maintain a single NFS root for all the hypervisors for updates and the like.
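For anyone curious, a pure iSCSI boot like that generally comes down to a tiny iPXE script; something like this (the portal address and IQN are invented placeholders):

```
#!ipxe
# grab an address, then boot straight from this host's iSCSI LUN
dhcp
sanboot iscsi:192.168.1.10::::iqn.2024-01.lab.example:node1-boot
```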
How? Are you loading a configuration from a device plugged into each hypervisor server? Any project I should read up on?
The servers use their built-in NIC's PXE to load iPXE (I still haven't figured out how to flash iPXE to a NIC), and then iPXE loads a boot script that boots from NFS.
Here is the most up-to-date version of the guide I used to learn how to NFS boot: https://www.server-world.info/en/note?os=CentOS_Stream_9&p=pxe&f=5 - this guide is for CentOS, so you will probably need to do a little more digging to find one for Debian (which is what Proxmox is built on).
iPXE is the other component: https://ipxe.org/
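To give a feel for the last hop in that chain (PXE -> iPXE -> NFS root), the iPXE stage can be a short script that loads a kernel with an NFS root on its command line; the values below are placeholders, not taken from that guide:

```
#!ipxe
# fetch kernel + initramfs, then point the kernel at the NFS root (all values are examples)
dhcp
kernel tftp://192.168.1.10/vmlinuz root=/dev/nfs nfsroot=192.168.1.10:/srv/nfsroot/hypervisor ip=dhcp rw
initrd tftp://192.168.1.10/initrd.img
boot
```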
It's worth pointing out that this was a steep learning curve for me, but I found it super worth it in the end. I have a pair of redundant identical servers that act as the "core" of my homelab, and everything else stores its shit on them.