this post was submitted on 26 Nov 2023

Homelab


I am thinking of running multiple database/message servers such as Postgres, Redis, Mongo, and InfluxDB on old computers, using a centralized NAS to store the data.

One advantage is that I wouldn't need to worry about getting appropriate local storage for the computers that host these services. For example, I could run Postgres on an old desktop with spinning disks, or even a Raspberry Pi.

But I am still concerned about the I/O speed and reliability. I wonder if anyone has experience doing it like this. Can you share the pros and cons? I really appreciate it.

top 11 comments
[–] raven2611@alien.top 1 points 1 year ago

You can compensate for the higher latency by keeping most or all of the database in memory (at least with Postgres or MySQL), i.e. caching. This helps with database reads but not with writes, since writes need to hit a persistent storage layer first. As for reliability, I usually rely on the replication features shipped with the DBMS.
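To illustrate, the main knobs for this in Postgres are the cache-related settings in `postgresql.conf`. These values are placeholders to show the idea, not tuned recommendations — size them to the RAM your box actually has:

```
# postgresql.conf -- illustrative values only, size to your actual RAM
shared_buffers = 2GB          # Postgres's own page cache; reads served here never touch the NAS
effective_cache_size = 6GB    # planner hint: total cache (shared_buffers + OS page cache) available
```

The bigger `shared_buffers` is relative to the working set, the fewer reads ever reach the slow storage path; commits, however, still block on the WAL fsync.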

Here is an in-depth blog post about local and distributed storage with MySQL: https://blog.koehntopp.info/2022/09/27/mysql-local-and-distributed-storage.html

Can't really say much about other types of databases, though.

[–] whoooocaaarreees@alien.top 1 points 1 year ago

If your NAS can keep up and your networking gear can keep up, it might work for you.
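One way to sanity-check whether the NAS path "keeps up" is to compare synchronous-write latency on local disk versus the NAS mount, since databases fsync on every commit. A rough probe sketch (the `/mnt/nas` path is hypothetical — point it at your actual mount):

```python
import os
import tempfile
import time

def avg_fsync_ms(directory: str, writes: int = 100) -> float:
    """Average latency of a small fsync'd write in the given directory."""
    path = os.path.join(directory, "fsync_probe.bin")
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    t0 = time.perf_counter()
    for _ in range(writes):
        os.write(fd, b"x" * 4096)  # one 4 KiB page, like a small WAL record
        os.fsync(fd)               # force to stable storage, as a DB commit must
    elapsed = time.perf_counter() - t0
    os.close(fd)
    os.unlink(path)
    return elapsed / writes * 1000

# compare a local scratch dir against the (hypothetical) NAS mount point
print(f"local: {avg_fsync_ms(tempfile.mkdtemp()):.2f} ms per fsync")
# print(f"nas:   {avg_fsync_ms('/mnt/nas'):.2f} ms per fsync")  # uncomment on your box
```

If the NAS number is an order of magnitude worse than local disk, every transaction commit will pay that tax.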

You need to give better info about what you have, what you want to run, and how you want to run it. How tolerant are you of data loss in the event of hardware failure?

I feel like if you have an excess of old machines and a NAS a kube cluster might be your next stop.

I wouldn't use NFS/SMB, but iSCSI could be an option if the NAS supports the protocol.

[–] ZarehD@alien.top 1 points 1 year ago

It's not a recommended topology (I/O perf & reliability), but if you do this, you'll want to use a "block storage" protocol (e.g. iSCSI, etc.), and a very fast network (i.e. 10Gbps min, ideally 25 or 40).

[–] SuperQue@alien.top 1 points 1 year ago (1 children)

IMO, you really don't want to run databases over NFS/SMB. I've just seen too many corruption and locking issues over the years to trust it.

For home lab use, do you really have a database big enough that it wouldn't fit on a Pi anyway?

For example, I run Prometheus on my Pi, no problem.

[–] Kapelzor@alien.top 1 points 1 year ago (1 children)

How do you keep your card from dying?

[–] SuperQue@alien.top 1 points 1 year ago

I use a Pro card meant for cameras. Also, Prometheus disk write load is tiny.

[–] morrisdev@alien.top 1 points 1 year ago

Putting your DB files on a NAS drive is like having a restaurant where the cook has to go 2 blocks down the street to get each ingredient, and put them back immediately after using them.

[–] Hatred_grows@alien.top 1 points 1 year ago

SQLite has lock problems with NFS storage; MySQL and Redis work fine. SMB should not be used for database storage in any way.
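For context on the SQLite point: its default rollback-journal mode depends on POSIX file locks, which many NFS implementations handle poorly, and its WAL mode uses a shared-memory file, which the SQLite docs state does not work over a network filesystem at all. A minimal sketch on a local path (the database location is illustrative):

```python
import os
import sqlite3
import tempfile

# open a database on LOCAL disk -- on an NFS mount, the file locking
# behind these operations is exactly what tends to break
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)

# WAL mode improves concurrency locally, but relies on a shared-memory
# sidecar file (demo.db-shm), so it cannot be used across a network FS
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # 'wal' on a filesystem that supports it

conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")
conn.commit()
conn.close()
```

If `PRAGMA journal_mode=WAL` silently returns something other than `wal`, the filesystem refused it — a quick smoke test before trusting a mount with real data.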

[–] kY2iB3yH0mN8wI2h@alien.top 1 points 1 year ago

> I don't need to worry about getting appropriate local storage for the computers that host these services

Not sure I understand. What OS should these computers run? Should the OS also be on NAS storage, or is it just the database filesystem that should be on the NAS?

Honestly, it looks like you don't have the know-how to set up shared storage; it seems you are more of a developer type, so KISS (Keep It Simple, Stupid).

Run local storage. Your databases won't even be gigabytes in size; get some cheap SATA SSDs instead if you want I/O.

Also, your databases won't die when you accidentally pull the power plug on your NAS or trip over the Ethernet cable.

[–] zyzhu2000@alien.top 1 points 1 year ago

Thanks, everyone. My motivation was to turn the many little computers with tiny or slow disks into servers by adding some centralized storage. But it looks like NAS today is not a good solution for this -- at least not yet. I appreciate all the insights.