this post was submitted on 05 Aug 2023
86 points (100.0% liked)

Linux


Hello everyone. I'm going to build a new PC soon, and I'm trying to maximize its reliability as much as I can. I'm using Debian Bookworm. I have a 1 TB M.2 SSD to boot from and a 4 TB SATA SSD for storage. My goal is for the computer to last at least 10 years. It's for personal use and work: playing games, making games, programming, drawing, 3D modelling, etc.

I've been reading up on filesystems, and it seems the best ones for preserving data in the event of corruption, data loss, or a power outage are BTRFS and ZFS. However, I've also read that they have stability issues, unlike Ext4. It seems like a tradeoff, then?

I've read that most of BTRFS's stability issues come from trying to do RAID5/6 on it, which I'll never do. Is everything else good enough? ZFS's stability issues seem to mostly come from it having out-of-tree kernel modules, but how much of a problem is this in real-life use?

So far I've been thinking of using BTRFS for the boot drive and ZFS for the storage drive. But maybe it's better to use BTRFS for both? I'll of course keep backups, but I'd still like to minimize how often I have to deal with things breaking.

Thank you in advance for the advice.

top 34 comments
[–] turdas@suppo.fi 31 points 1 year ago (1 children)

If you're not intending to use complicated RAID setups, just go with btrfs. There is no reason to bother with zfs given your specs and needs.

Do not go with ext4. Unlike both btrfs and zfs, ext4 does not do data checksumming, meaning it cannot detect bit rot (and obviously cannot fix it either). You'll also be missing out on other modern features, like compression and copy-on-write and all the benefits that entails. Once you start using snapshots for incremental backups using btrfs send (or its zfs equivalent), you'll never want to go back. Recommended script: snap-sync.
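
For illustration, a minimal sketch of that snapshot-and-send workflow, assuming /home is a btrfs subvolume and a second btrfs filesystem is mounted at /mnt/backup (all paths here are made up):

```sh
# btrfs send requires read-only snapshots, hence -r
btrfs subvolume snapshot -r /home /home/.snapshots/home-2023-08-05

# First run: send the full snapshot to the backup filesystem
btrfs send /home/.snapshots/home-2023-08-05 | btrfs receive /mnt/backup

# Subsequent runs: send only the changes since the parent snapshot
btrfs subvolume snapshot -r /home /home/.snapshots/home-2023-08-06
btrfs send -p /home/.snapshots/home-2023-08-05 /home/.snapshots/home-2023-08-06 \
  | btrfs receive /mnt/backup
```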

[–] BitPirate@feddit.de 8 points 1 year ago

ext4 + mdadm + dm-integrity would solve the bit rot problem. But you'd end up with a lot of parts bolted together and still miss out on the features that btrfs/zfs provide.
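
Roughly, that stack looks like this, assuming two spare disks /dev/sda and /dev/sdb (device names are illustrative). dm-integrity returns a read error on a checksum mismatch, which the md mirror can then repair from the other disk:

```sh
# Add a per-sector checksumming layer under each disk
integritysetup format /dev/sda
integritysetup open /dev/sda int-a
integritysetup format /dev/sdb
integritysetup open /dev/sdb int-b

# Mirror the integrity-protected devices, then put ext4 on top
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
  /dev/mapper/int-a /dev/mapper/int-b
mkfs.ext4 /dev/md0
```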

[–] 30021190@lemmy.cloud.aboutcher.co.uk 20 points 1 year ago (1 children)

ZFS is great, but I wouldn't recommend it for single-volume setups. I've never lost data with it, but the parity has always been the saviour.

Never used BTRFS.

I avoid XFS for performance reasons, as most of my systems are made up of many smaller files, which XFS isn't great for. But in the usage I've had with it, it's been great.

EXT4 is always my go-to for normal usage. Unless I need to support older machines, then it's ext2/3.

[–] spaghetti_carbanana@krabb.org 13 points 1 year ago (3 children)

This for sure. As a general rule of thumb, I use XFS for RPM-based distros like Red Hat and SuSE, EXT4 for Debian-based.

I use ZFS if I need to do software RAID, and I avoid BTRFS like the plague. BTRFS requires a lot of hand-holding in the form of maintenance which is far from intuitive, and I expect better from a modern filesystem (especially when there are others that do the same job hassle-free). I have had FS-related issues on BTRFS systems more than on any other, purely because of how it handles data and metadata.

In saying all that, if your data is valuable then ensure you do back it up and you won’t need to worry about failures so much.

[–] mimichuu_@lemm.ee 3 points 1 year ago (2 children)

Hey, thanks for the help. Can you elaborate on what kind of issues BTRFS gave you? What caused them, too?

[–] spaghetti_carbanana@krabb.org 2 points 1 year ago (1 children)

Sure, I’ve used it in both server and NAS scenarios. The NAS was where we had most issues. If the maintenance tasks for BTRFS weren’t scheduled to run (balance, defrag, scrub, and another one I can’t recall), the disk could become “full” without actually being full. If I recall correctly, it’s to do with how it handles metadata. There’s space, but you can’t save, delete or modify anything.

On a VM, it’s easy enough to buy time by growing the disk and running the maintenance. On a NAS or physical machine, however, you’re royally screwed without adding more disks (if that’s even an option). This “need to have space to make space” thing was pretty suboptimal.

Granted, now that I know better and am aware of the maintenance tasks, I simply schedule them (with cron or similar). But I still have a bit of a sour taste from it, lol. Overall I don’t think it’s a bad FS as long as you look after it.
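
For anyone wanting to do the same, a hedged example of what that schedule might look like in /etc/cron.d (the frequencies and usage thresholds here are illustrative, not recommendations):

```sh
# Weekly scrub: verify checksums and repair what can be repaired
0 3 * * 0  root  btrfs scrub start -Bq /
# Monthly balance: compact part-empty chunks so metadata allocation
# can't wedge the disk into the "full but not full" state
0 4 1 * *  root  btrfs balance start -dusage=50 -musage=50 /
```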

[–] mimichuu_@lemm.ee 2 points 1 year ago

Thank you, that makes sense.

[–] ProtonBadger@lemmy.ca 1 points 1 year ago

It needs a bit of periodic maintenance; the btrfs-assistant and btrfsmaintenance packages will set it up, and from then on it’s automatic.

[–] unwillingsomnambulist@midwest.social 2 points 1 year ago (1 children)

OpenSUSE, both Leap and Tumbleweed, use btrfs by default. Do you switch those to xfs during installation?

I’ve had btrfs snapshots pull me out of the fire multiple times on my home machines, but I don’t fully trust any file system at all, so I rsync stuff to two local network destinations and an off-site location as well. Those, too, have come in handy.
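
A minimal sketch of that kind of rsync job (host names and paths are made up):

```sh
# -a preserves permissions/ownership/timestamps; --delete keeps mirrors exact
rsync -a --delete /home/ nas1.local:/backups/home/
rsync -a --delete /home/ nas2.local:/backups/home/
rsync -a --delete /home/ user@offsite.example.com:/backups/home/
```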

[–] spaghetti_carbanana@krabb.org 2 points 1 year ago

Yep, sure do. I get no real benefit from the features it adds; or rather, I’m completely ignorant of the benefits, which is probably more accurate :)

For the things you’ve mentioned it is useful. I think the main thing I’ve been warned to never do with BTRFS is use it for RAID and to use md under it instead. That said, that could be old info and it may be fixed now.

[–] fmstrat@lemmy.nowsci.com 13 points 1 year ago (2 children)

A lot of these responses seem... dated. There's a reason TrueNAS and such use ZFS now.

I would recommend ZFS 100%. The copy-on-write (allowing you to recover almost anything), simple snapshots, direct disk encryption, and ability to not only check the file system, but tell you exactly which file has an issue if there is an error, make it an easy choice even if it's a one-disk system.

Personally, I use datetimes for my snapshot names, and delete old ones as time goes on. It's fabulous for backups.
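
Something like this, assuming a dataset named tank/home (the name is illustrative):

```sh
# Snapshot named with the current datetime, e.g. tank/home@2023-08-05_14-30
zfs snapshot tank/home@"$(date +%Y-%m-%d_%H-%M)"

# List snapshots, then prune the ones you no longer need
zfs list -t snapshot tank/home
zfs destroy tank/home@2023-07-01_00-00
```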

[–] happyhippo@feddit.it 5 points 1 year ago (1 children)
[–] moist_towelettes@lemm.ee 2 points 1 year ago

Yes, its license is not GPL-compatible.

[–] JWBananas@startrek.website 1 points 1 year ago

There's a reason TrueNAS and such use ZFS now.

Do you mean for the boot drive?

[–] Reborn2966@feddit.it 9 points 1 year ago

Btrfs, and you get snapshots, the ability to send subvolumes around, compression, and a ton of other stuff.

Be aware that configuring a good layout in btrfs is something you will need to do manually; follow the Arch Wiki and you will be OK.
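
As a rough sketch of the kind of layout the Arch Wiki describes, assuming the new filesystem is on /dev/sda2 and temporarily mounted at /mnt (the @/@home names are a common convention, not a requirement):

```sh
# Separate subvolumes for / and /home, so each can be
# snapshotted and rolled back independently
mount /dev/sda2 /mnt
btrfs subvolume create /mnt/@
btrfs subvolume create /mnt/@home
umount /mnt

# Then mount each subvolume in its place (your fstab mirrors these options)
mount -o subvol=@,compress=zstd /dev/sda2 /
mount -o subvol=@home,compress=zstd /dev/sda2 /home
```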

[–] sudotstar@kbin.social 7 points 1 year ago* (last edited 1 year ago) (1 children)

I recommend using whatever is the "least hands-on" option for your boot drive, a.k.a. your distro default (ext4 for Debian). In my admittedly incompetent experience, the most likely cause of filesystem corruption is trying to mess with things, like resizing partitions. If you use your distro installer to set up your boot drive and then don't mess with it, I think you'll be fine with whatever the default is. You should still take backups through whatever mediums and formats make sense for your use case, as random mishaps are still a thing no matter what filesystem you use.

Are you planning on dual-booting Windows for games? I use https://github.com/maharmstone/btrfs to mount a shared BTRFS drive that contains my Proton-based Steam library, in case I need to run one of those games on Windows for whatever reason. I've personally experienced BTRFS corruption a few times due to the aforementioned incompetence, but I try to avoid keeping anything important on my games drive to limit the fallout when that does occur. Additionally, if you're looking to keep non-game content on the storage drive (likely if you're doing 3D modeling work), this may not be as safe.

[–] mimichuu_@lemm.ee 2 points 1 year ago (1 children)

I don't plan on installing Windows at all. The only thing I'd do on my boot drive is have a separate home partition; I won't really do anything else though. Did the corruption you experienced happen just on its own? Or was it something you did?

[–] sudotstar@kbin.social 1 points 1 year ago

For me it's always been after I tried to resize a partition.

[–] ryannathans@lemmy.fmhy.net 5 points 1 year ago

ZFS with raidz/mirror and increased copies

It's self-healing and won't corrupt on power loss
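
A sketch of that setup, assuming two spare disks and a pool called tank (names are illustrative):

```sh
# Two-way mirror: every block is checksummed and exists on both disks,
# so a bad copy detected on read is repaired from the good one
zpool create tank mirror /dev/sdb /dev/sdc

# Optionally keep extra copies of each block on top of the mirroring
zfs set copies=2 tank

# A periodic scrub walks the whole pool and heals latent errors
zpool scrub tank
```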

[–] sibloure 4 points 1 year ago

Been using BTRFS for several years and have never once had any sort of issue. I just choose BTRFS at system setup and never think about it again. I like that when I copy a file it is INSTANT (copy-on-write means the data isn't actually duplicated), which makes my computer feel super fast, whereas EXT4 can take several minutes to copy large files. This is with similar use to what you describe. No RAID.

[–] worfamerryman 3 points 1 year ago (1 children)

I use ext4 for my boot drive as that’s what Linux Mint defaults to.

I do not do RAID, and I use btrfs on my other drives.

You can turn on compression at write time with btrfs, which may reduce the amount of data being written to your drive and so extend its lifespan.
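
For example (mount point and device are illustrative):

```sh
# Transparent zstd compression for everything written from now on
mount -o compress=zstd:3 /dev/sdb1 /mnt/storage

# Or permanently via /etc/fstab:
# UUID=<your-uuid>  /mnt/storage  btrfs  compress=zstd:3,noatime  0  2
```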

But you shouldn’t expect the drives to last 10 years.

They might, but don’t expect it; have a backup of whatever is important. Ideally you should have a local backup and a cloud-based backup, or at least an offsite backup somewhere else.

[–] mimichuu_@lemm.ee 1 points 1 year ago (1 children)

Yeah I'll always do backups. When I have the money I probably will buy another drive and try to do RAID1 on the two, just to be sure. But I do want them to last as much as possible.

[–] worfamerryman 1 points 1 year ago (1 children)

Don’t use RAID for backing up; use a backup program instead. I’d recommend Vorta or Kopia.
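
With Kopia, for instance, the basic flow looks something like this (the repository path is made up):

```sh
# Create a backup repository on a second drive (or a cloud bucket)
kopia repository create filesystem --path /mnt/storage/kopia-repo

# Take a deduplicated, compressed snapshot of your home directory
kopia snapshot create /home/youruser
```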

[–] mimichuu_@lemm.ee 2 points 1 year ago (1 children)

It wouldn't be for backing up, just for the storage to last longer if one drive fails.

[–] worfamerryman 1 points 1 year ago

I would still not recommend it. If the drive fails and data is lost or corrupted it could also be lost or corrupted on the other drive.

It would really be better to use backup software to save your data. Also depending on how the drive is used, it may put less wear on the second drive if you use a backup application.

[–] mojo@lemm.ee 3 points 1 year ago

ext4 is the tried and true file system. I use that for reliability. Btrfs is nice with a ton of modern features, but I have had some issues in the past, though they are pretty rare.

[–] Andy3153@lemmy.ml 3 points 1 year ago

It's gonna be a hard decision to make. I know that because I read about Btrfs for about a whole week before deciding to switch to it. But I've been a happy Btrfs user for about 8 months now, and I'll be honest with you: in my opinion, unless your workload mainly involves small random writes that will inevitably make Btrfs fragment a ton, it is most likely good for any situation. I don't know much about the other modern/advanced filesystems like ZFS or XFS, though, so I can't tell you anything about them.

[–] hornedfiend@sopuli.xyz 1 points 1 year ago

I've been using ext4/btrfs for a long time, but recently I decided to give XFS a try, and it feels like a pretty solid all-rounder FS.

It's a very old and very well-supported FS, developed by Silicon Graphics, and it has been getting constant improvements over time, including various performance improvements and checksumming. TBH, for my use cases anything would work, but BTRFS snapshots were killing my storage and I got bored with the maintenance tasks.

The Arch Wiki has amazing documentation for all filesystems, so it might be worth a look.

[–] JWBananas@startrek.website 1 points 1 year ago* (last edited 1 year ago) (1 children)

This might be controversial here. But if reliability is your biggest concern, you really can't go wrong with:

  • A proper hardware RAID controller

You want something with patrol read, supercapacitor- or battery-backed cache/NVRAM, and a fast enough chipset/memory to keep up with the underlying drives.

  • LVM with snapshots

  • Ext4 or XFS

  • A basic UPS that you can monitor with NUT to safely shut down your system during an outage.

I would probably stick with ext4 for boot and XFS for data. They are both super reliable, and both are usually close to tied for general-purpose performance on modern kernels.

That's what we do in enterprise land. Keep it simple. Use discrete hardware/software components that do one thing and do it well.

I had decade-old servers with similar setups that were installed with Ubuntu 8.04 and upgraded all the way through 18.04 with minimal issues (the GRUB2 migration being one of the bigger pains). Granted, they went through plenty of hard drives. But some even got increased capacity along the way (you just replace them one at a time and let the RAID resilver in-between).

Edit to add: The only gotcha you really have to worry about is properly aligning the filesystem to the underlying RAID geometry (if the RAID controller doesn't expose it to the OS for you). But that's more important with striping.
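
For illustration, passing the geometry by hand might look like this, assuming a 64 KiB stripe unit across 4 data disks (the numbers are made up; the real values come from your controller):

```sh
# XFS: su = stripe unit per disk, sw = number of data disks
mkfs.xfs -d su=64k,sw=4 /dev/sdb1

# ext4 equivalent, in 4 KiB filesystem blocks:
# stride = 64 KiB / 4 KiB = 16, stripe-width = 16 * 4 disks = 64
mkfs.ext4 -E stride=16,stripe-width=64 /dev/sdb1
```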

[–] ryannathans@lemmy.fmhy.net 4 points 1 year ago (1 children)

Oh great, another single point of failure. Seriously, don't use RAID cards. With ZFS, there's no corruption on power loss. It's also self-healing.

[–] JWBananas@startrek.website 1 points 1 year ago* (last edited 1 year ago) (1 children)

How many hardware RAID controllers have you had fail? I have had zero of 800 fail. And even if one did, the RAID metadata is stored on the last block of each drive. Pop in new card, select import, done.

[–] ryannathans@lemmy.fmhy.net 1 points 1 year ago* (last edited 1 year ago) (1 children)

1/1: an irrecoverable array, as that particular card was no longer available at the time of failure. Problems that don't exist with ZFS.

[–] JWBananas@startrek.website 1 points 1 year ago

I am sorry that you had to personally experience data loss from one specific hardware failure. I will amend the post to indicate that a proper hardware RAID controller should use the SNIA Common RAID Disk Data Format (DDF). Even mdadm can read it in the event of a controller failure.

Any mid- to high-tier MegaRAID card should support it. I have successfully pulled disks directly from a PERC 5 and imported them to a PERC 8 without issues due to the standardized format.
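
As a rough illustration of that escape hatch (the device name is made up), mdadm can read DDF metadata directly:

```sh
# Inspect the DDF metadata a controller left on a member disk
mdadm --examine /dev/sdb

# Assemble the array in software from the surviving members
mdadm --assemble --scan
```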

ZFS is great too if you have the knowledge and know-how to maintain it properly. It's extremely flexible and extremely powerful. But like most technologies, it comes with its own set of tradeoffs. It isn't the most performant out-of-the-box, and it has a lot of knobs to turn. And no filesystem, regardless of how resilient it is, will ever be as resilient to power failures as a battery/supercapacitor-backed path to NVRAM.

To put it simply, ZFS is sufficiently complex to be much more prone to operator error.

For someone with the limited background knowledge that the OP seems to have on filesystem choices, it definitely wouldn't be the easiest or fastest choice for putting together a reliable and performant system.

If it works for you personally, there's nothing wrong with that.

Or if you want to trade anecdotes, the only volume I've ever lost was on a TrueNAS appliance after power failure, and even iXsystems paid support was unable to assist. Ended up having to rebuild and copy from an off-site snapshot.