
I see many posts asking about what other lemmings are hosting, but I'm curious about your backups.

I'm using duplicity myself, but I'm considering switching to borgbackup when 2.0 is stable. I've had some problems with duplicity: mainly, the initial sync took incredibly long, and once a few directories got corrupted (they could no longer be decrypted by gpg).

I run a daily incremental backup and send the encrypted diffs to a cloud storage box. I also use SyncThing to share some files between my phone and other devices, so those get picked up by duplicity on those devices.
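For the curious, a daily incremental duplicity run along these lines might look like the sketch below; the storage box host, GPG key ID, and paths are placeholders, not my exact setup.

```bash
#!/usr/bin/env bash
# Daily incremental backup over SFTP. Requires an initial
# `duplicity full ...` run; the key ID and host are placeholders.
TARGET="sftp://u12345@u12345.example-storagebox.net/backups/data"

duplicity incremental --encrypt-key ABCD1234 /home/user/data "$TARGET"

# Occasionally prune old backup chains.
duplicity remove-older-than 90D --force "$TARGET"
```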

[–] Sekoia@lemmy.blahaj.zone 18 points 1 year ago (2 children)

Backups? What backups?

I know it's bad, but I can't be bothered.

[–] xavier666@lemm.ee 2 points 1 year ago

Exactly! I pray every morning.

[–] dead@keylog.zip 10 points 1 year ago

What's my what lmao?

[–] sambal@lemmy.world 9 points 1 year ago (1 children)

I use rclone to encrypt and send my most valuable data to OneDrive.
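A minimal sketch of that kind of setup, assuming an existing `onedrive:` remote; the remote name and passphrase here are made up:

```bash
# Create a crypt remote wrapping a folder on OneDrive.
rclone config create onedrive-crypt crypt \
  remote=onedrive:backup \
  password="$(rclone obscure 'correct-horse-battery-staple')"

# Encrypt-and-upload the valuable data.
rclone sync /srv/important onedrive-crypt: --progress
```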

[–] rhys@mastodon.rhys.wtf 2 points 1 year ago (1 children)

@sambal @kat I do the same with AWS Glacier. Rclone's crypt module is magic.

I'd prefer not to use Amazon or any of the other tech giants, but nowhere else offers anything comparable to Glacier at a comparable price.

[–] KitchenNo2246@lemmy.world 9 points 1 year ago (2 children)

I use borgbackup + zabbix for monitoring.

At home, I have all my files get backed up to rsync.net since the price is lower for borg repos.

At work, I have a dedicated backup server running borgbackup that pulls backups from my servers and stores them locally, as well as uploading to rsync.net. The local backup means restoring is faster, unless of course that dies.
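A sketch of what the borg side can look like; repo URL, passphrase, and paths are placeholders (rsync.net exposes borg repos over plain SSH):

```bash
export BORG_REPO='ssh://user@user.rsync.net/./backups/borg'
export BORG_PASSPHRASE='changeme'   # placeholder

# Create a dated archive and keep a bounded history.
borg create --stats --compression zstd \
    '::{hostname}-{now:%Y-%m-%d}' /etc /home /srv
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6
```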

[–] gabe565@lemmy.cook.gg 3 points 1 year ago

+1 for Borg! I use Borgmatic to back up files and databases to BorgBase. It costs me $80/yr for 1TB of backups, which I think is reasonable. I also selfhost an instance of Healthchecks.io for monitoring.
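The monitoring hookup can be as simple as a cron entry that pings Healthchecks only when borgmatic exits cleanly (borgmatic also has built-in monitoring hooks); the URL is a placeholder for a self-hosted instance:

```bash
# /etc/cron.d/borgmatic -- the ping fires only if the backup succeeded.
0 3 * * * root /usr/bin/borgmatic && curl -fsS -m 10 --retry 3 https://hc.example.net/ping/xxxxxxxx-xxxx
```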

[–] davad@lemmy.world 7 points 1 year ago (1 children)

Restic, with resticprofile for scheduling and configuration. I do frequent backups to my NAS and have a second schedule that pushes to Backblaze B2.
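Stripped of the resticprofile wrapper, the two schedules boil down to something like this; repo path, bucket, and credentials are invented:

```bash
export RESTIC_PASSWORD_FILE=~/.restic-pass

# Frequent local backup to the NAS.
restic -r /mnt/nas/restic backup ~/documents ~/projects

# Second, less frequent schedule: push to Backblaze B2.
export B2_ACCOUNT_ID=xxxx B2_ACCOUNT_KEY=yyyy
restic -r b2:my-bucket:restic backup ~/documents ~/projects
```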

[–] fbartels@lemmy.one 4 points 1 year ago

Another +1 for restic. To simplify the backups, however, I'm using https://autorestic.vercel.app/, triggered from systemd timers for automated backups.
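Wiring autorestic to a systemd timer can look roughly like this, assuming autorestic and its `.autorestic.yml` are already configured; the unit names are arbitrary:

```bash
cat > /etc/systemd/system/autorestic.service <<'EOF'
[Unit]
Description=autorestic backup

[Service]
Type=oneshot
ExecStart=/usr/local/bin/autorestic backup --all
EOF

cat > /etc/systemd/system/autorestic.timer <<'EOF'
[Unit]
Description=Daily autorestic backup

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
EOF

systemctl daemon-reload
systemctl enable --now autorestic.timer
```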

[–] OutrageousUmpire@lemmy.world 6 points 1 year ago* (last edited 1 year ago) (1 children)

I realized at one point that the amount of data that is truly irreplaceable to me amounts to only ~500GB. So for this important data I back up to my NAS, then from there back up to Backblaze. I also create M-Discs: two sets, one for home and one I keep at a friend's place. Then, because "why not" and I already had them sitting around, I also back up to two SD cards, kept on site and off site.

I also back up my other data (TV/movies/music/etc.), but the sheer volume leaves me only one option: a couple of USB hard drives I back up to from my NAS.

[–] The_Traveller101@feddit.de 4 points 1 year ago

Restic is so awesome, and in combination with Backblaze it's probably the most cost-effective solution.

[–] bbbutch@feddit.de 6 points 1 year ago (1 children)

I backup locally to a second NAS (daily).

I use rclone crypt to backup to the cloud (Hetzner storage box, weekly).

The most important stuff I also backup to an external hard disk (from time to time, whenever I'm in the mood / have some spare time).

[–] steven@feddit.nl 3 points 1 year ago (1 children)

You basically described my backup strategy, although I do the Hetzner box daily too (on 1 Gbit symmetric fiber, so why not).

[–] Elbullazul@lem.elbullazul.com 5 points 1 year ago* (last edited 1 year ago) (1 children)

I run a restic backup to a local backup server that syncs most of the data (except the movie collection because it's too big). I also keep compressed config/db backups on the live server.

I eventually want to add a cloud platform to the mix, but for now this setup works fine

[–] tgxn@lemmy.tgxn.net 3 points 1 year ago (2 children)

Restic is great! I run it in a container using the mazzolino/restic image, hooked up to Backblaze for all my important stuff!
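Not that exact image, but the same idea works with the official restic/restic image; repository, credentials, and paths below are placeholders:

```bash
docker run --rm \
  -e RESTIC_REPOSITORY=b2:my-bucket:restic \
  -e RESTIC_PASSWORD=changeme \
  -e B2_ACCOUNT_ID=xxxx \
  -e B2_ACCOUNT_KEY=yyyy \
  -v /srv/important:/data:ro \
  restic/restic backup /data
```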

[–] hal@sopuli.xyz 4 points 1 year ago

Restic + rclone crypt + whatever storage server/service is good enough. Currently using a Hetzner storage box for my backups, because they have automatic snapshots on top of my backups.

I also use this setup for backups on servers, not only at home.
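restic can also talk directly to any rclone remote, including a crypt one, which is one way to combine the two; the remote and paths here are invented:

```bash
restic -r rclone:hetzner-crypt:restic init
restic -r rclone:hetzner-crypt:restic backup /srv/data
```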

[–] thatsnothowyoudoit@lemmy.ca 4 points 1 year ago* (last edited 1 year ago)

Large/important volumes on SAN -> B2.

Desktop Macs -> Time Machine on SAN & Backblaze (for a few).

Borgbackup is what we used for all our servers back when they were pets. It's a great tool, very easy to script and use.

[–] Showroom7561@lemmy.ca 4 points 1 year ago (2 children)

All devices back up to my NAS, either in realtime or at short intervals throughout the day. I use recycle bins for easy restores of accidentally deleted files.

My NAS is set up on a RAID for drive redundancy (Synology RAID) and does regular backups to the cloud for active files.

Once a day I do a Hyper Backup to an external HDD.

Once a month I backup to an external drive that lives offsite.

Backups to these external HDDs have versioning, so I can restore files from multiple months ago, if needed.

The biggest challenge is that as my NAS grows, it costs significantly more to expand my backup space. Cloud storage and new external drives aren't cheap. If I had an easy way to keep a separate NAS offsite, that would considerably reduce ongoing costs.

[–] linearchaos@lemmy.world 4 points 1 year ago

Irreplaceable media: NAS -> Backblaze, and NAS -> JBOD via duplicacy for versioning.

Large ISOs that can be downloaded again: NAS -> JBOD and/or NAS -> offline disks.

Stuff that's critical leaves the house, stuff that would just cost me a hell of a lot of personal time to rebuild just gets a copy or two.

[–] raphael@lemmy.mira.pm 4 points 1 year ago* (last edited 1 year ago)

I back up locally to my NAS with Synology's Drive software, and the NAS does a 10-day rolling snapshot of the backup folder. Initially I had Hyper Backup set up to do a versioned backup from the NAS to a cloud provider.

But I got scared of the thought that a corruption would propagate through the whole backup chain. So now I do an additional backup of the most important stuff directly from my PC with restic + resticprofile to a Hetzner storage box. I know they don't make any promises about data reliability, but I think the chances of the local and remote backups breaking at the same time are pretty slim.

Restic sends a fail/done ping to an uptime-kuma instance I host myself to monitor the backups, which then notifies me via ntfy if backups fail or are missed for a couple of days.

[–] DawnOfRiku@lemmy.world 4 points 1 year ago

Personal files: Syncthing between all devices and a TrueNAS Scale NAS. TrueNAS does snapshots 4 times a day, with a retention policy of 30 days. From there, a nightly sync to Backblaze B2 happens, also with a 30 day retention policy. Occasional manual backups to external drives too.

Homelab/Servers: Proxmox VM and LXC container exports nightly to TrueNAS, with a retention policy of 7 days. A separate weekly export goes to a separate TrueNAS share that gets synced to B2 weekly, with a retention policy of 30 days. Also has occasional external drive backups.

[–] knaak@lemmy.world 4 points 1 year ago

I have a Raspberry Pi with an external drive and scripts that rsync each morning. Then I have S3 Glacier Deep Archive backups for off-site.
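A sketch of that morning job plus the off-site push; bucket and paths are invented:

```bash
#!/usr/bin/env bash
# Mirror the source to the external drive, then sync the mirror
# to S3 with the Deep Archive storage class.
rsync -a --delete /srv/data/ /mnt/external/backup/

aws s3 sync /mnt/external/backup/ s3://my-backup-bucket/offsite/ \
    --storage-class DEEP_ARCHIVE
```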

[–] milan@discuss.tchncs.de 3 points 1 year ago* (last edited 1 year ago)

I usually just use Restic (not just for servers); for big databases I pipe pg_dump directly into it, and for even bigger ones I recently moved to pgBackRest.
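The pg_dump-into-restic trick looks like this; the repo path and database name are placeholders:

```bash
# Stream the dump straight into the repository, no temp file needed.
pg_dump mydb | restic -r /srv/restic-repo backup \
    --stdin --stdin-filename mydb.sql
```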

I ping to a selfhosted Healthchecks instance to see if my backups still run. (or the other way around)

On my main desktop (which recently became a Mac, I am sorry) I currently use Autorestic for multiple locations... it's nice to have that YAML, but, well, I am used to bash scripts anyway, so it is not that big of a benefit for me I guess.

[–] paco@fedia.io 3 points 1 year ago

3-2-1 strategy: 3 copies of everything important, 2 on-site, 1 in the cloud. I have a TrueNAS Scale NAS running RAID5 on ZFS. All the laptops, desktops, etc. back up to the NAS (mostly Macs, so we use Time Machine over the network). So the original laptop/desktop is one copy, the NAS is a second copy on-site, and then TrueNAS has lots of cloud options. I use Amazon S3 myself, but there are lots of choices.

Prior to this I had a Synology NAS. It was "small" (6TB), so it had a RAID mirror of 6TB drives and a single 6TB external USB drive holding a backup of the mirrored pair (second copy on-site). Then I also used Synology's software to back up to S3.

For my Internet-facing VMs, they all run in xcp-ng and I use Xen Orchestra to manage them. I run regular snapshots nightly, and then use NFS to copy them to a cloud server. That's sloppy, and sometimes doesn't work. So the in-the-house stuff is backed up well. The VMs are mostly relying on Xen snapshots and RAID 5.

[–] wpuckering@lm.williampuckering.com 3 points 1 year ago (4 children)

I run all of my services in containers, and intentionally leave my Docker host as barebones as possible so that it's disposable (I don't backup anything aside from data to do with the services themselves, the host can be launched into the sun without any backups and it wouldn't matter). I like to keep things simple yet practical, so I just run a nightly cron job that spins down all my stacks, creates archives of everything as-is at that time, and uploads them to Wasabi, AWS S3, and Backblaze B2. Then everything just spins back up, rinse and repeat the next night. I use lifecycle policies to keep the last 90 days worth of backups.
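A rough sketch of that nightly job; the stack layout, bucket, and remote names are all invented:

```bash
#!/usr/bin/env bash
set -euo pipefail
STAMP=$(date +%F)
cd /srv/stacks

# Spin everything down for a consistent snapshot.
for stack in */; do
    docker compose --project-directory "$stack" stop
done

tar -czf "/tmp/stacks-$STAMP.tar.gz" -C /srv stacks

# Spin everything back up.
for stack in */; do
    docker compose --project-directory "$stack" start
done

# Three independent off-site copies.
aws s3 cp "/tmp/stacks-$STAMP.tar.gz" s3://my-backups/
rclone copy "/tmp/stacks-$STAMP.tar.gz" wasabi:my-backups
rclone copy "/tmp/stacks-$STAMP.tar.gz" b2:my-backups
rm "/tmp/stacks-$STAMP.tar.gz"
```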

[–] matt@matts.digital 3 points 1 year ago (1 children)

All my servers use ZFS for data storage and have VPNs between each other (just /30 point-to-point links). I use zfs-snapshot to take snapshots every 15 minutes, and nightly jobs do a ZFS send to dump everything to another machine with some storage.
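The snapshot-and-send pattern, with invented dataset and peer names:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Taken every 15 minutes via cron/systemd.
SNAP="tank/data@auto-$(date +%Y%m%d-%H%M)"
zfs snapshot "$SNAP"

# Nightly: incremental send of the newest snapshot to the peer,
# using the second-newest as the base.
PREV=$(zfs list -H -t snapshot -o name -s creation tank/data | tail -n 2 | head -n 1)
zfs send -i "$PREV" "$SNAP" | ssh 10.0.0.2 zfs recv -F backup/data
```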

[–] ipkpjersi@lemmy.one 3 points 1 year ago* (last edited 1 year ago) (1 children)

I usually write my own scripts with rsync for backups, since my OS installs are already pretty much automated with scripts as well.
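One common shape for a home-grown rsync backup script is hardlinked snapshots via `--link-dest`; this is a generic sketch with example paths, not necessarily what the scripts above do:

```bash
#!/usr/bin/env bash
set -euo pipefail
SRC=/home/user
DST=/mnt/backup
STAMP=$(date +%F-%H%M)

# Each run looks like a full tree, but unchanged files are hardlinks
# into the previous snapshot. The first run simply copies everything.
rsync -a --delete --link-dest="$DST/latest" "$SRC/" "$DST/$STAMP/"
ln -sfn "$DST/$STAMP" "$DST/latest"
```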

[–] skimdankish2@lemmy.world 3 points 1 year ago* (last edited 1 year ago)

I use...

  • Timeshift -> local backup onto my RAID array
  • borgbackup -> BorgBase online backup
  • GlusterFS -> experimenting with replicating certain apps across two Raspberry Pis
[–] 0xpr03@feddit.de 3 points 1 year ago* (last edited 1 year ago)

Daily offsite backups to a backup server via restic (plus a self-written wrapper for multiple targets). Restic can also run against almost anything else (sftp, S3 APIs, etc.). Kind of a modern duplicity/borg. Fully encrypted and incremental.

[–] mikehunt@lemmy.world 3 points 1 year ago

I back up everything locally in Proxmox on separate storage, another copy to a local NAS, and a third one to Backblaze's cloud storage.

[–] Bright5park@lemmy.world 3 points 1 year ago

I use RSnapshot to make incremental backups to an external hard drive, and (I know it's not a backup) run my two RAIDs (one for media, one for general data) in mirrored mode.

When I eventually upgrade my home server, I will go from two pairs of 2TB drives in RAID1 to four 8TB drives in either RAID5 or 6. I am still undecided whether I am willing to sacrifice 4TB of capacity to the redundancy gods in return for an extra hard drive that can fail without data loss.

I'm paying Google for their enterprise G Suite, which is still "unlimited", and using rclone's encrypted drive target to back up everything. I have a couple of scripts that make tarballs of each service's files, and do a full backup daily.

It's probably excessive, but nobody was ever mad about the fact they had too many backups if they needed them, so whatever.

[–] Faceman2K23@discuss.tchncs.de 3 points 1 year ago

I back up everything to my home server... then I run out of money and cross my fingers that it doesn't fail.

Honestly though, my important data is backed up in a couple of places, including a cloud service. 90% of my data is replaceable, so the 10% is easy to keep safe.

[–] JASN_DE@feddit.de 3 points 1 year ago

A kind of "extended" 3-2-1, more a 4-3-2. As nearly everything I host runs on Docker, I usually pause the stack, .tar.bz everything and back that up on several devices (NAS, off-site machine, external HDD).

The neat thing about keeping every database in its own container is the resulting backup "package", which can easily be restored as a whole without having to mess with db dumps, permissions, etc.

[–] XpeeN@sopuli.xyz 2 points 1 year ago

Nextcloud with folder sync for both mobile and PC, backs up everything I need.

[–] ruud@lemmy.world 2 points 1 year ago

I use borgbackup

[–] Totendax@feddit.de 2 points 1 year ago

I back up an encrypted and heavily compressed archive to my local NAS and to Google Drive every night. The NAS keeps the version from the first of every month plus the previous 7 days of history; Google Drive keeps just the latest.
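Such a nightly job might look roughly like this; the passphrase file, paths, and the `gdrive:` remote are placeholders:

```bash
#!/usr/bin/env bash
set -euo pipefail
STAMP=$(date +%F)
OUT="/mnt/nas/backups/data-$STAMP.tar.xz.gpg"

# Heavy compression (xz), then symmetric encryption.
tar -cJf - /srv/data \
    | gpg --batch --symmetric --passphrase-file /root/.backup-pass \
    > "$OUT"

rclone copy "$OUT" gdrive:backups
```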

[–] ipipip@iusearchlinux.fyi 2 points 1 year ago (1 children)

I don't back up my personal files, since they are all more or less contained in Proton Drive. I do run a handful of small databases, which I back up to... Telegram.

[–] ilikedatsyuk@lemmy.world 2 points 1 year ago (1 children)

Ah, yes, the ole' "backup a database to telegram" trick. Who hasn't used that one?!?

[–] trashographer@vlemmy.net 2 points 1 year ago (1 children)

I did. Split a PGP-encrypted tarball into 2GB files and uploaded 600GB to Saved Messages.
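The trick itself is a short pipeline; paths and passphrase are placeholders, and the 2G chunks match the file sizes mentioned above:

```bash
# Encrypt, then split into chunks small enough to upload.
tar -cf - /srv/db-dumps \
    | gpg --batch --symmetric --passphrase-file ~/.pass \
    | split -b 2G - backup.tar.gpg.

# Reassemble later: cat backup.tar.gpg.* | gpg -d | tar -xf -
```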

[–] vivia@sh.itjust.works 2 points 1 year ago (2 children)

For my server I use duplicity, with daily incremental backups that send the encrypted diffs away. I researched a few more options some time ago but nothing really fit my use case, and I'm also not super happy with duplicity. Thanks for suggesting borgbackup.

For my personal data I have a Nextcloud instance on an RPi4 at my parents' place, which also syncs with a laptop I've left there. For offline and off-site storage, I use the good old strategy of bringing over an external hard drive, rsyncing to it, and bringing it back.

[–] kat@feddit.nl 3 points 1 year ago

No problem! I also see Restic a lot in this thread, so I'll probably try both at some point.

[–] jon@lemmy.tf 2 points 1 year ago

Got a Veeam community instance running on each of my VMware nodes, backing up 9-10 VMs each.

Using Cloudberry for my desktop, laptop and a couple Windows VMs.

Borg for non-VMware Linux servers/VMs, including my WSL instances, game/AI baremetal rig, and some Proxmox VMs I've got hosted with a friend.

Each backup agent dumps its backups into a share on my NAS, which then has a cron task to do weekly uploads to GDrive. I also manually do a monthly copy to an HDD and store it off-site with a friend.
