this post was submitted on 01 Nov 2023
1 points (100.0% liked)

Self-Hosted Main


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

For Example

We welcome posts that include suggestions for good self-hosted alternatives to popular online services, how they are better, or how they give back control of your data. Also include hints and tips for less technical readers.



I am trying to set up a restic job to back up my docker stacks, and with half of everything owned by root it becomes problematic. I've been wanting to look at podman so everything isn't owned by root, but for now I want to back up what I've already built.

Also, how do you deal with docker containers that have databases? Do you have to create exports for every container that runs some form of database?

I've spent the last few days moving all my docker containers to a dedicated machine. I was using a mix of NFS and local storage before, but now I'm doing everything on local NVMe. My original plan was to keep everything on NFS so I could handle backups there, and I might go back to that.
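For reference, a minimal sketch of such a root-run restic job, assuming a compose project under /opt/stacks/myapp with a Postgres service named db and an already-initialized restic repository (paths and names are illustrative):

#!/usr/bin/env bash
set -euo pipefail

STACK=/opt/stacks/myapp                       # illustrative compose project
export RESTIC_REPOSITORY=/mnt/backup/restic   # illustrative repository
export RESTIC_PASSWORD_FILE=/root/.restic-pass

# Dump the database from inside the container so the backup is consistent
docker compose -f "$STACK/docker-compose.yml" exec -T db \
  pg_dump -U postgres mydb > "$STACK/db-dump.sql"

# Back up the whole stack directory (compose file, bind mounts, fresh dump)
restic backup "$STACK"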

top 27 comments
[–] nyrosis@alien.top 1 points 1 year ago

ZFS snapshots combined with replication to another box. That and a cronjob on packaging up my compose/config files.
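Roughly, and with made-up pool, dataset and host names, that could look like:

#!/usr/bin/env bash
set -euo pipefail

# Package up the compose/config files (path is illustrative)
tar -czf /tank/backups/compose-$(date +%F).tar.gz /opt/stacks

# Snapshot the dataset that holds the docker data
SNAP="tank/docker@$(date +%F)"
zfs snapshot "$SNAP"

# Replicate to another box (plain zfs send/recv shown; tools like
# syncoid do the same thing incrementally)
zfs send "$SNAP" | ssh backup-host zfs recv -F backup/docker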

[–] VirtualDenzel@alien.top 1 points 1 year ago

I build my own docker images. All my images are built to run with a set UID/GID when specified in Ansible.

This way only my service daemon can do stuff. It also makes sure I never have issues with borgbackup etc.

[–] linxbro5000@alien.top 1 points 1 year ago (3 children)

I have a backup-script (running as root) that

  • stops all containers
  • runs the restic-backup
  • starts all containers
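A minimal sketch of such a script (repository location and paths are assumptions):

#!/usr/bin/env bash
set -euo pipefail

export RESTIC_REPOSITORY=/mnt/backup/restic   # assumed repository
export RESTIC_PASSWORD_FILE=/root/.restic-pass

# Remember which containers were running so only those get restarted
running=$(docker ps -q)

docker stop $running
restic backup /opt/stacks                     # assumed data/compose location
docker start $running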
[–] mirisbowring@alien.top 1 points 1 year ago (1 children)

I had this before, but it created struggles with some containers, since they run specific checks and scans during startup, which resulted in high CPU and disk load.

Since Unraid supports ZFS, I'm using that for the docker stuff and do snapshots to an external disk as backup.

No need to stop containers anymore.

[–] Bonsailinse@alien.top 1 points 1 year ago

If you work with databases it's still safer to stop incoming data for the duration of the backup. I don't know why a higher CPU load would be a problem; those checks don't run long or do so much that your system would be under much stress. Do your backups at 3am if you still think the minute of higher load would cause any problems.

[–] karitchi@alien.top 1 points 1 year ago

A simpler method would be to stop/start the Docker daemon instead of the containers; it works smoothly.
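On a systemd host that would be something like the following (assuming the usual docker.service/docker.socket units; the restic call is just an example of whatever backup step follows):

# stop the daemon and its socket so it isn't re-activated on demand
systemctl stop docker.socket docker
restic backup /opt/stacks
systemctl start docker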

[–] root-node@alien.top 1 points 1 year ago

Have a look at https://github.com/minituff/nautical-backup, it does a similar thing

[–] ElevenNotes@alien.top 1 points 1 year ago

Don't run docker as root, don't run containers as root, pretty simple to be honest.

[–] esturniolo@alien.top 1 points 1 year ago (1 children)

KISS method: a script that copies the data on the fly to the /tmp dir, compresses it, encrypts it and moves it to the destination using rclone. Running every hour, 4 hours or 24 hours, depending on the container.

Never fails. Neither the backups nor the restores.
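A sketch of that kind of script, with illustrative names and an assumed rclone remote called remote:

#!/usr/bin/env bash
set -euo pipefail

APP=vaultwarden                       # illustrative container name
STAGE=/tmp/backup-$APP
mkdir -p "$STAGE"

# Copy + compress the data dir on the fly into /tmp
tar -czf "$STAGE/$APP-$(date +%F-%H%M).tar.gz" -C "/opt/stacks/$APP" data

# Encrypt the archive (gpg writes the encrypted copy next to the original)
gpg --batch --pinentry-mode loopback --symmetric \
  --passphrase-file /root/.backup-pass "$STAGE/$APP-"*.tar.gz

# Move only the encrypted file to the destination and clean up
rclone move "$STAGE" "remote:backups/$APP" --include "*.gpg"
rm -rf "$STAGE"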

[–] atheken@alien.top 1 points 1 year ago

I mean, snapshotting and piping it to an rclone mount is arguably simpler than trying to do your own ad hoc file synchronization, and it also doesn't require 2x the storage space.

[–] rrrmmmrrrmmm@alien.top 1 points 1 year ago

Can't you run a restic container where you mount everything? If the restic container is insecure, everything is, of course.

But yes, I also migrated to rootless Podman for this reason and a bunch of others.
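Something along these lines, using the restic/restic image from Docker Hub (paths and the password-file location are assumptions):

# Mount the data read-only and the repository read-write,
# then let the containerized restic do the backup
docker run --rm \
  -v /opt/stacks:/data:ro \
  -v /mnt/backup/restic:/repo \
  -v /root/.restic-pass:/restic-pass:ro \
  -e RESTIC_REPOSITORY=/repo \
  -e RESTIC_PASSWORD_FILE=/restic-pass \
  restic/restic backup /data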

[–] RydRychards@alien.top 1 points 1 year ago

I used to let root start the backup job

[–] nobackup42@alien.top 1 points 1 year ago

I converted them to use Podman.

[–] cbunn81@alien.top 1 points 1 year ago

This is one reason I prefer FreeBSD jails. They are each in a separate ZFS filesystem, with a separate filesystem for configuration files. So all I have to do is regular snapshots and send those to a backup pool.

[–] SamSausages@alien.top 1 points 1 year ago

I do this at the file system level, not the file level, using zfs.

Unless the container has a database, I use zfs snapshots. If it has a database, my script dumps the database first and then does a ZFS snapshot. Then that snapshot is sent via sanoid to a zfs disk that is in a different backup pool.

This is a block level backup, so it only backs up the actual data blocks that changed.
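As a sketch (container, database and dataset names are made up; syncoid is the replication companion that ships with sanoid):

#!/usr/bin/env bash
set -euo pipefail

# Dump the database first so the snapshot contains a consistent export
docker exec myapp-db pg_dump -U postgres mydb > /tank/docker/myapp/db-dump.sql

# Block-level snapshot of the dataset
zfs snapshot "tank/docker@backup-$(date +%F)"

# Send it to the backup pool
syncoid tank/docker backuppool/docker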

[–] Zeal514@alien.top 1 points 1 year ago

First, I try not to have things owned by root, but some containers need special privileges that have to be respected.

So rsync -O will copy the directory retaining permissions and ownership of all files.
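For example (run as root; -a is what actually preserves permissions and ownership, while -O just skips directory modification times; paths are illustrative):

rsync -aO /opt/stacks/ /mnt/backup/stacks/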

[–] PaulEngineer-89@alien.top 1 points 1 year ago

Don’t backup the container!!

Map volumes with your data to physical storage and then simply backup those folders with the rest of your data. Docker containers are already either backed up in your development directory (if you wrote them) or GitHub so like the operating system itself, no need tk backup anything. The whole idea of Docker is the containers are ephemeral. They are reset at every reboot.

[–] root-node@alien.top 1 points 1 year ago (2 children)

For backups I use Nautical Backup.

For the "owned by root" problem, I ensure all my docker compose files have [P]UID and [P]GID set to 1000 (the user my docker runs under). All 20 of my containers run like this with no issues.

How are you launching your containers? Docker compose is the way; I have set the following in all of mine:

environment:
  - PUID=1000
  - PGID=1000
user: "1000:1000"
[–] human_with_humanity@alien.top 1 points 1 year ago (1 children)

Do you add both the user and env variables, or just one?

[–] root-node@alien.top 1 points 1 year ago

I add both because why not. It doesn't hurt.

[–] MoneyVirus@alien.top 1 points 1 year ago

I only back up the data I need to back up (mapped volumes).

Restore: create a fresh container and map the volumes again.

[–] Do_TheEvolution@alien.top 1 points 1 year ago

From my basic selfhosted experience... I run kopia as root; my shit uses bind mounts, so all I care about is in that directory.

And so far it works fine: just down the old container, rename the directory, copy the directory back from the nightly backup and start the container.

But yeah, if there is something I care about, I schedule database dumps, like here for bookstack or vaultwarden...

To have something extra in case things won't start.
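The restore flow described above, roughly (paths and names are illustrative):

cd /opt/stacks/bookstack
docker compose down                              # stop the old container
mv data data.broken                              # keep the old directory around
cp -a /mnt/nightly-backup/bookstack/data ./data  # copy the directory back from the backup
docker compose up -d                             # start the container again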

[–] trisanachandler@alien.top 1 points 1 year ago

Try to not run containers as root?

[–] katbyte@alien.top 1 points 1 year ago

I don't. I created a docker VM (and a couple of others) and then I back up the VMs (Proxmox + PBS make this very easy) with all their data in /home/docker/config/*

I used to have them run off networked storage, but I found it to be too slow and to have other issues.

This also means that for the primary important services, the VM runs in HA and moves to another node when needed.

[–] TheSmashy@alien.top 1 points 1 year ago

sudo crontab -e
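i.e. a root crontab entry along these lines (the script path is just an example):

# run the backup script every night at 03:00
0 3 * * * /usr/local/bin/docker-backup.sh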

[–] McGregorMX@alien.top 1 points 1 year ago

I have my config and data volumes mounted to a share on truenas, that share replicates its snapshots to another truenas server. This is likely not ideal for everyone, but it works for me. My friend that also uses docker has it backed up with duplicati.