this post was submitted on 09 Nov 2023
3 points (100.0% liked)

Self-Hosted Main


Hi all,

As I'm running a lot of Docker containers in my "self-hosted cloud", I'm a little worried about pulling a malicious Docker container at some point. And I'm not a dev, so I have very limited ability to inspect the source code myself.

Not every Docker container is a "nextcloud" image with hundreds of active contributors and many eyes on the source code. Many self-hosted projects are quite small, GitHub accounts can be hacked, etc. ...

What I'm doing at the moment is:

Project selection:
- only select docker projects with high community activity on GitHub and a good track record

Docker networks:
- use separate isolated networks for every container without internet access
- if certain APIs need internet access (e.g. geolocation data), I use an NGINX proxy that forwards only that specific domain (a self-made outgoing application firewall, so to speak)
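
Roughly, that pattern looks like this in compose (hostnames and image are made-up placeholders, and it assumes the stream module that ships in the stock nginx image):

    services:
      app:
        image: example/app:latest           # placeholder for the actual service
        networks:
          - app_internal                    # no direct internet access
      egress_proxy:
        image: nginx:alpine
        volumes:
          - ./nginx.conf:/etc/nginx/nginx.conf:ro
        networks:
          app_internal:
            aliases:
              - api.geo-example.com         # the app resolves the API hostname to this proxy
          egress: {}                        # only the proxy sits on a network with internet

    networks:
      app_internal:
        internal: true                      # containers here cannot reach the internet directly
      egress: {}

    # nginx.conf on the proxy: TLS passthrough to a single allowed destination
    events {}
    stream {
      server {
        listen 443;
        proxy_pass api.geo-example.com:443; # everything else simply isn't reachable
      }
    }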

Multiple LXC containers:
- I split my Docker containers across multiple LXC instances via Proxmox; some sensitive containers like Bitwarden run on their own LXC instance
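
For reference, creating one of those dedicated instances on the Proxmox host looks roughly like this (VMID, template name and sizes are placeholders; the nesting/keyctl features are needed so Docker can run inside the unprivileged LXC):

    pct create 201 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
      --hostname bitwarden-lxc \
      --unprivileged 1 \
      --features nesting=1,keyctl=1 \
      --cores 2 --memory 2048 --rootfs local-lvm:8 \
      --net0 name=eth0,bridge=vmbr0,ip=dhcp
    pct start 201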

Watchtower:
- no automatic updates, but manual updates once per month and testing afterwards
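
The monthly pass itself is nothing fancy, per compose stack roughly (the path is just an example):

    cd /opt/stacks/nextcloud     # example path to one stack
    docker compose pull          # fetch the newer images
    docker compose up -d         # recreate containers on the new images, then test
    docker image prune -f        # clean up the superseded images afterwards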

Any other tips? Or am I worrying too much? ;)

top 6 comments
[–] ck_@discuss.tchncs.de 2 points 1 year ago

You are not worrying too much.

Docker containers are notoriously riddled with outdated, security-issue-laden content. Even reputable creators (e.g. Nextcloud) only really look after their own part of the container, and rarely release new builds when system dependencies could be updated, even less so for the base images they depend on. So yes, Docker containers should always be run in a very secure environment, and doing so is by no means trivial, given that Docker itself runs as root. Best advice, if you can: don't run Docker containers if you don't really have to, and don't run them if you are not sure what you are getting into.
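
If you want to see how bad it is for a given image, run it through a scanner like Trivy; even popular images usually light up with known CVEs from the base image alone:

    # scan an image for known CVEs in OS packages and bundled dependencies
    trivy image nextcloud:latest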

[–] dcabines@alien.top 2 points 1 year ago

I prefer images from a known source like linuxserver.io.

[–] nukacola2022@alien.top 2 points 1 year ago

Since you are using LXC/LXD, make sure that AppArmor is enabled on the host and ensure that a configuration profile exists (should be a decent default one available) that blocks the containers from reading things like the /etc/passwd file.
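
On a Proxmox host that boils down to something like this (the VMID is a placeholder; lxc-container-default-cgns is one of the stock profiles):

    # on the host: confirm AppArmor is active and profiles are loaded
    aa-status | head

    # /etc/pve/lxc/<vmid>.conf: pin an explicit profile for that container
    lxc.apparmor.profile: lxc-container-default-cgns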

I personally run all containers on CentOS/Alma/Fedora systems specifically to take advantage of the strong SELinux container policies.
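
For example, the :Z suffix on a volume relabels it so only that container's SELinux context can touch it (image and path are just examples):

    # the bind mount gets a container-private SELinux label thanks to :Z
    podman run -d --name web -p 8080:80 \
      -v /srv/web-data:/usr/share/nginx/html:Z \
      docker.io/library/nginx:alpine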

Other things you can do would be to rebuild public images, patch them, and save them to your private registry. I find that not all container maintainers patch as aggressively as I would like. Furthermore, you can look into running containers as non root and use a non root “daemon” like Podman instead of Docker.
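
For example (registry hostname and tags are placeholders):

    # Dockerfile: rebuild a public image with its OS packages patched
    FROM nextcloud:apache
    RUN apt-get update && apt-get -y upgrade && rm -rf /var/lib/apt/lists/*

    # build, tag and push the result to the private registry
    docker build -t registry.lan:5000/nextcloud:apache-patched .
    docker push registry.lan:5000/nextcloud:apache-patched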

[–] WiseCookie69@alien.top 1 points 1 year ago

Granted I use Kubernetes, but here you go:

  • I run stuff with user namespaces, so even a root process within the container is unprivileged on the host
  • I isolate namespaces via NetworkPolicies
    • Even my Nextcloud instance has no business checking upstream for updates (I have Renovate for that)
  • I use securityContexts to make my containers as unprivileged as possible (sketched below)
    • drop all capabilities
    • enforce a read-only container filesystem
    • enforce running as a specific UID/GID (many maintainers are lazy and just run their stuff as root)
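
Roughly what that securityContext plus a default-deny egress NetworkPolicy look like (image, UID and labels are placeholders):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: app
    spec:
      replicas: 1
      selector:
        matchLabels: { app: app }
      template:
        metadata:
          labels: { app: app }
        spec:
          containers:
            - name: app
              image: nextcloud:apache            # placeholder image
              securityContext:
                runAsNonRoot: true
                runAsUser: 1000                  # enforce a specific UID/GID
                runAsGroup: 1000
                allowPrivilegeEscalation: false
                readOnlyRootFilesystem: true     # read-only container filesystem
                capabilities:
                  drop: ["ALL"]                  # drop all capabilities
    ---
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny-egress
    spec:
      podSelector: {}                            # applies to every pod in the namespace
      policyTypes: ["Egress"]                    # no egress rules listed, so all egress is denied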

It's funny how, as a self-hoster with no open ports, supply chain attacks are almost my biggest worry... Here are the tidbits I've collected so far, but I'm just getting into this, so take it with a grain of salt ...

  1. Working out how to run my containers as non-root... Most support this already. It's a matter of adding user: "UID:GID" in the compose file and making sure that user can read and write any dirs you want to map, and it's done. Now whatever runs in the container does not have root, so there's less chance of shenanigans inside the container and on the host (see the compose sketch after this list).
    Some smaller projects you have to tweak or rebuild.*
  2. If I can manage it, I'll also run the Docker daemon rootless as the next milestone. I already had this working on a Proxmox Ubuntu VM, but could not get it to work on a netcup VPS, for example.
  3. Docker sock proxy
  4. VLANs
  5. in compose files, if the containers can handle it:
    security_opt:
    - no-new-privileges:true
    cap_drop:
    - ALL
  6. (I still have to work out the secrets stuff! Secrets in files, Ansible Vault, ...)

(* One example of non-rootifying a container: I got Tempo running as non-root the other night. It is based on an nginx Alpine Linux image, and after a while I found an nginx.conf online where all the dirs are redirected to /tmp so nginx can still run when a non-root user launches it. Mapped that config file over the one in the container, set it to run as my user, and it works. Did not even have to rebuild it.)
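
For reference, points 1, 5 and 6 together end up looking something like this in a compose file (image, UID/GID and secret name are placeholders):

    services:
      app:
        image: nginx:alpine                # placeholder image
        user: "1000:1000"                  # point 1: run as a non-root UID:GID
        security_opt:
          - no-new-privileges:true         # point 5
        cap_drop:
          - ALL
        secrets:
          - db_password                    # point 6: ends up at /run/secrets/db_password
        volumes:
          - ./data:/data                   # make sure UID 1000 can read/write this dir

    secrets:
      db_password:
        file: ./secrets/db_password.txt    # keep this file out of version control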

Depends on your security priorities and whether you trust the software you plan on using. Securing software/Docker containers can be as deep a rabbit hole as you're willing to go.