Posted in Homelab on 22 Nov 2023

Started off by:

  1. Enabling unattended updates
  2. Allowing SSH login with keys only (see the sketch after this list)
  3. Creating a user with sudo privileges
  4. Disabling root login
  5. Enabling ufw with only the necessary ports
  6. Disabling ping
  7. Changing the default SSH port 21 to something else.
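On Debian/Ubuntu, items 2 through 5 (plus the port change) mostly come down to a few sshd_config lines and a handful of ufw rules. A minimal sketch, where the port number, username and open ports are placeholders:

```
# /etc/ssh/sshd_config  (then: sudo systemctl restart ssh)
Port 2222                  # placeholder non-default port
PermitRootLogin no         # item 4
PasswordAuthentication no  # item 2: key-based login only
PubkeyAuthentication yes

# item 3: non-root admin user ("alice" is a placeholder)
sudo adduser alice
sudo usermod -aG sudo alice

# item 5: default-deny firewall, open only what is needed
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 2222/tcp    # the SSH port chosen above
sudo ufw allow 8006/tcp    # Proxmox web UI, if this is the host
sudo ufw enable
```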

Got the ideas from networkchuck

Did this on the proxmox host as well as all VMs.

Any suggestions?

[–] Zerafiall@alien.top 3 points 1 year ago (2 children)
  1. Don’t bother with disabling ICMP. You’ll use it way more than it’s worth disabling, and something like nmap -Pn -p- X.X.X.0/24 will find all your servers anyway (the same can be said for SSH and port 22, but moving that does stop some bots)

  2. As long as you’re not exposing anything to the global internet, you really don’t need a lot. The firewall should already deny all inbound traffic.

The next step is monitoring. It's one thing to think your stuff is safe and locked down. It's another thing to know your stuff is safe. Something like Observium, Nagios, Zabbix, or similar is a great way to make sure everything stays up, as well as giving you insight into what everything is doing. Even Uptime Kuma is a good start. Then something like Wazuh to watch for security events, and OpenVAS or Nessus to look for holes. I'd even throw in CrowdSec for host-based intrusion detection. (Warning: this will quickly send you down the rabbit hole of being a SOC analyst for your own home.)
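If you want the lowest-effort starting point, Uptime Kuma is a single container. A sketch assuming Docker is already installed (port and volume follow its README defaults):

```
docker run -d --restart=always \
  -p 3001:3001 \
  -v uptime-kuma:/app/data \
  --name uptime-kuma \
  louislam/uptime-kuma:1
# then open http://<host>:3001 and add HTTP/ping/TCP monitors
```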

[–] Internet-of-cruft@alien.top 2 points 1 year ago

Block outbound traffic too.

Open up just what you need.

Segment internally and restrict access. You don't need more than SSH to a Linux server, or perhaps access to the web interface of an application running on it.
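With plain ufw on a Linux host, that default-deny-in-both-directions approach looks roughly like this. A sketch; the interface name, subnet and allowed ports are placeholders, not a complete list:

```
sudo ufw default deny incoming
sudo ufw default deny outgoing      # block outbound too
sudo ufw allow out 53               # DNS
sudo ufw allow out 123/udp          # NTP
sudo ufw allow out 80/tcp           # apt mirrors
sudo ufw allow out 443/tcp          # HTTPS
# SSH only from a management subnet on a specific interface
sudo ufw allow in on ens18 proto tcp from 192.168.10.0/24 to any port 22
sudo ufw enable
```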

[–] jmartin72@alien.top 2 points 1 year ago (1 children)

Don't expose anything to the outside world. If you do, use something like Cloudflare tunnels or Tailscale.

[–] umbrella@lemmy.ml 2 points 1 year ago* (last edited 1 year ago)

Or host a VPN on it and get in through that. Many of these microservices are insecure, and the real risk comes from opening them up to the Internet. This is important.

Also set permissions properly if applicable

[–] avdept@alien.top 1 points 1 year ago

If your homelab is local-only, all of these are unnecessary if you're the only one who uses it. If you want to expose the homelab to the internet, you can pretty much use a VPN to connect to it without needing to expose the whole homelab: just one port for the VPN.

Do not over complicate things
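For scale, the "just one VPN port" setup really is tiny. A WireGuard sketch (WireGuard is just my pick here, the comment only says VPN; keys, addresses and the 51820 port are placeholders):

```
# /etc/wireguard/wg0.conf on the homelab side
[Interface]
Address    = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

# peer = your laptop/phone
[Peer]
PublicKey  = <client-public-key>
AllowedIPs = 10.8.0.2/32

# bring it up and expose only that single UDP port
sudo wg-quick up wg0
sudo ufw allow 51820/udp
```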

[–] PolicyArtistic8545@alien.top 1 points 1 year ago

Automatic updates and strong passwords. I know that automatic updates can break a system, but I’ve never had them break anything super critical in my home that couldn’t be fixed with 10 minutes of effort. I can think of three things that have broken and required fixing in the last 5 years of auto-updating software. I’d much rather have a broken piece of software than a security breach. To those who manually update: how fast after the patch notice are you patching? One day, two days, one week, monthly? What if you are sick or on vacation? I can guarantee mine updates within 24 hours every time.

[–] LAKnerd@alien.top 1 points 1 year ago

Air gapped, no Internet access. I don't use Internet services for any of my stuff though so I can get away without direct Internet access

[–] darthrater78@alien.top 1 points 1 year ago

By only having it on when I need it.

People that have theirs on 24/7....why? I used Home Assistant to automate mine so I can bring it up remotely if needed.

Don't worry about it, no one wants to hack your Plex server xD. Just don't expose things directly to the internet and you'll be fine.

[–] jjaAK3eG@alien.top 1 points 1 year ago

Hosted reverse proxy and VPN servers. I have no open ports on my home network.

[–] blentdragoons@alien.top 1 points 1 year ago (2 children)

automatic updates are a great strategy for breaking the system

[–] 100GHz@alien.top 1 points 1 year ago

Some would argue that not having them is a great strategy for breaking into the system :P

[–] SirLagz@alien.top 1 points 1 year ago (1 children)

Automatic backups are great for recovering from broken updates lol

[–] blentdragoons@alien.top 1 points 1 year ago

agreed. i do daily backups for everything to s3
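For reference, a daily-to-S3 job can be as small as restic plus a cron entry. A sketch; restic is my choice here, and the bucket name, paths and retention are placeholders (AWS keys and the repo password go in the sourced env file):

```
# one-time: create the repository
restic -r s3:s3.amazonaws.com/my-homelab-backups init

# /etc/cron.d/restic-backup  (daily at 03:00)
0 3 * * * root . /root/restic.env && restic -r s3:s3.amazonaws.com/my-homelab-backups backup /etc /srv && restic -r s3:s3.amazonaws.com/my-homelab-backups forget --keep-daily 7 --keep-weekly 4 --prune
```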

[–] FluffyBunny-6546@alien.top 1 points 1 year ago

Armed guards at every entrance.

[–] theRealNilz02@alien.top 1 points 1 year ago (1 children)

Unattended updates are a recipe for trouble. I'd never enable that.

I have no public services apart from 2 OpenVPN servers. To access everything else I connect to one of the OpenVPNs and use the services through the VPN routings.

The VPN can only be accessed if you possess a cert and key. I could even implement 2fa but for now SSL auth works securely enough.

[–] phein4242@alien.top 1 points 1 year ago

I run unattended-upgrades on all the debian/ubuntu deployments I manage. One of the deployments even has automatic reboots enabled. I still do major upgrades by hand/terraform, but the process itself works flawlessly in my experience.
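For anyone who wants the same setup, the automatic part is a couple of stock options. A sketch of just the relevant lines on Debian/Ubuntu:

```
// /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";

// /etc/apt/apt.conf.d/50unattended-upgrades
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "03:30";
```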

[–] lunakoa@alien.top 1 points 1 year ago

My home lab and production network are separated by a firewall.

I have backups and plans to rebuild my lab; I actually do it regularly.

My labs do risky things; I get comfortable with those things before doing them in production.

[–] RayneYoruka@alien.top 1 points 1 year ago

Filter incoming traffic from countries with malicious attacks :)
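Without a firewall vendor's GeoIP feature, the usual DIY version is an ipset fed from a published country list. A sketch; ipdeny.com is one commonly used source and "cn" is just an example zone, so verify the URL and refresh the set on a schedule:

```
ipset create geoblock hash:net
curl -s https://www.ipdeny.com/ipblocks/data/countries/cn.zone |
  while read -r net; do ipset add geoblock "$net"; done
iptables -I INPUT -m set --match-set geoblock src -j DROP
```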

[–] gargravarr2112@alien.top 1 points 1 year ago (1 children)
  1. Domain auth (1 place to set passwords and SSH keys), no root SSH
  2. SSH by key only
  3. Passworded sudo (last line of defence; see the sudoers sketch after this list)
  4. Only open firewall hole is OpenVPN with security dialled up high
  5. VLANs - laptops segregated from servers
  6. Strict firewall rules between VLANs
  7. TLS on everything
  8. Daily update check alerts (no automatic updates, but the alert persists until I deal with it)
  9. Separate isolated syslog server for audit trails
  10. Cold backups
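Point 3 is a single sudoers line; a sketch of the difference (edit via visudo, "alice" is a placeholder):

```
# passworded sudo: the Debian/Ubuntu default for the sudo group
%sudo   ALL=(ALL:ALL) ALL

# passwordless sudo: anything running as alice can escalate without a prompt
alice   ALL=(ALL:ALL) NOPASSWD: ALL
```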
[–] mxrider108@alien.top 1 points 1 year ago (1 children)

What are the risks of passwordless sudo? Is it mainly just if someone has physical access to the machine or if you run a malicious program?

[–] wallacebrf@alien.top 1 points 1 year ago (9 children)
  1. strict 3-2-1 backup policy
  2. VLANs. all VLANs are controlled by my Fortigate FWF-61E (soon to be replaced by a FG-91G). the VLANs have strict access permissions on a per-device basis on what they can and cannot access.
    1. CORE network where the NAS live
      1. only specific devices can access this VLAN, and most only have access to the SMB ports for data access. even fewer devices have access to the NAS management ports
      2. this network has restrictions on how it accesses the internet
      3. I have strict IPS, web-filtering, DNS filtering, network level fortigate AV, deep SSL inspection, and intrusion protection activities
      4. everything is logged: any and all incoming and outgoing connections to/from the internet, as well as any LAN-based local communications.
    2. Guest wifi
      1. can ONLY access the internet
      2. has very restrictive web and DNS filtering
      3. I have strict IPS, web-filtering, DNS filtering, network level fortigate AV, basic SSL inspection, and intrusion protection activities
    3. APC Network Management Cards
      1. can ONLY access my SMTP2GO email client so it can send email notifications
      2. it does have some access to the CORE network (NTP, SYSLOG, SNMP)
      3. very select few devices can access the management ports of these cards
      4. I have strict IPS, web-filtering, DNS filtering, network level fortigate AV, basic SSL inspection, and intrusion protection activities
    4. Ethernet Switch / WIFI-AP management
      1. very select few devices can access the management ports of the switches
      2. ZERO internet access allowed
    5. ROKUs
      1. restrictive web and DNS filtering to prevent ads and tracking. Love seeing the space where ads SHOULD be and seeing a blank box.
      2. can access ONLY the IP of my PLEX server on the CORE network, on ONLY the PLEX port for the services PLEX requires.
    6. IoT devices
      1. Internet access ONLY except for a few devices like my IoTaWatt that needs CORE network access to my NAS on ONLY the port required for InfluxDB logging.
    7. Wife's computer
      1. because of HIPAA requirements from her job, i have ZERO logging and no SSL inspection, but do have some web and DNS filtering.
    8. print server
      1. zero internet access, and only the machines that need to print can access.
  3. as already indicated i have a fortigate router which has next generation firewall abilities to protect my network
  4. while i do not have automatic updates, i am notified when updates are available for my router, my NAS, the switches, and the APC network cards. i always like to look at the release notes and ensure there are no known issues that could negatively impact my operations. I do have most of my docker containers auto-update using watchtower (see the snippet at the end of this comment).
  5. i keep SSH disabled and only enable when i ACTUALLY need it, and when i do, i use certificate based authentication
  6. i have disabled the default admin account on ALL devices and made custom admin/root users but also have "normal" users and use those normal users for everything UNLESS i need to perform some kind of activity that requires root/admin rights.
  7. on all devices that have their own internal firewall, i have enabled it to only allow access from VLAN subnets that i allow, and go even further by restricting which IPs on those VLANS can access the device
  8. changing default ports is fairly useless in my opinion as once someone is on your network it is trivial to perform a port scan and find the new ports.
  9. all windows based endpoint machines
    1. have strict endpoint control using fortigate's fortiguard software with an EMS server. this allows me to enforce that machines meet minimum security specifications.
    2. i use group policy to enforce restrictive user environments to prevent installation of programs, making system changes, accessing the C: drive etc as this prevents a decent amount of malware from executing
    3. antivirus must be enabled and active or the endpoint becomes quarantined.
    4. if the system has unusual behavior it is automatically quarantined and i am notified to take a look
    5. even though the fortigate router blocks all ads and trackers, i also use uBlock Origin to prevent ads and trackers from running in the browser, as ads are now one of the most common points of entry for malware
    6. i use ESET antivirus, which ties into the fortiguard endpoint protection to ensure everything on the machines is OK
  10. for all phones/tablets i have Adguard installed which blocks all ads and malicious web sites and tracking at the phones level

this is not even all of it.

the big takeaway is that i try to layer things. the endpoint devices are the most important to protect and monitor, as those are the foothold something needs before it can move through the network.

i then use network level protections to secure the remaining portions of the network from other portions of the network.
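The watchtower piece from point 4 is a single container watching the Docker socket. A sketch using its documented run command; the 24-hour interval is just an example:

```
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower --interval 86400
```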

[–] supercamlabs@alien.top 2 points 1 year ago

Messy...just messy

[–] mss-cyclist@alien.top 1 points 1 year ago (1 children)

Unattended updates can be tricky.

Think of config changes which need manual adjustment, or a broken update. This is something you would probably not like to happen at night without notice. Could easily break your vital systems (e.g. homeassistant, authentication, vaults...)

[–] Daniel15@alien.top 1 points 1 year ago

+1

Use unattended updates ONLY for bug and security fixes, not for minor or major releases. Ensure you configure your auto-updaters properly!

Debian unattended-upgrades only upgrades packages from the main and security repos by default, so it should be fine since no major updates are performed within a particular Debian version.

[–] null_rm-rf@alien.top 1 points 1 year ago

Not forwarding ports. I use Tailscale Funnel.

[–] tabortsenare@alien.top 1 points 1 year ago

Internet > Firewall (IP whitelist, IPS/IDS, yada yada) > DMZ / VLAN > Proxmox w/ FW:$true (rule only for game ports) > GameServer > deny all traffic from GameServer to go anywhere but the internet

Proxmox server has firewall, all the hosts on proxmox have firewall enabled (in proxmox). Only allow my main device to access. No VLAN crosstalk permitted.

I don't bother with anything else internally, if they're inside they deserve to SSH with my default root / password credentials

[–] Comfortable-Cause-81@alien.top 1 points 1 year ago (1 children)

ssh default port is 22.

Really, unless I'm trying to learn security (valid) or have something to protect, I just do the basic best practices.

Real security is an offline backup.

[–] PreppyAndrew@alien.top 1 points 1 year ago

The SSH port really doesn't matter. If it's an exposed SSH port, it will probably get picked up whether it's 69 or 22.

[–] calinet6@alien.top 1 points 1 year ago

The UDM’s regular built-in threat filtering, good firewall rules, updated services, and not opening up unnecessarily to the internet. Be vigilant, but don’t worry too much about it. That’s it.

[–] Daniel15@alien.top 1 points 1 year ago

If it's a Debian system, "Create user with sudo privileges" and "Disable root login" can be done during initial setup. Just leave the root password blank and it'll disable the root user and grant sudo permission to the regular user you create.

Create a separate management VLAN and use it for all your infra (web UIs of all your networking hardware, Proxmox, SSH for servers, etc).

For unattended upgrades, ensure the auto-updaters are properly configured so they're used ONLY for bug and security fixes, not for minor or major releases! Debian unattended-upgrades has good settings out-of-the-box, but you may want to add any custom repos you're using. Make sure you have an email relay server configured in the Exim config, as it uses apt-listchanges to email the changelogs to you.
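Both of those tweaks (custom repos and mailed reports) live in 50unattended-upgrades. A sketch, where the third-party origin is a made-up example (check the real value with apt-cache policy):

```
// /etc/apt/apt.conf.d/50unattended-upgrades
Unattended-Upgrade::Origins-Pattern {
    "origin=Debian,codename=${distro_codename},label=Debian";
    "origin=Debian,codename=${distro_codename}-security,label=Debian-Security";
    "origin=Docker,codename=${distro_codename}";    // example custom repo
};
Unattended-Upgrade::Mail "admin@example.com";
```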

But above all, press the power button to turn it off and then never turn it on again. 100% unhackable.

Lock and key, shotgun by the door

[–] AdderallBuyersClub2@alien.top 1 points 1 year ago

Change all root usernames and passwords to “toor”

Who is going to guess that? Not me.

[–] _DuranDuran_@alien.top 1 points 1 year ago

My homelab is in my garage - the storage array is the only thing I care about not losing, so I'm using ZFS encryption with Clevis + Tang, which means it needs to be on the home network and able to contact the Tang server to get the decryption keys.

[–] u35828@alien.top 1 points 1 year ago

Deny outside access to the core management interfaces. Ne'er-do-wells from the .cn domain trying to hack my router can fuck right off.

[–] billiarddaddy@alien.top 1 points 1 year ago

Non-standard ports.

SSH keys.

Web certificates.

[–] ellie288@alien.top 1 points 1 year ago

Also consider TCP Wrappers (hosts.allow/hosts.deny) and DenyHosts/fail2ban.
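fail2ban's SSH jail is only a few lines in a drop-in file. A sketch; the retry and ban values are arbitrary examples:

```
# /etc/fail2ban/jail.local
[sshd]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h
```

Restart fail2ban afterwards and check the jail with fail2ban-client status sshd.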

[–] WildestPotato@alien.top 1 points 1 year ago

Why has no one mentioned CIS hardening?

Easy: I keep it up to date, I have nothing exposed to the internet, and I lock the door :)

[–] gctaylor@alien.top 1 points 1 year ago

Hopes and prayers

[–] reviewmynotes@alien.top 1 points 1 year ago

You have a good list to start with. Consider adding sshguard or fail2ban in the short term and crowdsec in the long term. Also use lynis on Unix systems and PingCastle on AD systems and see what suggestions those make. Just a few suggestions off the top of my head.
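lynis in particular is a one-command audit. A sketch on Debian/Ubuntu:

```
sudo apt install lynis
sudo lynis audit system
# findings land in /var/log/lynis.log and /var/log/lynis-report.dat
```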

[–] Impressive-Cap1140@alien.top 1 points 1 year ago

Is there really any security benefit to not using default ports? Especially if the service is not open externally? I cannot find any official documentation that states you should be doing that.

[–] WillingLimit3552@alien.top 1 points 1 year ago

Disabling root login covers 99.9999 percent of it, as long as your box has only one or two obscure login accounts.

[–] dinosaurdynasty@alien.top 1 points 1 year ago

Honestly I just use a good firewall and forward_auth/authelia in caddy (so authentication happens before any apps) and it works well.

I also don't expose SSH to the public internet anymore (more laziness than anything; I have it semi-exposed via Yggdrasil and WireGuard), mostly because the SSH logs get annoying in journalctl -f.
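For anyone curious what that looks like, here is a forward_auth Caddyfile sketch based on the documented Caddy/Authelia pattern. Hostnames and the upstream app are placeholders, and newer Authelia releases use a different verify endpoint, so check its docs:

```
app.example.com {
    forward_auth authelia:9091 {
        uri /api/verify?rd=https://auth.example.com/
        copy_headers Remote-User Remote-Groups Remote-Name Remote-Email
    }
    reverse_proxy app:8080
}
```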

[–] Randommaggy@alien.top 1 points 1 year ago

Tailscale, expose nothing to the wider web if not actually needed.

[–] PreppyAndrew@alien.top 1 points 1 year ago

I know this is a feature in UniFi, but disabling access from countries with known bot farms (China, India, etc.), unless you need access to them.

[–] AnomalyNexus@alien.top 1 points 1 year ago

Opnsense firewall at perimeter...and that's about it. Chances of anything getting in with no exposed ports is pretty slim so I don't really bother with anything more.

For SSH-exposed servers/VPS I do change the port though. It cuts down log noise & maybe dodges the odd port scanner or two.
