this post was submitted on 13 Jul 2023
55 points (100.0% liked)

Technology


This webpage provides instructions for using the acme-dns DNS challenge method with various ACME clients to obtain HTTPS certificates for private networks. Caddy, Traefik, cert-manager, acme.sh, lego, and Certify The Web are listed as ACME clients that support acme-dns. For each client, configuration examples show how to set API credentials and other settings to use the acme-dns service at https://api.getlocalcert.net/api/v1/acme-dns-compat to obtain certificates. It's interesting that so many ACME clients support acme-dns, since it provides an easy way to obtain HTTPS certificates for private networks.
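As a sketch of what the acme.sh setup described above might look like: acme.sh's `dns_acmedns` hook reads its settings from environment variables, and the compat endpoint above is used as the acme-dns base URL. The credentials and domain below are placeholders, not values from the article.

```shell
# Register a (sub)domain with getlocalcert first, then export the
# credentials it returns (all values below are placeholders).
export ACMEDNS_BASE_URL="https://api.getlocalcert.net/api/v1/acme-dns-compat"
export ACMEDNS_USERNAME="<username from getlocalcert>"
export ACMEDNS_PASSWORD="<password from getlocalcert>"
export ACMEDNS_SUBDOMAIN="<subdomain from getlocalcert>"

# Issue a certificate via the DNS-01 challenge; the host never needs
# to accept inbound connections from the internet.
acme.sh --issue --dns dns_acmedns -d myhost.example-anon-domain.net
```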

HN https://news.ycombinator.com/item?id=36674224

seiferteric: Proposes an idea for automatically creating trusted certificates for new devices on a private network.

hartmel: Mentions SCEP which allows automatic certificate enrollment for network devices.

mananaysiempre: Thinks using EJBCA for this, as hartmel suggested, adds unnecessary complexity.

8organicbits: Describes a solution using getlocalcert which issues certificates for anonymous domain names.

austin-cheney: Has a solution using TypeScript that checks for existing certificates and creates them if needed, installing them in the OS and browser.

bruce511: Says automating the process is possible.

lolinder: Mentions Caddy will automatically create and manage certificates for local domains.

frfl: Uses Lego to get a Let's Encrypt certificate for a local network website using the DNS challenge.

donselaar: Recommends DANE which works well for private networks without a public CA, but lacks browser support.
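The Lego approach frfl describes might look roughly like this; Cloudflare is used here only as an example DNS provider (lego supports many), and the token and domain are placeholders.

```shell
# Credentials for the example DNS provider (Cloudflare).
export CLOUDFLARE_DNS_API_TOKEN="<api token>"

# DNS-01 challenge: lego proves domain control by publishing a TXT
# record, so the private-network host stays unreachable from outside.
lego --email admin@example.com \
     --dns cloudflare \
     --domains "*.home.example.com" \
     run
```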

top 27 comments
[–] thedaly@reseed.it 7 points 1 year ago (3 children)

Big fan of letsencrypt’s certbot with the nginx and cloudflare (or other dns providers) plugins.
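The certbot + Cloudflare DNS plugin workflow mentioned here is roughly the following; the domain and credentials path are placeholders.

```shell
# ~/.secrets/cloudflare.ini should contain a line like:
#   dns_cloudflare_api_token = <token>
# and be readable only by you (chmod 600).
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials ~/.secrets/cloudflare.ini \
  -d "internal.example.com"
```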

Is there any reason to use caddy or traefik over nginx?

[–] lchapman@programming.dev 7 points 1 year ago (1 children)

Caddy takes almost all of the nginx boilerplate and handles it for you.

If you’re doing something simple in nginx, it’s far simpler with Caddy.
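For comparison, a complete Caddy setup for a simple reverse proxy can be this small (hostname and upstream port are placeholders); certificate issuance and renewal happen automatically with no further configuration.

```shell
# A complete Caddyfile: automatic HTTPS is implicit.
cat > Caddyfile <<'EOF'
example.com {
    reverse_proxy localhost:8080
}
EOF

caddy run --config Caddyfile
```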

[–] robotrash@lemmy.robotra.sh 6 points 1 year ago (2 children)

What if I'm using NGINX Proxy Manager which gives me a GUI for my dumbness?

[–] lchapman@programming.dev 4 points 1 year ago

Stick with it, sounds like you’ve got a system that works for you

[–] LedgeDrop@lemm.ee 4 points 1 year ago

I found traefik to be a more feature-rich load balancer when used in Kubernetes environments. Outside of Kubernetes, I'd say if you're happy with nginx, keep using nginx :)

[–] steltek@lemm.ee 1 points 1 year ago

I haven't tried it yet but I vaguely recall traefik had a better proxy-auth setup while nginx locked it away behind their freemium plan.

[–] GameGod 6 points 1 year ago (1 children)

IMHO all these approaches are convoluted and introduce way too many components (SPOFs) to solve the problem. They're "free" but they come at the cost of maintaining all this extra infrastructure. And don't forget that certificate transparency logs mean every internal DNS name you request a Let's Encrypt certificate for will be published publicly. (!)

An alternative approach is to set up your own internal certificate authority (CA), which you can do in a couple minutes with step-ca. You then just deploy your CA root cert to all the machines on your network and can get certs whenever you need. If you want to go the extra mile and set up automatic renewal, you can do that too, but it's overkill for internal use IMHO.

Using your own CA introduces only a single new software component, and it doesn't require high availability to be useful...
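The step-ca workflow this comment describes boils down to a few commands; the hostname below is a placeholder, and the root certificate filename comes from wherever `step ca init` writes it on your system.

```shell
# One-time CA setup (interactive: prompts for names and passwords).
step ca init

# On each machine that should trust the CA, install the root cert
# into the system (and browser) trust stores.
step certificate install root_ca.crt

# Request a leaf certificate for an internal host from the running CA.
step ca certificate "git.internal.lan" git.crt git.key
```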

[–] abhibeckert 1 points 1 year ago (1 children)

Unfortunately these days internal CAs aren't always trusted. We have one where I work, and hundreds of times a day people have to click through "I understand the risks, proceed anyway" alert prompts.

Which makes me really uncomfortable - I fear one day someone will blindly click past a warning about an actual malicious certificate.

[–] TemporalSoup 1 points 1 year ago

It kills me that companies seem to willingly train their users to ignore warnings and signs that something is amiss.

"Yeah, all our emails from that vendor come with the external email warning, just ignore it"

[–] zergling_man@lemmy.perthchat.org 5 points 1 year ago (1 children)
[–] dan@upvote.au 14 points 1 year ago (1 children)

Because you might want to use HTTPS on a server that's not accessible externally. Some browser features only work over HTTPS.

[–] zergling_man@lemmy.perthchat.org 1 points 1 year ago (4 children)

Sounds like a bad browser.

[–] jarfil 14 points 1 year ago* (last edited 1 year ago) (1 children)

Good browsers don't let random unauthenticated content do whatever it wants on either the local machine or the network.

HTTPS is also the only way to use client-side certificates for strong two-way authentication and zero-trust setups.

[–] zergling_man@lemmy.perthchat.org 2 points 1 year ago (1 children)

Good browsers don't let random unauthenticated content do whatever it wants on either the local machine or the network.

So, lynx?

zero-trust setups. private networks

[–] jarfil 4 points 1 year ago (1 children)

lynx, no-script... it's all fine until some site requires JavaScript no matter what, which nowadays seems to be most of them; then it's a game of whom to trust.

Private networks are usually an oxymoron: they're only as private as the WiFi router, or whoever clicks the wrong malicious link, allows them to be. Zero-trust mitigates that, instead of blindly relying on perimeter defenses and trusting anyone who manages to bypass them.

[–] zergling_man@lemmy.perthchat.org 1 points 1 year ago (1 children)

This is your brain on webshit.

[–] jarfil 1 points 1 year ago

You may want to rephrase that?

[–] dan@upvote.au 5 points 1 year ago (1 children)

Every browser implements these limitations, as they're part of the web platform. Some examples are service workers, web crypto, HTTP/2, webcam, microphone, geolocation, and more. There's a list here: https://developer.mozilla.org/en-US/docs/Web/Security/Secure_Contexts/features_restricted_to_secure_contexts

[–] zergling_man@lemmy.perthchat.org 1 points 1 year ago (2 children)

Sounds like a bad browser.

[–] dan@upvote.au 2 points 1 year ago (1 children)

Every browser does this. It's intentional to push people towards using encrypted connections, especially for PII like geolocation.

Sounds dystopian. I still won't feel bad for normies.

[–] abhibeckert 1 points 1 year ago

So Chrome, Firefox, Edge, Safari, Opera, and every other browser I've ever heard of are all "bad browsers" in your opinion?

[–] abhibeckert 1 points 1 year ago* (last edited 1 year ago)

For example, my browser won't auto-fill a credit card without a valid HTTPS connection. And as someone who does QA on payment pages, I find myself typing out the standard VISA test card number 4200 0000 0000 0000[tab]12/34[tab]123 about a thousand times a day. Every ten minutes or so I type the wrong number of zeros and have to go back and try again. With a working HTTPS connection, the browser will fill it out for me. So much better.

[–] upstream 1 points 1 year ago

Plenty of non-browser related reasons to want HTTPS in your own network.

Whether you need it or should use it depends on your system architecture and level of paranoia.

For instance we’re running all our stuff in a virtualized Linux environment on-premise on our own hardware. There’s a firewall zone from the outside and in, several zones for different applications.

We terminate SSL at the edge and use port 80 for anything internal that’s HTTP.

While that opens us up to internal eavesdropping, my argument is that anyone that deep in our system will have compromised everything anyway.

On the other hand, it allows our firewall to do application filtering, including killing bad (as in bad-faith) incoming requests.

The only caveat is that some of our external pen-testers think they've found a DoS scenario in our application, when all that happens is that the firewall drops the connection.

If I was routing traffic over a shared network or multiple sites I’d definitely employ HTTPS.

All this said, I'm sure someone smarter than me has written better opinions on the topic.

[–] pe1uca@lemmy.pe1uca.dev 4 points 1 year ago

Yep, caddy was as easy as using xcaddy with the module for my DNS provider, configuring the key, and running caddy. That's it xD.

For what lolinder mentioned in the linked thread, you need to have port 80 open.
If you don't want that, you could configure a local certificate authority, but that'll give the warning of a self-signed certificate.
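The xcaddy + DNS-module setup described here might look like this sketch; Cloudflare is just an example provider module, and the hostname, upstream port, and token variable are placeholders.

```shell
# Build Caddy with a DNS provider module (Cloudflare as an example).
xcaddy build --with github.com/caddy-dns/cloudflare

# Solve the DNS-01 challenge through the provider's API, so ports
# 80/443 never need to be reachable from outside the network.
cat > Caddyfile <<'EOF'
internal.example.com {
    tls {
        dns cloudflare {env.CF_API_TOKEN}
    }
    reverse_proxy localhost:3000
}
EOF
```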

[–] xthexder@l.sw0.com 4 points 1 year ago

Personally I use dnsrobocert with my own domains. I've got a few subdomains that point to a Wireguard subnet IP for private network apps (so they resolve to nothing if you're not on the VPN). Having a real, valid SSL cert is really nice compared to self-signing, and it keeps my browser with HTTPS-Everywhere happy.