Big fan of letsencrypt’s certbot with the nginx and cloudflare (or other dns providers) plugins.
Is there any reason to use caddy or traefik over nginx?
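For reference, the DNS-challenge flow with certbot's Cloudflare plugin is roughly one command (domain and credentials path are placeholders, and the `certbot-dns-cloudflare` plugin has to be installed):

```shell
# Sketch: issue a wildcard cert via the DNS-01 challenge with Cloudflare.
# ~/.secrets/cloudflare.ini holds the API token; chmod 600 it.
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials ~/.secrets/cloudflare.ini \
  -d 'example.com' -d '*.example.com'
```

The nice part of the DNS challenge is that the host never needs port 80/443 reachable from the internet.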
Caddy takes almost all of the nginx boilerplate and handles it for you.
If you’re doing something simple in nginx, it’s far simpler with Caddy.
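To illustrate, a complete reverse proxy with automatic HTTPS can be a Caddyfile this short (hostname and port are made up):

```
app.example.com {
    reverse_proxy localhost:8080
}
```

Caddy obtains and renews the certificate itself; the nginx equivalent needs a server block, cert paths, and a separate certbot timer.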
What if I'm using NGINX Proxy Manager which gives me a GUI for my dumbness?
Stick with it, sounds like you’ve got a system that works for you
You win
I found traefik to be a more feature-rich load balancer when used in Kubernetes environments. Outside of Kubernetes, I'd say if you're happy with nginx, keep using nginx :)
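For a sense of what that looks like, here's a sketch of a traefik route in Kubernetes, assuming the traefik CRDs are installed; all names are hypothetical:

```
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: myapp                        # hypothetical name
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`myapp.example.com`)
      kind: Rule
      services:
        - name: myapp                # hypothetical Service
          port: 80
  tls:
    certResolver: letsencrypt        # assumes a certResolver configured in traefik's static config
```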
I haven't tried it yet but I vaguely recall traefik had a better proxy-auth setup while nginx locked it away behind their freemium plan.
IMHO all these approaches are convoluted and introduce way too many components (SPOFs - single points of failure) to solve the problem. They're "free" but come at the cost of maintaining all this extra infrastructure, and don't forget that certificate transparency logs mean every internal DNS name you request a Let's Encrypt certificate for will be published publicly. (!)
An alternative approach is to set up your own internal certificate authority (CA), which you can do in a couple minutes with step-ca. You then just deploy your CA root cert to all the machines on your network and can get certs whenever you need. If you want to go the extra mile and set up automatic renewal, you can do that too, but it's overkill for internal use IMHO.
Using your own CA introduces only a single new software component and it doesn't require high availability to be useful....
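A rough sketch of the step-ca flow described above (hostnames and filenames are placeholders):

```shell
# One-time: initialize the CA (interactive; creates root + intermediate),
# then trust the root cert on each machine on the network.
step ca init
step certificate install root_ca.crt

# Per service: request a cert for an internal hostname from your CA.
step ca certificate svc.internal svc.crt svc.key
```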
Unfortunately these days internal CAs aren't always trusted. We have one where I work, and hundreds of times a day people have to click through "I understand the risks, proceed anyway" alert prompts.
Which makes me really uncomfortable - I fear one day someone will blindly click past a warning about an actual malicious certificate.
It kills me that companies seem to willingly train their users to ignore warnings and signs that something is amiss.
"Yeah, all our emails from that vendor come with the external email warning, just ignore it"
But why
Because you might want to use HTTPS on a server that's not accessible externally. Some browser features only work over HTTPS.
Sounds like a bad browser.
Good browsers don't let random unauthenticated content do whatever it wants on either the local machine or the network.
HTTPS is also the only way to use client-side certificates for strong two-way authentication and zero-trust setups.
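For the two-way-auth case, the server side in nginx is a short config fragment (cert paths are placeholders):

```
server {
    listen 443 ssl;
    ssl_certificate     /etc/ssl/server.crt;
    ssl_certificate_key /etc/ssl/server.key;

    # Require a client certificate signed by our own CA
    ssl_client_certificate /etc/ssl/client_ca.crt;
    ssl_verify_client on;
}
```

Clients without a valid cert from that CA are rejected during the TLS handshake, before any request reaches the application.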
Good browsers don't let random unauthenticated content do whatever it wants on either the local machine or the network.
So, lynx?
zero-trust setups

Private networks.
lynx, NoScript... it's all fine until some site needs JavaScript no matter what, which nowadays seems to be most of them; then it's a game of whom to trust.
Private networks are usually an oxymoron: they're only as private as the WiFi router, or the first person who clicks the wrong malicious link. Zero-trust mitigates that, instead of blindly relying on perimeter defenses and trusting anyone who manages to bypass them.
This is your brain on webshit.
You may want to rephrase that?
Every browser implements these limitations, as they're part of the web platform. Some examples are service workers, web crypto, HTTP/2, webcam, microphone, geolocation, and more. There's a list here: https://developer.mozilla.org/en-US/docs/Web/Security/Secure_Contexts/features_restricted_to_secure_contexts
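A quick way to see this from page code is the `isSecureContext` flag; here's a small sketch where `w` stands in for the browser's `window` (in Node there is none, hence the guard):

```javascript
// Sketch: guard code that needs secure-context-only APIs.
function secureContextStatus(w) {
  if (!w) return "no window";
  return w.isSecureContext ? "secure" : "insecure";
}

// Over HTTPS (or on localhost) browsers report a secure context,
// which is what unlocks crypto.subtle, service workers, etc.
console.log(secureContextStatus({ isSecureContext: true }));  // → secure
console.log(secureContextStatus({ isSecureContext: false })); // → insecure
```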
Sounds like a bad browser.
Every browser does this. It's intentional to push people towards using encrypted connections, especially for PII like geolocation.
Sounds dystopian. I still won't feel bad for normies.
So, Chrome, Firefox, Edge, Safari, Opera, and every other browser I've ever heard of are all "bad browsers" in your opinion?
For example, my browser won't auto-fill a credit card without a valid HTTPS connection. And as someone who does QA on payment pages, I find myself typing out the standard VISA test card number 4200 0000 0000 0000[tab]12/34[tab]123
about a thousand times a day. Every ten minutes or so I type the wrong number of zeros and have to go back and try again. With a working HTTPS connection, the browser will fill it out for me. So much better.
Plenty of non-browser-related reasons to want HTTPS in your own network.
Whether you need it (or should use it) depends on your system architecture and level of paranoia.
For instance, we’re running all our stuff in a virtualized Linux environment on-premise on our own hardware. There’s a firewall zone between the outside and in, and several zones for different applications.
We terminate SSL at the edge and use port 80 for anything internal that’s HTTP.
While that opens us up to internal eavesdropping, my argument is that anyone that deep in our system will have compromised everything anyway.
On the other hand it allows our firewall to do application filtering, including killing bad (as in faith) incoming requests.
The only caveat is that some of our external pen-testers think they’ve found a DoS scenario in our application when all that happens is that the firewall drops the connection.
If I was routing traffic over a shared network or multiple sites I’d definitely employ HTTPS.
All this said, I’m sure someone smarter than me has written better opinions on the topic.
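The edge-termination setup described above, as a minimal nginx sketch (hostnames, IPs, and paths are all placeholders):

```
# Edge: terminate TLS here, forward plain HTTP into the internal zone
server {
    listen 443 ssl;
    server_name app.example.com;
    ssl_certificate     /etc/ssl/edge.crt;
    ssl_certificate_key /etc/ssl/edge.key;

    location / {
        proxy_pass http://10.0.1.10:80;           # internal app zone, plain HTTP
        proxy_set_header X-Forwarded-Proto https; # so the app knows the client used TLS
    }
}
```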
Yep, Caddy was as easy as using xcaddy with my DNS provider's module, configuring the API key, and running caddy. That's it xD.
For what lolinder mentioned in the news link, you need to have port 80 open.
If you don't want that, you could configure a local certificate authority, but that'll give you the self-signed certificate warning.
Personally I use dnsrobocert with my own domains. I've got a few subdomains that point to a Wireguard subnet IP for private network apps (so they resolve to nothing if you're not on the VPN). Having a real valid SSL cert is really nice vs self-signing, and it keeps my browser with HTTPS-Everywhere happy.