|
|
|
| Just be mindful that any certs you issue in this way will be public information[1] so make sure the domain names don't give away any interesting facts about your infrastructure or future product ideas.
I did this at my last job as well, and I can still see them renewing certs in the logs, including an unfortunate wildcard cert that wasn't issued by me.
[1] https://crt.sh/ |
|
| It was the latest nginx at the time. I actually found a rather obscure issue on Github that touches on this problem, for those who are curious:
https://github.com/kubernetes/ingress-nginx/issues/1681#issu...

> We discovered a related issue where we have multiple ssl-passthrough upstreams that only use different hostnames. [...] nginx-ingress does not inspect the connection after the initial handshake - no matter if the HOST changes.

That was 5-ish years ago though. I hope there are better ways than the cert hack now. |
|
| There's a larger risk that if someone breaches a system with a wildcard cert, then you can end up with them being able to impersonate _every_ part of your domain, not just the one application. |
|
| I've considered building tools to manage decoy certificates (e.g. registering mail.example.com even if you don't have a mail server), but I couldn't justify polluting the cert transparency logs. |
|
| They are certainly different things, but they're not unrelated. The inability of the user to change the system trust store is part of why certificate pinning is no longer (broadly) recommended. |
|
| It's interesting that cert pinning cuts both ways, though. It can also be a tool that gives users power against the IT department (typically indistinguishable from malware). |
|
| I don’t understand the frustration. The use of .internal is explicitly for when you don’t want a publicly valid domain. Nobody is forcing anyone to use .internal otherwise. |
|
| Do you mean to say that your biggest frustration with HTTPS on .internal is that it requires a private certificate authority? Because I'm running plain HTTP to .internal sites and it works fine. |
|
| Try running anything more complicated than a plain and basic web server! See what happens if you attempt to serve something that browsers deem to require a mandatory "Secure Context": they will refuse to run it over plain HTTP.

For example, you won't be able to run internal videocalls (no access to webcams!) or a web page that scans QR codes. Here's the full list:

* https://developer.mozilla.org/en-US/docs/Web/Security/Secure...

A true hassle for internal testing between hosts, to be honest. I just cannot run an in-development video app on my PC and connect from a phone or laptop to do some testing without first worrying about certs, at a point in development where they are superfluous and a waste of time. |
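One low-ceremony workaround for this kind of LAN testing is a throwaway self-signed cert that you import on the test phone/laptop. A sketch, assuming OpenSSL 1.1.1+ for `-addext`; the hostname and IP are made-up examples:

```shell
# Throwaway 30-day self-signed cert for a dev box.
# "dev.internal" and 192.168.1.20 are example values, not real infrastructure.
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -keyout dev.key -out dev.crt \
  -subj "/CN=dev.internal" \
  -addext "subjectAltName=DNS:dev.internal,IP:192.168.1.20"
```

You still have to trust dev.crt on each client device, which is part of the hassle being complained about, but it is a one-time step per device rather than per build.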
|
| > One can resolve "localhost" (even via an upstream resolver) to an arbitrary IP address.

At least on my Linux system, "localhost" only seems to be specially treated by systemd-resolved (in a cursory attempt I didn't succeed in getting it to use an upstream resolver for it).

The secure context spec [1] addresses this: localhost should only be considered potentially trustworthy if the agent complies with specific name resolution rules that guarantee it never resolves to anything except the host's loopback interface.

[1] https://w3c.github.io/webappsec-secure-contexts/#localhost |
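A quick way to see what your own resolution stack does with the name (a sketch; this queries the NSS lookup path, not a DNS server directly, and output varies by system):

```shell
# On glibc systems this follows /etc/nsswitch.conf, so it reflects what most
# local programs (not the browser's own rules) actually get for "localhost".
getent hosts localhost
```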
|
| No. The concept of a DMZ died decades ago. You could still be MITM'd within your company intranet. Any system designed these days should follow zero-trust principles. |
|
| Echelon was known about before Google was even a thing. I remember people adding Usenet headers with certain keywords. Wasn’t much, but it was honest work. |
|
| > Lots of organizations struggle to fully set up trust for the private CA on all internal systems.
Made worse by the fact that phone OSes have made it very difficult to install CAs. |
|
| This is why I'm using a FQDN for my home lab. I'm not going to set up a private CA for this; I can just use ACME-dns and get a cert that will work everywhere, for free! |
|
| Presumably you don't trust the CA that signed the certificate on the server at the company you're visiting. As long as you heed the certificate error and don't visit the site, you're fine. |
|
| Now suppose you are a contractor who did some work for company A, then went to do some work for company B, and still have some cookies set from A's internal site. |
|
| IPv6 solves this, as you are strongly recommended to use a random component at the top of the internal reserved space, so the chance of a collision is quite low. |
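For reference, the reserved space here is the RFC 4193 ULA range: prefixes start with fd00::/8 and the next 40 bits are supposed to be chosen at random. A sketch of generating one:

```shell
# Build a random RFC 4193 ULA /48: "fd" followed by 40 random bits (10 hex chars).
hex=$(od -An -N5 -tx1 /dev/urandom | tr -d ' \n')
echo "fd$(echo "$hex" | cut -c1-2):$(echo "$hex" | cut -c3-6):$(echo "$hex" | cut -c7-10)::/48"
```

With 2^40 possible global IDs, two networks picking independently at random collide with negligible probability, which is exactly the point being made above.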
|
| But then you still need a private CA (a public one is going to resolve the domain correctly and find you don't control it), so you may as well have used .internal? |
|
Additionally, how do you define "publish"?
When someone embeds https://test.internal with cert validation turned off (rather than fingerprint pinning or setting up an internal CA) in their mobile application, that client will greedily accept whatever response is provided by their local resolver... Correct or malicious. |
|
| > Home routers can simply assign pi into e.g. pi.home when doing dhcp. Then you can "ping pi" on all systems. It fixes everything - for that reason alone these reserved TLDs are, imo, useful. Unfortunately I've never seen a router do this, but here's hoping.

dnsmasq has this feature, and I think it's commonly available in alternative router firmware. On my home network, I set up https://pi-hole.net/ for ad blocking, and it uses dnsmasq too. So, as my network's DHCP + DNS server, it automatically adds DNS entries for the DHCP leases it hands out. There are undoubtedly other options, but these are the two I've worked with. |
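For anyone who wants to replicate this, the relevant dnsmasq settings look roughly like the following (a sketch; the "home" domain and the address range are examples):

```conf
# dnsmasq.conf sketch: serve DHCP and automatically publish each
# client's hostname under .home.
domain=home                                 # append .home to DHCP client names
expand-hosts                                # also qualify plain names from /etc/hosts
local=/home/                                # never forward .home queries upstream
dhcp-range=192.168.1.100,192.168.1.200,12h  # example lease pool
```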
|
| > it's helpful to have that flexibility in the future
On the contrary, it is helpful to make this impossible. Otherwise you invite leaking private info through a configuration mistake. |
|
| It's reserved per RFC 6762:

> This document specifies that the DNS top-level domain ".local." is a special domain with special semantics, namely that any fully qualified name ending in ".local." [...]

https://datatracker.ietf.org/doc/html/rfc6762

Applications can/will break if you attempt to use .local outside of mDNS (systemd-resolved, for example). Don't get upset when this happens.

Interesting fact: RFC 6762 predates Kubernetes (one of the biggest .local violators); they should really change the default domain... |
|
| But that's an IETF standard, not an ICANN policy. AFAIK there's nothing in place today that would _prevent_ ICANN from granting .local to a registry other than it just being a bad idea. |
|
| It does! I generally assume mDNS to just be available on every device these days. But I've also seen managed environments where mDNS has been turned off or blocked at the firewall. |
|
| Ever since this kind of stuff was introduced, I've been annoyed that there's no way to disable it for yourself. And it has allowed for straight-up evil stuff like Google buying the .dev TLD. |
|
| https://www.ietf.org/archive/id/draft-davies-internal-tld-00...
> There are certain circumstances where private network operators may wish to use their own domain naming scheme that is not intended to be used or accessible by the global domain name system (DNS), such as within closed corporate or home networks. The "internal" top-level domain is reserved to provide this purpose in the DNS. Such domains will not resolve in the global DNS, but can be configured within closed networks as the network operator sees fit. This reservation is intended for a similar purpose that private-use IP address ranges that are set aside (e.g. [RFC1918]). |
|
| 1. Buy .intern TLD
2. Sell to scammers.
3. Profit.

(I want to appreciate how hard it probably is for ICANN to figure out proper TLDs.) |
|
| Either pin the appropriate server cert in each application, or run your own internal CA (scoped to that domain via name constraints) and deploy the root cert to all client machines. |
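The name-constrained root mentioned above can be sketched with OpenSSL (assuming 1.1.1+ for `-addext`; the CA name and domain are made-up examples, and clients vary in how strictly they enforce name constraints, so test yours):

```shell
# Root CA whose certificates are only valid under *.corp.internal,
# limiting the blast radius if the CA key ever leaks. Names are examples.
openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
  -keyout ca.key -out ca.crt \
  -subj "/CN=Example Corp Internal CA" \
  -addext "nameConstraints=critical,permitted;DNS:.corp.internal"
```

The `critical` flag matters: a client that doesn't understand name constraints must then reject the cert rather than silently ignore the restriction.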
|
| I just got myself a proper domain name. You can get a domain for pretty cheap if you're not picky about what you get. You could for example register cottagecheese.download on Cloudflare for about $5/year right now.
I have my domain's DNS on Cloudflare, so I can use DNS verification with Let's Encrypt to get myself a proper certificate that works on all of my devices. Then I just have Cloudflare DNS set up with a bunch of CNAME records to .internal addresses.

For example, if I needed to set up a local mail server, I'd set mail.cottagecheese.download to have a CNAME record pointing to localserver.internal and then have my router resolve localserver.internal to my actual home server's IP address. So if I punch in https://mail.cottagecheese.download in my browser, the browser resolves that to localserver.internal, and then my router resolves that to 10.x.x.x/32, sending me to my internal home server, which greets me with a proper Let's Encrypt certificate without any need to expose my internal IP addresses.

Windows doesn't seem to like my CNAME-based setup though. Every time I try to use it, it's a dice roll whether it actually works. |
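The router-side half of that setup, if the router happens to run dnsmasq, is a one-line override (a sketch; the hostname matches the example above and the IP is invented):

```conf
# Answer localserver.internal locally instead of forwarding it upstream.
address=/localserver.internal/10.0.0.5
```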
Using a publicly valid domain offers a number of benefits, like being able to use a free public CA such as Let's Encrypt. Every machine will trust your internal certificates out of the box, so there is minimal toil.
Last year I built getlocalcert [1] as a free way to automate this approach. It allows you to register a subdomain, publish TXT records for ACME DNS certificate validation, and use your own internal DNS server for all private use.
[1] https://www.getlocalcert.net/