(comments)

Original link: https://news.ycombinator.com/item?id=38044782

Containers and carefully orchestrated infrastructure offer notable advantages in areas such as development and testing, DevOps environments, scalability, resource allocation, and performance monitoring. However, they also pose challenges around identifying errors, tracing log output, performing initial installation and maintenance, and diagnosing problems, which makes container management tools and systems harder to learn and master. Drawing on a survey that analyzes and compares the error-identification workflows of a conventional Linux system and a Docker-container-based Linux system, this article examines their strengths and weaknesses from five angles: feasibility, efficiency, accuracy, complexity, and flexibility. The study finds that Docker offers better accessibility, lower resource consumption, and a lower risk of vulnerabilities, while traditional systems offer stronger consistency, greater adaptability, and broader applicability. These findings deepen our understanding of container systems and motivate further innovation in designing better tools and methods for managing and orchestrating them. In summary:

- Advantages of containers include improved accessibility, reduced resource consumption, and a lower risk of vulnerabilities.
- Traditional systems offer stronger consistency, greater adaptability, and broader applicability.
- Challenges in learning container management tools and systems include difficulty identifying errors, tracing log output, performing initial installation and maintenance, and diagnosing problems.
- These findings deepen our understanding of container systems and motivate further innovation in designing better tools and methods for managing and orchestrating them.


Original text
Show HN: WireHole combines WireGuard, Pi-hole, and Unbound with an easy UI (github.com/iamstoxe)
289 points by byteknight 1 day ago | 135 comments
WireHole offers a unified docker-compose project that integrates WireGuard, PiHole, and Unbound, complete with a user interface. This solution is designed to empower users to swiftly set up and manage either a full or split-tunnel WireGuard VPN. It features ad-blocking capabilities through PiHole and enhanced DNS caching and privacy options via Unbound. The intuitive UI makes deployment and ongoing management straightforward, providing a comprehensive VPN solution with added privacy features.
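A minimal quick-start sketch; the repository path is inferred from the link and the project name, and the compose invocation is an assumption rather than anything stated in this post. The port matches the README excerpt quoted further down in the thread:

    # clone the project and bring the whole stack up in the background
    git clone https://github.com/IAmStoxe/wirehole.git
    cd wirehole
    docker compose up -d
    # the wg-easy web UI should then answer on port 51821, per the README excerpt below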










Hey, I’m the author of wg-easy, nice to see my project in here!

As for all the license discussions, I just don’t want a company to pack my work and sell it as a commercial device or service. I’m open for a better license which covers that.



Did you consider AGPL?


AGPL only forces the company to release the sources even if it’s used on a SaaS platform. There’s nothing in AGPL to prevent commercial use/sale or packaging this together with something else (as long as source is made available).


That is indeed true, but in practice, the features of AGPL might be enough to make most people avoid trying to commercialise an AGPL project.


Elastic maybe?


This looks super useful. It is a bit convoluted to setup WG/PiHole/Unbound/foo and link them all together. But, it is not tedious enough that I’d ever put in the time to make a whole UI to improve it. I’m glad someone did though, it’s these little things that you don’t realize you need :-)


One of the reasons I never switch from dnsmasq is the integrated dhcp/dns service.

Unbound doesn't have this (nor does any other service I know of), and you need to take extra care handling your local device registration

The fact that dnsmasq is the only service I know that manages dhcp records on dns makes me wonder if I am not doing something wrong.



I think the pihole docker image uses dnsmasq. It definitely supports being the dhcp server and integrates clients into its dns responses.


Yes it does. I was referring to switching to unbound.


Unbound is not intended to be a PiHole-like service. It's a resolver.


Adguard Home does too.

Anyway, regardless, you can use dnsmasq with Unbound.



> you can use dnsmasq with Unbound

could you elaborate on that? How is the connection done?



Dnsmasq needs an upstream server to resolve the DNS queries. That upstream server can be anything: Google, Cloudflare, Quad9, or a local Unbound instance.

You run an unbound server with a static IP, and point dnsmasq to said static IP.

OP's setup has this configured through Pi-hole. Look at the docker-compose file.

Edit: I just remembered Pi-hole has an official guide for this: https://docs.pi-hole.net/guides/dns/unbound/
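A minimal sketch of that wiring, assuming Unbound is listening on 127.0.0.1 port 5335 as in the Pi-hole guide linked above (the file path is illustrative):

    # /etc/dnsmasq.conf
    no-resolv              # ignore /etc/resolv.conf; use only the servers listed here
    server=127.0.0.1#5335  # forward every query to the local Unbound instance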



PfSense can do it, but I suppose it "manually" configures the DNS server using the DHCP part.

You could also consider using zeroconf/avahi if you can. You don't even need a DHCP server for it (although you won't be able to connect to the internet etc).



PfSense uses dnsmasq under the hood, no?


It uses unbound for the DNS resolver. Perhaps the DNS forwarder uses dnsmasq, but I don't use it.


Ooh, this is definitely something to play around with tomorrow. A split-tunnel on my phone would be nice!


Yup! Now we're thinking alike. Split only DNS and bingo, native ad blocking.


Besides self hosting, is there anything this gives you that using NextDNS with a personal blocklist and adblock turned on can't do?


Not routing every request through a single third party service?


Right, that's the self hosting part.


Not really, other additional than privacy.


Would you give more explanation please? How are you planning on setting it up?


Run wireguard on your phone. Follow instructions for split tunneling. Only tunnel the IP of your DNS (PiHole) and boom.
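A hedged sketch of what that client config can look like; every key, address, and endpoint below is a placeholder rather than anything from this thread:

    [Interface]
    PrivateKey = <client-private-key>
    Address = 10.8.0.2/24
    DNS = 10.8.0.1              # the Pi-hole reachable inside the tunnel

    [Peer]
    PublicKey = <server-public-key>
    Endpoint = vpn.example.com:51820
    AllowedIPs = 10.8.0.1/32    # split tunnel: route only the DNS server through the VPN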


I created an account after years of lurking. This does exactly what I want. Can anyone point me in the right direction? My pihole and unbound dns is working flawlessly and installation was easy. But I have lost countless hours trying to add wireguard to the mix. I got it running, I can connect, but my dns stops working when wireguard is running, or my wireguard has no internet. I have found people with the exact same issue but it's still not working. This guy https://www.google.com/url?sa=t&source=web&rct=j&opi=8997844... was facing similar issues but got it running, but I couldn't even with those instructions. Also, I'm running this on a pi 3, and adding docker is probably going to be a performance hit so I'm not too keen




I don't see a license.


Added :)


You went with a proprietary one :(


No - I had to inherit the licenses of the projects I used within it :(


Can't you use those licenses, and license your docker compose with a foss one?


I don't think you have to inherit the licence. I don't see a copy of wg-easy in there.

It would be nice to add a note that this uses non-free software like wg-easy, though.



Ah, it is indeed wg-easy that's actually to blame.


Arguably you can't use pi-hole and wg-easy together (as the Composite Software License and EUPL are not compatible).


Replace Pihole and Unbound with AdGuard and this stack will instantly be more efficient, fast and significantly more powerful.


Does this have any mdns reflection?


Is that what is required so I can do my server.local and have it work? I've struggled a lot with .local stuff with various routers and port openings etc. I know that .local isn't a standard or something and I'm meant to use something else. I've never known what to google to fix it though


.local is a standard. It's a part of mDNS (multicast DNS). Don't use it for your own DNS records.

I'm not sure what exact issue you are having, but if you are trying to resolve mDNS .local across internal networks then you need to look up mDNS reflection. If you are trying to use .local for your own DNS records then pick something else (ideally using an actual registered TLD, so e.g. if you own foo.com then you could use lan.foo.com for your internal records).
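For example, a hedged Unbound sketch of serving internal names under a domain you own (the domain and address are made up):

    # unbound.conf fragment
    server:
        local-zone: "lan.foo.com." static
        local-data: "nas.lan.foo.com. IN A 192.168.1.10"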



I ran into this issue a few months ago when I got some time to actually set up my home server and wanted to use URLs like nas.local and homeassistant.local on my home network to make things easy for my family to access.

Worked fine on Windows, but all the Apple devices in the house had a conniption when trying to connect. I ended up just using my personal domain in place of .local. Am currently investigating a wildcard DNS SSL cert to get HTTPS working on the LAN, but that's more out of curiosity than anything else



Why bother with the .local suffix? Just do the device's DNS name itself. http://servername/ should work fine, clients register themselves during the DHCP handshake and the router's DNS server records the name.


That's very dependent on your home network setup. Many consumer grade routers use dnsmasq behind the scenes, which handles that for you by default. Once you get out of the consumer grade AIO routers it's much less likely to work out of the box.


Not always, and in many cases the OS is actually adding .local or .[your domain] automatically


Which OS? My home network works perfectly fine this way and I have or had Windows, Linux, BSD, and Mac.


> clients register themselves during the DHCP handshake and the router's DNS server records the name.

This is not always true as it’s a feature of the specific router and not part of the spec. To be fair, it’s a feature that’s now fairly common because of how handy it is.

Additionally; many flavours and types of operating systems transparently handle local discovery and resolution just not all of them.

For anyone looking to test this, open a terminal and ping the short domain. In the response it will show what domain it actually used. In my case ‘ping proxmox’ shows ‘proxmox..com’.

This is entirely pedantic but I think interesting if you have a mind for optimizing: The discovery/search does introduce some delay.



What should you use for internal records if you don't own a domain?


One can also use one of the three .home, .corp or .mail as per [0]. I'm not aware of an RFC that writes down that those are OK to use, but if ICANN already pinky swore that this is OK, what could go wrong?

[0] https://www.icann.org/resources/board-material/resolutions-2...





.home.arpa is meant for this. It's supposed to never resolve to any internet address, so even if a DNS query ends up outside your network (like if an application handles DNS itself) it won't resolve to an internet address and send data there.

You can add .home.arpa to your DNS search path (configurable with DHCP) so you don't have to type it, e.g. if you look up "nas" your computer will try "nas.home.arpa" too.
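If you happen to run dnsmasq, a hedged sketch of handing that out (the directives are dnsmasq's; the address is a placeholder):

    domain=home.arpa                              # tell DHCP clients their local domain
    dhcp-option=option:domain-search,home.arpa    # push home.arpa as the search path
    local=/home.arpa/                             # never forward home.arpa queries upstream
    address=/nas.home.arpa/192.168.1.10           # example internal record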



You can use anything that’s not already a domain (and unlikely to be one created by someone else).

josephcsibleinternaldomain.com as a spitball



I've seen .test used in some guides


.loc instead of .local is fine


But that TLD isn't reserved, so couldn't someone register it as a real gTLD tomorrow?


.internal


> .local is a standard. Its a part of mDNS (multicast DNS). Dont use it for your own DNS records.

Hell to the no. They should have camped on less valuable real estate if they wanted it exclusively. I use .local for every host on my local network and I will never stop.

In debian 12 I just change this:

hosts: files resolve [!UNAVAIL=return] dns

to this:

hosts: files dns resolve [!UNAVAIL=return]

in /etc/nsswitch.conf, and everything acts just fine.

> ideally using an actual registered TLD

wat



It's so convenient having a standard for mDNS, though. If someone brings a laptop or tablet or something it won't just work on your network.

Why do you even care what your local domain is? Set your search path correctly and you never even type it. .home.arpa is arguably a better choice anyway.



This uses wg-easy, which isn't open source.


This wg-easy?

Definitely not an OSI approved license, but does look like they made an attempt in the spirit of GPL, no?

https://github.com/wg-easy/wg-easy/blob/master/LICENSE.md

> You may:

> - Use this software for yourself;

> - Use this software for a company;

> - Modify this software, as long as you:

> * Publish the changes on GitHub as an open-source & linked fork;

> * Don't remove any links to the original project or donation pages;

> You may not:

> - Use this software in a commercial product without a license from the original author;



"Spirit of the GPL" not really, and the terms you quoted already make it incompatible with the GPL itself. Pretty draconian if you ask me (Github???).


Draconian, perhaps. Or just clumsy.

I learned not to attribute to malice what can be attributed to incompetence.



This is accurate. I just recently added the GUI from wg-easy as a revival of the project. If you want a fully open source version you can go back a couple commits before I added the GUI.


Either there's a giant loophole in that license or it prevents you from modifying wg-easy at all. In particular, the prohibition on commercial use is clearly not open source, so the only way you could comply with the requirement to publish your changes in an open-source fork would be for your fork to have a different license. If that is allowed, then the giant loophole is that you could pick MIT, and then the rest of the world could use your fork and ignore the original's license. If that's not allowed, then there's no way for you to comply with that requirement and so you can't modify wg-easy at all.


I think you're misunderstanding how licenses work. Since WireHole is a conglomerate of a multitude of projects, I am required to adopt the most restrictive of their licenses.

I believe you're also thoroughly misunderstanding the license terms that are present. The license says that you can use it in a commercial setting and in a commercial environment; you just cannot resell the product.

This means that an Enterprise can openly use it within their Enterprise they just cannot sell it as a service that they offer.

While this is not the license that I would have chosen for a greenfield project, at the moment I am at the mercy of the licenses in place for the projects that I am using. Once I replace the UI with a proprietary one, everything will be fully open source the way it's intended



Your license does not seem quite the same as wg-easy's. Wg-easy's states that the allowed uses are "for yourself" or "for a company". Yours states "for personal purposes" or "for a company".

As an academic/non-profit researcher who frequently works through my personal devices, I presumably can't use wg-easy in any workplace setting, but presumably can't legally use your software at all.



Sorry, everywhere I said "this" there I meant wg-easy, not WireHole. I just fixed it to clarify that.

> Once I replace the UI with a proprietary one everything will be fully open source the way it's intended

Huh? Proprietary is basically the opposite of open source.



I'm guessing they meant "in-house".


Apologies for the semantics. By proprietary I mean that I will develop a new UI, have full and whole rights to do with the project what I choose, and that would be to fully open source it


I would suggest replacing "proprietary" with "in-house" then.


Suggest as you wish. It's purely semantic and I've since clarified :)


oof, I've been using wg-easy and didn't realize the weird license situation. I like it but the image doesn't get updated as often as I'd like. I've been meaning to either build out an alternative or at least rebuild wg-easy with the latest packages


My plan is to replace the UI with a fully open-source version. This is part of the early revival.


Awesome, let me know if/how I can help!


Thanks!


Huh? Yes it is.


I believe OP is referring to OSI licenses as being open source. Wg-easy uses a simple but proprietary license.


>* Publish the changes on GitHub as an open-source & linked fork;

Great, an open-source license that mandates the use of a proprietary Microsoft product.



That's what stops it being an open source license


Doesn’t seem exclusive, and could be posted elsewhere in addition.

It might not be ideal or my choice but the alternative of no choice at all would probably be more concerning.



This is true and only true while the project uses wg-easy. Once the new UI is done it will no longer be required.


Oh that’s a great clarification, thanks!


Does everything really need to be Docker these days? Especially "network stuff". I mean, it really makes me want to go and grow potatoes instead of doing any "IT"


It makes life so much easier. Time is non renewable, and if you want to pull a project apart for whatever reason, you still can.

“docker pull”, deploy, and one can move on to the next whatever. You can deploy this to a Synology NAS, a Raspberry Pi, or Heroku with a few clicks (or even an appropriately configured router that supports containers if you’re not running something providing this functionality natively).

(DevOps/infra monkey before moving to infosec, embrace the container concept)



Let's not overstate things here. It may well look like "docker pull", deploy, nothing, ok, how do I configure this thing, oh goodie here's the uncommented yaml, deploy again, strange error, headscratch, oh it's dependent on using the .68.x network which I've already used elsewhere, let's rename those docker networks, deploy again, what?, oh it must have initialized a temporary password to the database when it didn't come up, let's wipe it all clean and pull again because I have no idea what kind of state is in those persistent volumes, deploy, rats! forgot the network renumbering, wipe clean, configure again, deploy again, yay!

Provided you already turned off everything that can interfere with this stuff, including IPv6, any security like SELinux, grsecurity and friends, and you let it administer your netfilter firewall for you. Don't forget to check if you accidentally exposed some redis instance to the public Internet.

(And yes, I have embraced the concept and work daily with similar things, albeit in a larger scale. Let's just not kid ourselves it's easier than it is though. Just because an out of the box deploy goes sideways doesn't mean you are dumb.)



Almost none of what you just mentioned has anything to do with Docker, and you can easily have that much trouble just running a binary. (In fact, I've found that many projects have better documentation for their Docker image than for running it natively.) Yes, there are some Docker-specific things you sometimes have to debug (especially with networking), but I've had far more trouble getting software running natively on my machine due to mismatches in local configuration, installed library versions, directory conventions, etc vs what's expected. It's also far easier to blow away all the containers and volumes and start over with Docker; no need to hunt down that config file in an obscure place that's still messing with the deployment.


This is a strange argument to me. It’s essentially that the additional complexity of docker compose is acceptable because other things are unnecessarily complex. The problem is complexity. There are many great projects that are just “build the binary, edit config file, and run it,” and why should things be more complex than that? It’s wild to me what people will put up with.


> It’s essentially that the additional complexity of docker compose is acceptable because other things are unnecessarily complex.

Not quite. My point was that the complexity of Docker is, in many cases, worth it because it hides a lot of the complexity of running software. Yes, you trade one problem for another, but the nice thing about Docker is, if you really go all in on it, the knowledge of how to use it transfers to pretty much any software you want to run.

For example, I wanted to run a JVM-based service the other day. Having never done this before, spinning it up with Docker took two minutes—I didn't have to figure out JDK vs runtime, which version I needed to install, etc. And yet, if I want to configure it past the defaults in the future, the image exposes several environment variables that make it easy.
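Purely as illustration of that pattern, a sketch where the image name and environment variable are hypothetical stand-ins, not the actual service the parent ran:

    # image name and env var are hypothetical; many JVM images expose similar knobs
    docker run -d -p 8080:8080 -e JAVA_OPTS="-Xmx512m" example/jvm-service:latest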



To be fair none of those operations require a re-pull; not a single one.


That's the spirit!


Not sure of the intention, but I still don't see how debugging config in docker is inherently different than native.


I upgraded my PiHole running on an Allwinner H3 SBC last year. It wouldn't start, turned out some indirect dependency wasn't compiled for the ARMv7 platform.

No worries, just specify the previous version in my launch script, literally changing a couple of digits, and I'm back up and running in seconds.

I'm sure I could get it done using apt, but it was literally changing some numbers in a script and rerunning it.

As someone who just wants things to work, Docker has made things significantly better.
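A sketch of what that version pin can look like in a launch script; the tag, ports, and volume path below are illustrative, not the exact values from this comment:

    # pin an explicit image tag instead of :latest; roll back by editing the tag
    docker run -d --name pihole \
      -p 53:53/tcp -p 53:53/udp -p 80:80/tcp \
      -v "$(pwd)/etc-pihole:/etc/pihole" \
      pihole/pihole:2023.05.2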



To add to this, for me it's not specifically about the ease of setup, which isn't that much easier (although it's nice that it's standardized). It's more about the teardown if it's not something for you. Services can leave a lot of residuals in the system: files in different places, unwanted dependencies, changes in system configuration. Removing a docker container is very clean, with the remaining stuff easily identifiable.

Makes trying new stuff way less troublesome.



Can't deploy to a BSD server :(

Give me raw source code or binaries and a configuration file in /etc or $HOME any day of the week.



> It makes life so much easier.

If running an OS that supports docker...



If you're running an OS that doesn't support docker you have a very esoteric use case.


It seems the canned deployment is the entire value-add here. It’s existing components that you can already deploy yourself if you prefer.

I much prefer this over the old method of canned deployment where you ran a script and prayed it didn’t hose the host too badly.



You have absolutely hit the nail on the head.

My view is this:

There is a myriad of amazing tooling out there that the everyday person could greatly benefit from in their day-to-day life. A lot of it has a very high barrier to entry in terms of technical knowledge. By simplifying this setup down to a simple Docker compose file, I believe that I have allowed the lay person to play and experiment in the freedom of their own home with technology they may have otherwise been eyeing.



I completely agree and want to add that the readme file does a good job of letting me know what this thing is and why I should use it. I really appreciate when developers take the time to be inclusive by writing for a less technical audience. I will at least try it out and see what it is all about. I have been looking to add more services to my pihole.


Let me know if you need help. My Twitter is on the repo.


no, not everything has to be docker. for example, none of wireguard, pihole, or unbound have to be docker. you are welcome to install all those things yourself.

but the whole project here is to wrap up a bunch of other projects in a way that makes them easy to install and configure with minimal fuss. docker is perfect for that. if you want to be fussy and complain about the tools other people choose, then projects like this probably aren't of much interest to you.



Docker is great, with docker volumes I can move things between different machines with ease. Do pretty much everything with docker compose these days. Also it doesn’t clutter up my base install, and it’s a lot lighter weight than a virtual machine.


Can I ask why ease of deployment makes you want to turn from IT? The speed of deployment can't be beat.

Earnestly interested in your take.



Can you easily debug stuff? Can you tail -f /var/fing/log and see why X or Y does not work (without introducing another container/whatever just for this)? I know I am in the minority.. but the whole concept of "this runs X and this runs Y, but storage/data is over there, having nothing to do with either X or Y" is F'd up.

Yeah, you can easily pull and run things, but you have no idea how it works or what it does, and when things break the whole idea is to pull it again and rerun.

I have nothing against containers.. real system ones (LXC for example)



It seems there's a bit of a misunderstanding about how containers work. Firstly, debugging in containers is not inherently more difficult than on a traditional system. You can indeed `tail -f /var/log/...` within a container just as you would on the host system. Tools like Docker provide commands like `docker exec` to run commands within a running container, making debugging straightforward.

The concept of separating runtime (X or Y) from data storage is not unique to containers; it's a best practice in software design called separation of concerns. This separation makes applications more modular, easier to scale, and allows for better resource optimization.

The "pull it again and run" mentality is a simplification. While containers do promote immutability, where if something goes wrong you can restart from a known good state, it's not the only way to troubleshoot issues. The idea is to have a consistent environment, but it doesn't prevent you from debugging or understanding the internals.

Lastly, while there are differences between application containers (like Docker) and system containers (like LXC), they both leverage Linux kernel features to provide isolation. It's more about the use case and preference than one being "real" and the other not.
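Concretely, a few of the usual commands (the container name and log path are placeholders):

    docker logs -f mycontainer                        # follow whatever the app writes to stdout/stderr
    docker exec -it mycontainer /bin/sh               # open a shell inside the running container
    docker exec mycontainer tail -f /var/log/app.log  # tail a file in the container, if it logs to disk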



I'm not the original poster, but with the default config logs are worse with docker. Running `docker exec` to check /var/log in a container is pointless; the application writes to stdout. So you do `docker logs`.

And by default logs are stored in json format in a single file per container, so grepping `docker logs` feels slower than grepping a file. And the option to read logs for the last n hours is incredibly slow -- I think it reads the file from the beginning until it reaches the desired timestamp
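For reference, the flags in question (the container name is a placeholder); as noted above, --since can still be slow on a large log file, so --tail is often the quicker escape hatch:

    docker logs --since 2h mycontainer    # entries from the last two hours
    docker logs --tail 200 mycontainer    # just the last N lines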



you can tail -f the container logs, which are in /var/lib/docker I think

I've recently come across a talk about running OpenStack in Kubernetes, which sounded like a crazy idea; OpenStack needs to do all kinds of things not allowed by default for containers, e.g. create network interfaces and insert kernel modules. But people still did it, for a few reasons -- one of them was that it's easier to find someone with k8s experience than with OpenStack experience. And they liked the self-healing properties of k8s.

I don't know what the bottom line is



docker logs -f containername
docker exec -it containername /bin/sh

I’m by no means a docker evangelist, but it does work and it simplifies deployment and management quite a bit.



My personal biggest peeve is how Docker still doesn't play well with a VPN running on the host. It's incredibly annoying and an issue I frequently run into on my home setup.

It's crazy to me that people push it so much given this issue, aren't VPNs even more common in corporate settings, especially with remote work nowadays?

I find it easier to just spin up a full VM than deal with docker's sensitivities, and it feels a bit ridiculous to run a VM and then setup docker within it instead of just having appropriate VM images.



I think that has more to do with not understanding routing and firewalls. VPNs usually have something called a kill switch that force-tunnels all traffic to avoid leaks.

While I can see it does at times make it more difficult to do certain things, with the proper permissions, know-how and setup there is nothing it cannot do.



So we're back to where we started, just tinker "a little" with the setup to try to make it work, exactly the issue Docker claimed to be aimed at solving.

I tried running a docker based setup for a year on my homeserver, thinking that using it for some time would help me get over my instinctive revulsion towards software that makes Docker the only way to use it, the way that forcing myself to use Python had helped me get over disdain for it back during the early days of the transition from 2 to 3. Didn't help at all, it was still a pita to rely on. Went back to proper installs, couldn't be happier.



How is that any different than any software? Configuration and trial and error is the name of the game no matter your stack...


> The speed of deployment cant be beat.

The sound of someone who hasn't used Nix.



What Nix provides in reproducibility and ease of deployment, it certainly makes up for with poor documentation and opaque error messages. I've been trying to learn it for the past few weeks in my spare time for a personal project, and I still struggle with basic things. I love the idea but they really need to invest in better docs, tutorials, and error messages.


You'd be correct.


If the Linux ecosystem could get its act together, standardize, and consolidate all the totally needless and pointless distribution fragmentation we could challenge this.

Docker took off because there is no Linux. There are 50 different slightly incompatible OSes. So the best way to distribute software is to basically tar up the entire filesystem and distribute that. Dependency management has failed because there’s just too much sprawl.

One illustrative example: OpenSSL has divergent naming and versioning schemes across different versions of distributions that use the same Debian package manager. So you either build your packages at least four or five times, Dockerize, or statically link OpenSSL. That’s just for dpkg based distros too! Then there is RPM, APK, and several others I can’t recall right now.

BTW Windows has a bit of the same disease and being from one company has a lot less of an excuse. OS standardization and dependency standardization is very hard to get right, especially at scale.

Apple macOS is the only OS you can ship software for without statically linking or bundling everything and be reasonably sure it will work… as long as you are not going back more than two or three versions.



There are several issues here which tend to get mixed up a lot.

Yes, a dpkg is built for a distribution, and not only that but a specific version of a distribution. So they tend to get re-built a lot. But this is something buildhosts do. What you upload is the package source.

If you want to distribute a package to work on "Linux" in general, then you can't build it for a specific distribution. Then you bundle all the shared libraries and other dependencies. (Or make a static build, but for various reasons this is less common.) Do not try to rely on the naming scheme of openssl, or anything else really. This is what most games do, and the firefox tarball, and most other commercial software for Linux.

There are of course downsides to this. You have to build a new package if your openssl has a security issue, for example. But that's how most software is distributed on most other operating systems, including Windows. This is also how Docker images are built.

The alternative is to build packages for a specific distribution and release, and as stated above, that takes a bit of integration work.

There are issues with both alternatives, but they should not be confused.
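A hedged sketch of the bundling approach: link with an $ORIGIN rpath so the loader looks next to the binary first, then ship the libraries alongside it (library names are illustrative):

    # build with a relative rpath and distribute the dependencies in ./lib
    gcc -o app app.c -Wl,-rpath,'$ORIGIN/lib'
    # shipped tree:  app  lib/libssl.so.3  lib/libcrypto.so.3  ...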



> Docker took off because there is no Linux. There are 50 different slightly incompatible OSes. So the best way to distribute software is to basically tar up the entire filesystem and distribute that. Dependency management has failed because there’s just too much sprawl.

That's not an accurate description of the main motivation for Docker. It's a nice secondary benefit, sure.



Not to be contradictory, but my understanding was that this absolutely is the main motivation.

It was to solve the age old "it runs on my machine".

Open to being wrong but when docker hit the scene I remember that being touted left and right.



What is it then? It’s not a good security isolation tool. It’s not great at resource use isolation. Containers are bulkier than packages.


To some degree "there can be a ton of different versions of things" only applies to core OS packages. You mention Mac, but what version of python ships with macOS? What if I need a version other than what ships by default?

At a certain point you need to start defining the environment regardless of OS, and docker works as a tool that handles environment definition for literally any program (same thing works for ruby, java, python, etc). It handles more complex environment definition than packages, but is lighter than a VM. It's a middle ground, which is a great compromise for some cases and not for others.

Varying use cases and lots of flexibility is also the reason why linux is never going to just standardize the ecosystem and say "ok, there is only 1 openSSL package now." Some people see the ability to have a version of linux that is completely bonkers in versioning as a strength, akin to how some places have old windows 95 computers they still run because newer versions don't work properly. On linux, you could have old 1995 packages from a specific app, but the rest modern secure packages.



It used to be completely free hosting, that's one thing that was great about it. Same thing made Sourceforge so completely dominant that it took many years for projects to move off it even after more suitable alternatives were made available.

But the main use case was probably convenience. It's a very quick way for Mac and Windows users to get a small Linux VM up and running, and utilize the copious amount of software written for it.

These days it's mostly standard, for better or worse. There are a handful vendor independent ways to distribute software but this works with most cloud vendors. Is it good? Probably not, but few industry standards are.



> If the Linux ecosystem could get its act together, standardize, and consolidate all the totally needless and pointless distribution fragmentation we could challenge this.

Maybe, but that will never happen because the ecosystem got here by being open enough that people could be dissatisfied with existing stuff and make their own thing, and to a remarkable degree things are intercompatible. It's always been like this; just because there are 20 people working on distro A and 20 people working on distro B doesn't mean combining them would get 40 people working on distro AB. (In practice, attempting it would probably result in the creation of distros C-F as dissidents forked off.)

> Docker took off because there is no Linux. There are 50 different slightly incompatible OSes. So the best way to distribute software is to basically tar up the entire filesystem and distribute that. Dependency management has failed because there’s just too much sprawl.

I think I agree with you; part of the problem is that people treat "Linux" as an OS, when it's a piece that's used by many OSs that appear similar in some ways.

> Apple macOS is the only OS you can ship software for without statically linking or bundling everything and be reasonably sure it will work… as long as you are not going back more than two or three versions.

...but then by the same exact logic as the previous point, I think this falls apart; macOS isn't the only OS you can target as a stable system. In fact, I would argue that there are a lot of OSs where you can target version N and have your software work on N+1, N+2, and likely even more extreme removes. Last I looked, for example, Google's GCP SDK shipped a .deb that was built against Ubuntu 16.04 specifically because that let them build a single package that worked on everything from that version forward. I have personally transplanted programs from RHEL 5 to (CentOS) 7 and they just worked. Within a single OS, this is perfectly doable.



I have a feeling the whole Docker (or application containers) thing took off when "non Linux people" (read: developers) tried to be sysadmins too and failed.

The best thing since sliced bread is apps/software packed into a single Go binary. Runs everywhere; you only need to rsync/scp it to a million other places and it (usually) acts like a normal Linux program/daemon



That's true, but IMHO that's an indictment of Linux, not them. It's 2023 and there is no reason system administration should be this hard unless you are doing very unusual things.

The Go approach is just static linking. Rust often does the same though it’s not always the default like in Go, and you can do the same with C and C++ for all but libc with a bit of makefile hacking.

Statically linking the world is the alternative approach to containers.
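A hedged example of that "everything but libc" static link with gcc; the library set and extra flags are illustrative and vary by distro and OpenSSL build:

    # statically link the third-party libraries, keep glibc dynamic
    gcc -o app app.c -Wl,-Bstatic -lssl -lcrypto -Wl,-Bdynamic -ldl -lpthread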



One problem with sysadmin stuff is that, like crypto, we keep telling folks it's too hard and to just outsource. While I think "don't roll your own crypto" makes sense, we've done a disservice to the trade by discouraging self-hosting and other ways to practice the craft. Don't run your own infra, use AWS. Don't host your own email, it's too hard, just use a provider. Etc. Then a decade later... hey, how come nobody is good at sysadmin?


Most of the "don't do X it's too hard" is just $corp who wants to sell their preferred solution trying to convince you to buy their SaaS equivalent of a Bash script.


> Navigate to http://{YOUR_SERVER_IP}:51821. Log in using the admin password

Over http? Pretty YOLO...



I think it's mostly for an intranet setup. Most routers still use http for their management UI, as it's complicated to set up a working certificate, especially with only an IP.


You might be right. There's a link for deployment to Oracle cloud, but that seems to use a different way to login.


I should've stipulated more clearly and will do. Thank you.


http for local networks should be fine, right?


It's okay but not ideal.

Otherwise anyone connected to WiFi can snoop on traffic.

Unfortunately my router, switches, AP and NAS don't support HTTPS either :'(



But if you think people are snooping on your network then you’ve got a larger issue.

But of course, good security practice is never bad, and using https whenever you can is always good.



You should always assume someone is snooping on your network.


I should've stipulated more clearly and will do. Thank you.


Notably missing RISC-V support.

No good reason, either. What is needed for support is already in place.



Why don’t you submit a pull request to add it?








