They're OCI images now, and Docker was largely a stolen idea from UNIXes* (unices?), including the term containers. As much as I like that Podman opens it up with Containerfiles and doesn't default to this, it might as well go even farther and tweak the standard a bit - provide something like Dockerfile that's less golang-inspired and more linux-inspired, and improve the image format - so the industry can move on from Docker lock-in.
* https://www.aquasec.com/blog/a-brief-history-of-containers-f...
I guess it's kind of like assembly, where COMMAND foo is like INSTRUCTION foo. Maybe it just clicked for me, and I can't think of a syntax I'd prefer for it. Which now has me wondering about modeling syntaxes for other things on it.
For some reason it reminded me of the WAC model from the WebAssembly component model: https://component-model.bytecodealliance.org/creating-and-co... No particular comparison, but I'd like to understand how constructing a container image might compare to constructing a wasm module from components.
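To illustrate the shape I mean - each Dockerfile line is an instruction followed by its operands, much like a mnemonic and its arguments in an assembly listing (a toy sketch, not any particular real image):

```dockerfile
FROM debian:bookworm     # load an initial "state"
RUN apt-get update       # execute against that state, emitting a new layer
COPY ./app /srv/app      # move data into the image
CMD ["/srv/app/run"]     # set the default command for the final image
```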
Co-opted is what I mean by stole.
The Dockerfile takes from golang IMO; it's intentionally very low on syntax, just like Go's text/template and html/template.
Block the DNS if you don't want dockerhub images. Rewrite it to your artifactory.
This is really not complicated, and you're not entitled to unlimited anonymous usage of any service.
Why? We are running the exact same images that we would be mirroring into and pulling from our private registry if we were doing that, pinned to the sha256sum.
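For reference, pinning looks like this - the digest below is a placeholder, not a real image:

```sh
# Pulling by digest pins the exact image bytes,
# no matter which registry happens to serve them.
docker pull debian@sha256:<digest-of-the-image-we-vetted>
```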
How can you make Docker pull debian:latest from your own registry instead of the official Docker registry, without explicitly specifying the registry in the image name?
It's not significantly different from the current rates (based on https://web.archive.org/web/20201101055027/https://www.docke... and https://web.archive.org/web/20250000000000*/https://www.dock...): six fewer pulls per hour for free, seven more for authenticated, but it's now less forgiving to larger bursts.
Ironically, it's the paid rates that are being reduced more (though they still don't have hourly limits, so more flexibility, but the fair-use clause might come up), as they were previously unlimited; now Pro is 34 pulls/hour (on average, which is less than authenticated), Team is 138 pulls/hour (4 times Pro) and Business 1380 pulls/hour (40 times Pro, 10 times Team). My feeling is this is trying to get more people to create Docker accounts, so the upsell can be more targeted.
Artifactory is a tire fire though.
Someone (maybe the podman folks?) should do what every Linux distribution has done, and set up a network of signed mirrors that can be rsynced.
Is there an easy way of changing the default repository that's pulled from when you issue a `docker pull <image>`?
If the electricity were generated by thousands of volunteers pedalling in their basement, then yes, I would expect the utility company not to be too greedy.
> Can't believe the sense of entitlement in this thread. I guess people think bandwidth grows on trees.
Bandwidth is super cheap if you don't use any fancy public cloud services.
The bigger the service, the more financial incentive they have to be smart and not pay absurd prices for things, since they can give themselves higher profit margins by controlling their costs.
They don't charge you for hosting, uploading or downloading images.
Without dockerhub you would have to host your own repository, which would cost money.
Most of the OSS projects I use seem to either have moved to the GitHub container registry or some other (smaller) equivalent. Some have even set up their own registries behind Cloudflare.
I find it surprising that people notice the part about symptoms[1], and despite this happening repeatedly we do relatively little against the part about spreading.
Part of it is perhaps by definition: "spreading" already assumes success. Still, I'd welcome some regulation, or at least awareness; e.g. a neologism for companies in that stage, growing at cost and only getting ready to develop symptoms.
[1]: The American Dialect Society selected "Enshittification" as its 2023 word of the year; source: https://en.m.wikipedia.org/wiki/Enshittification
The web is absolutely littered with docker tutorials, and a huge proportion of them (not operated or maintained by docker themselves) would no longer be valid, I'm sure.
Manifests are tacked on afterwards, and have a lot of complexity that I'm not sure most folks have actually thought through.
Ex - lots of refs are to "multi-arch" images, except... there's no such thing as a multi-arch image; the entire identifier is just a reference to a manifest that then points to a list of images (or other manifests) by arch, and the actual resolved artifact is a single entry in that list. But it means the manifest needs to be able to reference and resolve other names, and that means including... names.
For a more concrete example, just check https://github.com/moby/moby/issues/44144#issuecomment-12578...
Basically - the digests weren't intended to support image verification across repos, and the tool doesn't treat them that way. The digest was intended to allow tighter specification than a tag (precisely because a publisher might push a different image to the same tag later).
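You can see this concretely with `docker manifest inspect` on a multi-arch tag: it returns the index, not an image. Abbreviated, and with placeholder digests, the output looks something like:

```json
{
  "mediaType": "application/vnd.oci.image.index.v1+json",
  "manifests": [
    { "digest": "sha256:<amd64-digest>",
      "platform": { "architecture": "amd64", "os": "linux" } },
    { "digest": "sha256:<arm64-digest>",
      "platform": { "architecture": "arm64", "os": "linux" } }
  ]
}
```

The artifact you actually run is whichever single entry matches your platform.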
Not sure about the old Docker image format, but most modern tools use the OCI image format, and that doesn't embed the image name in the manifest, just digests, so it's totally portable everywhere.
Huh? If you don't like having back-up registries, just specify one. You can also always use a fully qualified image name if you want to source from a specific registry.
This sucks for individuals and open source. For folks that have a heavy reliance on dockerhub, here are some things that may help (not all are applicable to all use cases):

1. Set up a pull-through mirror. Google Artifact Registry has decent limits and good coverage for public images. This requires just one config change and can be very useful to mitigate rate limits if you're using popular images cached in GAR.[1] (See the sketch after this list.)

2. Set up a private pull-through image registry for private images. This will require renaming all the images in your build and deployment scripts and can get very cumbersome.

3. Get your IPs allowlisted by Docker, especially if you can't have docker auth on the servers. The pricing for this can be very high. Rough numbers: $20,000/year for 5 IPs, usually going upwards of $50k/year.

4. Set up a transparent Docker Hub mirror. This is great because no changes need to be made to pipelines except one minor config change (similar to 1). We wrote a blog about how this can be done using the official docker registry image and AWS.[2] It is very important to NOT use the official docker registry image [3] as that itself can get throttled and lead to hairy issues. Host your own fork of the registry image and use that instead.

We spent a lot of time researching this for certain use cases while building infrastructure for serving Github actions at WarpBuild. Hope this helps.

[1] https://cloud.google.com/artifact-registry/docs/pull-cached-...
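For option 1, the one config change is pointing the daemon at the mirror - a minimal sketch, assuming Google's public Docker Hub cache at mirror.gcr.io works for the images you need (it only serves cached public images; anything uncached falls through to Docker Hub). Add this to /etc/docker/daemon.json and restart the daemon:

```json
{
  "registry-mirrors": ["https://mirror.gcr.io"]
}
```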
Register for free and you get a higher limit: 40 pulls is plenty. What do you imagine running that requires more than 40 dockerhub (not local) pulls on an hourly basis?
If I start an EKS cluster in a NAT environment with 10 nodes and 4 daemon sets, I need 40 pulls by default. Lots of the tutorials out there for doing this will no longer work, either.
These platforms do cache quite a bit. It's just that there is a very high volume of traffic and a lot of it does update pretty frequently (or has to check for updates).
> A change to a lower layer invalidates all layers above it
Does it have to? It seems it should be possible to diff the layers and only invalidate if there are conflicts.
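As it stands, the build cache is positional rather than content-diffed. E.g. in a build like this (hypothetical file names), touching requirements.txt re-runs everything after it, even though the later instructions never changed:

```dockerfile
FROM python:3.12-slim
COPY requirements.txt /tmp/                 # change this file...
RUN pip install -r /tmp/requirements.txt    # ...and this re-runs
COPY . /app                                 # ...and this layer is rebuilt too
```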
Even if I have it, that doesn't matter: Rancher is what we're supposed to use, and not using it means you're out of step with what's supported and such, which I don't find to be a good place to be.
| Do you mean "Rancher" Rancher, or Rancher Desktop? Those are two different things. I have found the latter to be a Just Works™ app that's miles ahead of Podman Desktop. Now, that one is a mess. |
Yes, but in this case it's not the problem. It's more about not accepting `?ns=docker.io` as a query parameter on an endpoint, so a rather small and isolated technical issue.
Did this for years at my previous job to defend against the rate limits and against dependencies being deleted out from under us with no warning. (E.g. left-pad.)
Nexus is very easy to set up.
Caching, vulnerability scanning, supply chain integrity, insurance against upstream removal. All these things are true for other artifact types as well.
Own your dependency chain.
> 10 per IPv4 address or IPv6 /64 subnet
Finally, a use for IPv6! I assume so anyway, as I think ISPs that support IPv6 will give you multiple /64 prefixes if requested.
(I work there) If you have a support contact or AE, they can tell you if you need an official source. Marketing communications should be sent out at some point.
It’s their business choice, but they’re no longer the only option, nor in my opinion the best one.
Vote with your feet and your wallets.
These dates have been delayed. They will not take effect March 1. Pull limit changes are delayed at least a month; storage limit enforcement is delayed until next year.
I am mainly mentioning this with regard to Azure's and other providers' egress prices. And in Europe, on-prem stuff is expensive if you are peering to other countries.
Bandwidth is cheap as hell.
Egress in the cloud is deliberately expensive, as an anti-competitive measure to lock you in and stop you using competitors' services.
This is only for Docker Desktop. The Docker Engine itself is free (AFAIK). If you're on Linux, you probably don't care about Docker Desktop at all.
The university could allocate public IPv4 (or IPv6) addresses to their computers. Most did this in the past, but many stopped once the people who understood IP retired.
Does anyone recommend some pull-through registry to use? Docker Docs has some recommendations [0], but I wonder how feature-complete it is.
I'd like to find something that:
- Can pull and serve private images
- Has a UI to show a list of downloaded images, and some statistics on how much storage and bandwidth they use
- Can run periodic GC to delete unused images
- (maybe) Can be set up to pre-download new tags
IIRC Artifactory has some support for Docker images, but that seems like a big hammer for this problem. [1]
[0] https://docs.docker.com/docker-hub/image-library/mirror/
Depending on your use case, the default registry mirroring you've linked works well.
It... does not have a UI or the GC/pre-download stuff, but it absolutely works for private images (see: https://distribution.github.io/distribution/recipes/mirror/#...). I've been using it as a cache for a while locally and it's a solid choice.
---
I guess an edit - it does also have basic TTL, which might cover your GC case, but it's not very configurable or customizable. It's literally just a TTL flag on the proxied image.
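For anyone wanting to try it, the recipe boils down to a small config file - a sketch with placeholder credentials (they're only needed if you want it to proxy private images):

```yaml
# config.yml for the distribution registry running as a pull-through cache
version: 0.1
storage:
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
proxy:
  remoteurl: https://registry-1.docker.io
  username: <dockerhub-username>   # optional, for private images
  password: <dockerhub-token>      # optional, for private images
```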
Harbor should have enough features and is popular/rising; otherwise Artifactory will do everything you imagine, but is quite heavy both on resources and configuration.
There is; add this to your /etc/docker/daemon.json:
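A minimal example, with a placeholder mirror URL:

```json
{
  "registry-mirrors": ["https://registry-mirror.example.com"]
}
```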
Where the URL points to your pull-through Docker Hub cache.
Could you spin up something like a Steam cache but for Docker? So when someone in your network pulls an image, it gets cached and served to subsequent pullers of the same image.
So sometimes when I do docker run, I see a bunch of progress bars pulling images and layers. Are all of them counted individually, or does it all just count as one?
Not even March yet and I’m running into issues. I’m on a static IP, so no NAT, and it’s still asking for auth on like the second pull.
Will need to find a way to kick docker.io to the curb. Ridiculous.
Why is there no P2P option?
I would be happy to give back to the community by hosting a container P2P host. Would that even be possible out of the box?
If they choose to do this, hopefully they report the reason for the pull failure when you try, rather than a nonsense 404 error.
This will heavily impact businesses relying on Docker for teams, or teams using unauthenticated systems when deploying or testing.
10 pulls per hour per IP will even impact my homelab deployment, if I don't update for a few weeks and then bump the version of every piece of software I run at once.
Sure, they were encouraging usage of the Docker Hub, but it's been at least a couple of years since they started pushing the other way, when they introduced the first rate limits.
If everybody made fair use of the Docker Hub, maybe we wouldn't have the rate limits in the first place. But I think we all learned that won't be happening on the open Internet.
Setting up a pull-through cache is pretty straightforward; you can find the instructions in Docker's documentation: https://docs.docker.com/docker-hub/image-library/mirror/
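As a rough sketch, the cache itself can just be the stock registry image run in proxy mode (REGISTRY_PROXY_REMOTEURL is the environment-variable form of the proxy.remoteurl config key):

```sh
docker run -d --name hub-mirror -p 5000:5000 \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  registry:2

# then point the daemon at it via "registry-mirrors": ["http://localhost:5000"]
```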
See my comment above for the numbers (https://news.ycombinator.com/item?id=43127004), but the free limits haven't changed in magnitude; rather, they've reduced how bursty the requests can be (which is somewhat interesting, in that you'd expect usage at the free levels to be more bursty, and the paid levels to be more consistent, given more workers and more tooling running at all hours).
There should be a way to pirate these.
Really, with all this networking expertise floating around, and Docker artifacts already being content-addressable, there should be a way to torrent them.
I never do by choice; distribution repos are good enough for me.
The bigger problem is when projects only officially ship as docker images for some banal reason.
For residential usage, unless you're in an apartment tower where all your neighbors are software engineers and you're all behind a CGNAT, you can still do a pull here and there for learning and other hobbyist purposes, which for Docker is a marketing expense to encourage uptake in commercial settings.
If you're in an office, you have an employer, and you're using the registry for commercial purposes, you should be paying to help keep your dependencies running. If you don't expect your power plant to give you electricity for free, why would you expect a commercial company to give you containers for free?