Work around docker hub rate limits #103
Comments
If it's "just for your local", authenticating your local Docker daemon raises Docker Hub's rate limits for authenticated users. Not sure if you're authenticated or not.
Thanks. Yes, I'm authenticated, otherwise the GitHub Actions would not be able to push the images to the registry. Here's how each image is built/pushed: https://github.com/Silex/docker-emacs/blob/master/.github/actions/build/action.yml I just pushed something that sets the max jobs to 1 at a time. Maybe it'll be enough for now... but I doubt it. To stay under the 200 pulls per 6-hour period I'll also need to add some sleep() 😞 But yes, a registry proxy that is updated once in a while would work; not sure how you tell Docker to use that proxy, though.
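For the CI pulls to count against the authenticated quota, the workflow has to log in to Docker Hub before any step pulls an image. A minimal sketch using the official docker/login-action (the secret names here are illustrative, not necessarily the repo's actual ones):

```yaml
# Sketch: authenticate the runner's Docker daemon before building.
# DOCKERHUB_USERNAME / DOCKERHUB_TOKEN are assumed secret names.
steps:
  - name: Log in to Docker Hub
    uses: docker/login-action@v3
    with:
      username: ${{ secrets.DOCKERHUB_USERNAME }}
      password: ${{ secrets.DOCKERHUB_TOKEN }}
```

Note this only raises the limit (authenticated users get a larger pull quota than anonymous ones); it doesn't remove it.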
Ah, just found this: https://engineering.deptagency.com/how-to-speed-up-docker-builds-in-github-actions
Sounds like the way to go, will give it a try.
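The approach in that article boils down to having buildx read and write its layer cache from a registry, so unchanged layers aren't rebuilt or re-pulled from Docker Hub on every run. A rough sketch, with illustrative image names (not the repo's actual tags):

```shell
# Sketch: keep the build cache in a registry so repeat builds
# pull cached layers instead of re-pulling base layers from Docker Hub.
docker buildx build \
  --cache-from type=registry,ref=ghcr.io/silex/emacs-cache \
  --cache-to type=registry,ref=ghcr.io/silex/emacs-cache,mode=max \
  --push -t silex/emacs:master .
```

`mode=max` exports all intermediate layers, not just the final ones, which is what makes subsequent builds cheap.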
Meh.
But the page mentions using a registry cache, and there's the GitHub Container Registry. I guess I could build & cache my images there, and then only push to Docker Hub. That requires some refactoring and more secret tokens though, not something I have time for at the moment. Will look into it at the beginning of August.
Actually this won't fix the problem; I really need a registry proxy cache. Will need to google more.
From the result excerpts of a cursory Google search (without clicking any link), it looks like there are some "dummy" proxies built from standard HTTP services (Squid, nginx, etc.), with no special logic involved, apparently. This might simplify your solution. (Original reply, as intended for a previous comment, before you posted the Alpine update)
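For reference, Docker's own registry image can also act as such a pull-through cache with no custom logic. A sketch (container name and port are illustrative):

```shell
# Sketch: run the Distribution registry as a pull-through cache for Docker Hub.
docker run -d --name dh-mirror -p 5000:5000 \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  registry:2

# Then point the pulling daemon at the mirror via /etc/docker/daemon.json:
# { "registry-mirrors": ["http://localhost:5000"] }
```

The daemon falls back to Docker Hub transparently if the mirror is down, so this is low-risk to try.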
@pataquets: thanks, you can help figure out how I should use renovatebot/renovate#9958, which apparently allows using GitLab's dependency proxy. The goal is not to modify the Dockerfiles, but as a plan B I see we could also do
Hi, @Silex. |
Continuing in #106 |
What I did with ghcr.io is better, but I still hit Docker Hub limits sometimes. Using a proxy cache might help for those... but at this point I'm considering ditching Docker Hub. Or switching just these images to ghcr.io, but that means I'll need to maintain them.
Made most of the images use "FROM ghcr.io"; will see how this affects pull limits. If not sufficient, will also have … It's a shame there's no public mirror of Docker Hub on ghcr.io.
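For the record, the per-image mirroring this implies is just a pull/tag/push cycle, done once per upstream release. A sketch with illustrative names (not the repo's actual images):

```shell
# Sketch: mirror a Docker Hub base image to ghcr.io once,
# then all Dockerfiles can say "FROM ghcr.io/..." instead.
docker pull alpine:3.20
docker tag alpine:3.20 ghcr.io/silex/alpine:3.20
docker push ghcr.io/silex/alpine:3.20
```

After that, only the mirroring job itself pulls from Docker Hub; every image build pulls from ghcr.io.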
Wouldn't it also help to use your DH credentials for an increased pull quota when pulling the steps' images?
@pataquets I already use them. The recent fix seems to be enough, closing for now.
I'm getting hit by https://www.docker.com/increase-rate-limits/ (https://github.com/Silex/docker-emacs/actions/runs/9769943725)
Basically I "pull too much" when building the images.
1st option: move to another registry (https://stackoverflow.com/questions/65806330/toomanyrequests-you-have-reached-your-pull-rate-limit-you-may-increase-the-lim), which I'm not a fan of because, well, official images tend to be on Docker Hub.
2nd option: I wonder whether I could use some caching like https://github.com/marketplace/actions/docker-cache
3rd option: rate limit the build process so the ~200 pulls are spread over 6h... but this sounds silly.
If anyone has insights about how to tackle this, I'm all ears 😉
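A sketch of the 2nd option, using the cache support built into docker/build-push-action rather than a separate cache action (the tag is illustrative; `type=gha` stores layers in the GitHub Actions cache):

```yaml
# Sketch: persist buildx layer cache in the Actions cache so repeat
# builds reuse layers instead of re-pulling them from Docker Hub.
- name: Build and push
  uses: docker/build-push-action@v6
  with:
    push: true
    tags: silex/emacs:master
    cache-from: type=gha
    cache-to: type=gha,mode=max
```

One caveat: the Actions cache has a per-repository size limit, so a matrix with many images may evict entries faster than it reuses them.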