Auth caching issue #5883
Comments
This is somewhat expected in multi-user use cases as buildkit daemon does not provide isolation between different users. The content from the first build is matched against the second build, but in order to push it with a cross-repo mount the second build needs to prove that it had access to the original source. There might be some hacky fix for this if we can detect the conditions for cross-repo mount not working and fall back to an inefficient re-upload of layer bytes.
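For readers following along, here is a minimal sketch of what that cross-repo mount and the suggested fallback look like at the registry API level. This is not buildkit's actual code; the registry host, repository names, and digest are placeholders, and the flow simply follows the OCI distribution spec's blob-mount endpoint.

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

// pushBlob tries a cross-repo mount first and falls back to a full upload.
func pushBlob(registry, targetRepo, sourceRepo, digest string, client *http.Client) error {
	// Cross-repo mount per the OCI distribution spec: the registry copies the
	// blob server-side, but only if the pushing credentials can also pull sourceRepo.
	mountURL := fmt.Sprintf("https://%s/v2/%s/blobs/uploads/?mount=%s&from=%s",
		registry, targetRepo, url.QueryEscape(digest), url.QueryEscape(sourceRepo))

	resp, err := client.Post(mountURL, "application/octet-stream", nil)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	switch resp.StatusCode {
	case http.StatusCreated:
		// 201: blob mounted, no layer bytes were transferred.
		return nil
	case http.StatusAccepted:
		// 202: mount was not performed; the registry opened a regular upload
		// session instead, so the client must upload the layer bytes itself.
		fmt.Println("falling back to full layer upload at", resp.Header.Get("Location"))
		return nil
	default:
		// e.g. 401/403 with "insufficient_scope", as seen in this issue.
		return fmt.Errorf("cross-repo mount failed: %s", resp.Status)
	}
}

func main() {
	err := pushBlob("registry.example.com", "groupB/repoB", "groupA/repoA",
		"sha256:0000000000000000000000000000000000000000000000000000000000000000",
		http.DefaultClient)
	fmt.Println("result:", err)
}
```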
I wonder - how do people even use the buildkit daemon with GitLab? Are they lucky enough to have the same access rights for all users? It seems strange to me why one job would need access to a completely different repo. In my opinion, the daemon is instructed to build an image - we provide it with a Dockerfile and context, we provide it with credentials which are enough to build the image, and we tell it where to get caches and where to put results. If the daemon sees "similar" layers from previous builds - let it just pick them from the local cache and reuse them. If it can't, and has to pull them from some other location not mentioned in the current build job - just rebuild without the cache. At first glance I thought the daemon had cached credentials from the first build job ("to access registry.mydomain.com use this, always") and then failed to push a different image to the same registry. If so - maybe just add a switch to turn off auth caching?
You are pushing to …
Exactly! It is a different project with a different team and different access rights. The only thing they have in common is that they are both written in Python and use very similar-looking Dockerfiles based on …
My team recently ran into the same issue. We're currently exploring our options.
I am not sure if this should be considered a feature request or a bug. First, it's important to know that each project within GitLab has its own instance of a Container Registry, see GitLab Container Registry.

@baznikin My assumption is that your pipeline is authenticating with the … This means that when a build is completed and it is using layers from the local cache that have already been pushed to projects other than the current project, we get this error. This is because the …

It seems buildkit's local cache's existing behavior operates under the presumption that all builds will be pushed to the same image registry - this does not align with how GitLab works. To point out the obvious (maybe not so obvious): by isolating all of the registries, GitLab ends up with lots of layer duplication across projects in their respective caches... which is not great, but is by design with the isolated registries.

What I see as potential solutions for this are the following: …
If you have made it this far, thanks for listening! My team has run into the same issue this week when attempting to switch build systems.
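To make the scope mismatch described above concrete, here is a rough illustration of the token request a cross-repo mount would need on a typical self-managed GitLab instance. The `/jwt/auth` realm, `container_registry` service name, and `gitlab-ci-token` basic-auth convention are assumptions about a common GitLab setup, not details taken from this report.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	// Scopes a cross-repo mount would need in a single token: push to the
	// current project's registry AND pull from the project the cached layer
	// was originally pushed to. A per-project job token is only authorized
	// for the first one, so the second scope is not granted and the mount
	// fails with "insufficient_scope".
	q := url.Values{}
	q.Set("service", "container_registry")
	q.Add("scope", "repository:groupB/repoB:pull,push") // current project: granted
	q.Add("scope", "repository:groupA/repoA:pull")      // cross-repo mount source: denied

	req, err := http.NewRequest("GET", "https://gitlab.example.com/jwt/auth?"+q.Encode(), nil)
	if err != nil {
		panic(err)
	}
	req.SetBasicAuth("gitlab-ci-token", "REDACTED_CI_JOB_TOKEN")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// The returned JWT's "access" claim lists only the scopes this job is
	// actually allowed; the groupA/repoA entry would be missing.
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}
```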
Exactly!
Yeah, it basically looks like buildkit "claims" a layer for a specific repository when it is first pushed there. I do not see why this is required... I suppose it should build the image using its caches and the provided context, and then just push to the specified destination using the specified credentials. Don't "bind" layers to a specific repository, or do not cache auth credentials. Maybe I am wrong about the implementation details, I didn't read the sources.
or …
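A conceptual sketch of why that "binding" exists at all, and of the fallback suggested earlier in the thread. These are hypothetical types for illustration, not buildkit's internals: the registry's mount API requires a `from=<repo>` value, so some association between a cached layer and a source repository has to be kept somewhere.

```go
package main

import "fmt"

// cachedBlob is a conceptual record: a content-addressed layer plus the
// repositories it is already known to live in.
type cachedBlob struct {
	digest      string
	sourceRepos []string
}

// mountSource picks a source repository the current credentials can read.
// canPull stands in for whatever access check the client is able to make.
func (b cachedBlob) mountSource(canPull func(repo string) bool) (string, bool) {
	for _, r := range b.sourceRepos {
		if canPull(r) {
			return r, true
		}
	}
	return "", false // no accessible source: fall back to uploading the bytes
}

func main() {
	blob := cachedBlob{
		digest:      "sha256:…",
		sourceRepos: []string{"groupA/repoA"},
	}
	// A job authenticated only for groupB/repoB cannot pull groupA/repoA,
	// so the safe behavior is to re-upload the layer instead of failing.
	if src, ok := blob.mountSource(func(repo string) bool { return repo == "groupB/repoB" }); ok {
		fmt.Println("cross-repo mount from", src)
	} else {
		fmt.Println("no accessible source repo; re-upload layer bytes for", blob.digest)
	}
}
```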
Bug description
We run buildkit-daemon to build Docker images with GitLab. We face issues which, I believe, are caused by some sort of auth caching.
1. Build in `groupA/repoA`. Build is successful.
2. Build in `groupB/repoB`. Build failed with error `server message: insufficient_scope: authorization failed`. `groupA/repoA` is mentioned in the build logs:

If we restart the buildkit daemon and run the jobs in a different order, the situation changes.
We first encountered this issue 4 days after we deployed buildkit-daemon. After some restarts and after aligning the versions of the cli (it was `rootless-master`, became `0.20.1`) and the daemon (it was `0.18`, became `0.20.1`), the issue vanished - and returned today, when I tried to build an image for a completely isolated project and user who has no access to other repositories. Upon studying the logs I noticed a mention of a completely different project, to which this limited user has no access and which is not mentioned in the pipeline either - obviously it came from buildkit itself!

Reproduction
I have no particular reproduction scenario; logs follow. The logs were redacted by substituting sensitive information like project names, group names and domains with placeholders.
CLI logs, as I see them in GitLab:
buildkit daemon log and trace for this build - https://gist.github.com/baznikin/9bd860a22a96b0bbbf5cef9601e76b44
Version information
daemon: `moby/buildkit:v0.20.1-rootless`
cli: `moby/buildkit:v0.20.1-rootless`
The daemon is installed with a Helm chart using Terraform; resource declaration:
The daemon is called from a GitLab pipeline template this way: