Retagged images no longer replicating after v2.11.1 upgrade #20897
It is speculation, but we wondered if https://github.com/goharbor/harbor/pull/20838/files might be the culprit.
Hm. I don't have a bead on it yet, but there may be another complexity to this issue. I am now finding evidence that
Our best speculation right now is that perhaps the first jump (e.g.
Thanks @cayla for your report. I will try to reproduce it on my end and get back to you with any findings.
I'm experiencing the same issue since the upgrade from 2.11.0 to 2.11.1. It seems @cayla is right about it only happening with images that were retagged. Would it be safe to downgrade to 2.11.0 for the time being? The migrator image did not report any changes, but I believe that only applies to the configuration, not the database. I'm running into this issue a lot, so a temporary downgrade would make life a lot easier :-)
We are also experiencing the same issue. We use the following two tools/variations for tagging the same digest multiple times, and neither triggers the event_based replication. Happy to provide more details as required; however, I think this is a pretty clear regression. CI Build Step:
CI Release Step:
Previously three separate executions were triggered for a CI release pipeline; now there's only a single one and the two "retagged" ones are missing.
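The exact pipeline commands are not reproduced above; as a rough illustration (registry host and image names below are hypothetical), retagging an existing digest without a rebuild usually looks something like this:

```shell
# Hypothetical sketch: both commands add a new tag to an existing manifest
# without pushing new layers -- the push pattern that stopped firing
# event_based replication after the 2.11.1 upgrade.

# Variant 1: docker buildx imagetools (also used later in this issue)
docker buildx imagetools create \
  -t harbor.example.com/foo/app:release \
  harbor.example.com/foo/app:build-1234

# Variant 2: crane (from go-containerregistry)
crane tag harbor.example.com/foo/app:build-1234 release
```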
@wy65701436 could you by any chance tell if a downgrade to 2.11.0 is possible? I'm using a Docker-based installation, so it should be fairly straightforward, as config hasn't changed between 2.11.0 and 2.11.1, but I can't tell whether any database migrations have been performed in that update.
We have the same issue after upgrading to Harbor 2.11. To mitigate it we replaced the retagging in the pipelines with a Harbor API call that adds the new tag. This triggers the event-based replication just fine. Another alternative is to trigger the replication itself with a Harbor API call, as there is no button in the UI to manually start event-based replications.
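For reference, a rough sketch of the two API-based workarounds mentioned above (the host, project/repository names, digest, and policy id are placeholders; check the Harbor API reference for your version):

```shell
# Workaround 1: add the new tag through the Harbor API instead of pushing a retag
# (reported above to still fire the event-based replication).
curl -u "$HARBOR_USER:$HARBOR_PASS" -X POST \
  -H "Content-Type: application/json" \
  -d '{"name": "v1.2.3"}' \
  "https://harbor.example.com/api/v2.0/projects/foo/repositories/app/artifacts/sha256:<digest>/tags"

# Workaround 2: trigger the replication policy manually (requires admin privileges).
curl -u "$HARBOR_ADMIN:$HARBOR_PASS" -X POST \
  -H "Content-Type: application/json" \
  -d '{"policy_id": 42}' \
  "https://harbor.example.com/api/v2.0/replication/executions"
```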
I had considered the API route as well, but triggering replications requires an account with administrative privileges. Since the API only supports basic authorization, that's a no-go, as I'm not storing the password of such an account anywhere. We've got dozens and dozens of separate replications set up in our Harbor instance. I've been converting them from event-based to scheduled or triggered on demand (as in: a developer reports a missing image on the upstream repository). I've got loads of replications set to 2-minute schedules, which in turn causes the job service logs to rapidly fill up the server's disk space. If only someone could confirm that downgrading to 2.11.0 is an option...
@LinuxDoku wow... can't believe I've never looked at that. I thought robot accounts were project-only, useful only for pulling and pushing images. Thanks for pointing it out!
@wy65701436 any update?
I have the same problem on versions 2.11.1 and 2.12.0.
I have the same issue - no events are triggered when adding a tag to an existing image in a project.
Faced the same issue today.
I don't suggest any downgrade. I am trying to fix this problem and will provide patches for the community.
Could it be that it also no longer replicates Helm charts? I have one "event_based" replication run from 3 days ago. It claims "success", but the log just says
(If I observe correctly, for container images the event is not even triggered.)
The commit that addressed the issue described in #20828 introduced this regression. I propose reverting that change, which means #20828 will be reopened. The revert will primarily affect parallel artifact pulls from proxy-cache projects, resulting in duplicate audit logs. I plan to resolve #20828 properly in v2.13, and will revert the relevant commits in both v2.12 and v2.11. Please wait for the upcoming releases: v2.11.3 and v2.12.1.
fixes goharbor#20897 Signed-off-by: wang yan <wangyan@vmware.com>
fixes #20897 Signed-off-by: wang yan <wangyan@vmware.com>
Reverting from 2.11.1 to 2.11.0 fixed the issue (fortunately 😃)
@wy65701436 Let's not forget to open a new issue to track the work in v2.13.0.
Reopened #20828 and marked it as target/2.13.0.
Expected behavior and actual behavior:
Expected behavior: when we push an existing image with a new tag, replication should trigger.
Actual behavior: event-based replication never occurs for re-tagged images, but manual replication still works.
We have a workflow where we add additional tags to existing images to acknowledge CI completion and our release images.
The former is that we tag images as `[normalized branch name]-[git commit hash of HEAD of that branch]-[timestamp of same HEAD commit]-dirty` on the initial build, e.g. `foo/app:main-d1028194b3-1725449681-dirty`. After CI passes, we drop the `-dirty` suffix, like `foo/app:main-d1028194b3-1725449681` (generally with something like `docker buildx imagetools create -t ${image}:${TAG} ${image}:${DIRTY_TAG}` in the CI scripts). We do something similar with our release images: we take a `foo/app:main-d1028194b3-1725449681` and retag it `v1.2.3`.
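Spelled out with the tags above (the registry host is a placeholder), the two retag steps that stopped triggering replication are roughly:

```shell
# Drop the -dirty suffix after CI passes (same digest, additional tag).
docker buildx imagetools create \
  -t harbor.example.com/foo/app:main-d1028194b3-1725449681 \
  harbor.example.com/foo/app:main-d1028194b3-1725449681-dirty

# Promote the CI tag to a release tag (same digest, additional tag).
docker buildx imagetools create \
  -t harbor.example.com/foo/app:v1.2.3 \
  harbor.example.com/foo/app:main-d1028194b3-1725449681
```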
This is our replication configuration:
Since upgrading to 2.11.1 (via your Helm chart release https://github.com/goharbor/harbor-helm/releases/tag/v1.15.1), we have noticed that no image that is retagged ever triggers replication.
For example, here are this morning's tasks:
We retagged like so:
As you can see, there was no corresponding event_based trigger from this push. (We manually triggered replication at 10:03 and this succeeded. And we have waited much longer than 3 minutes in the past -- 30+ minutes. It consistently never fires on its own.) Here is the replication log from that manual trigger:
log.txt
Steps to reproduce the problem:
Versions:
Please specify the versions of the following systems.
Additional context: