Distinguish between different VDDK validation errors #969
base: main
Conversation
thanks @jonner for taking that issue and posting this fix!
I just realized that this is probably not a full fix. Example scenario:
There are multiple cases that can lead to a "VDDK Init image is invalid" error message for a migration plan. They are currently handled with a single VDDKInvalid condition. This patch adds a new error condition VDDKImageNotFound (and a new advisory condition VDDKImageNotReady) to help diagnose an issue of a missing image or an incorrect image URL.

When the initContainer cannot pull the VDDK init image, the vddk-validator-* pod has something like the following status:

```yaml
initContainerStatuses:
  - name: vddk-side-car
    state:
      waiting:
        reason: ErrImagePull
        message: 'reading manifest 8.0.3.14 in default-route-openshift-image-registry.apps-crc.testing/openshift/vddk: manifest unknown'
    lastState: {}
    ready: false
    restartCount: 0
    image: 'default-route-openshift-image-registry.apps-crc.testing/openshift/vddk:8.0.3.14'
    imageID: ''
    started: false
```

So we use the existence of the 'waiting' state to indicate that the image cannot be pulled.

Resolves: https://issues.redhat.com/browse/MTV-1150
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
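The detection described in the patch, keying off a non-nil `waiting` state on the init container, can be sketched roughly as follows. This is a minimal illustration, not the PR's actual code: the types below are local stand-ins for the corresponding `corev1` status structs, and the function name is an assumption.

```go
package main

import "fmt"

// Minimal stand-ins for the relevant corev1 container status types
// (sketch only; the real controller uses k8s.io/api/core/v1).
type ContainerStateWaiting struct {
	Reason  string
	Message string
}

type ContainerState struct {
	Waiting *ContainerStateWaiting
}

type ContainerStatus struct {
	Name  string
	State ContainerState
}

// vddkImageNotReady reports whether the vddk-side-car init container is
// stuck in a Waiting state (e.g. ErrImagePull), which the patch treats
// as the signal that the VDDK init image cannot be pulled.
func vddkImageNotReady(statuses []ContainerStatus) (bool, string) {
	for _, s := range statuses {
		if s.Name != "vddk-side-car" {
			continue
		}
		if w := s.State.Waiting; w != nil {
			return true, fmt.Sprintf("%s: %s", w.Reason, w.Message)
		}
	}
	return false, ""
}

func main() {
	statuses := []ContainerStatus{{
		Name: "vddk-side-car",
		State: ContainerState{Waiting: &ContainerStateWaiting{
			Reason:  "ErrImagePull",
			Message: "manifest unknown",
		}},
	}}
	notReady, msg := vddkImageNotReady(statuses)
	fmt.Println(notReady, msg)
}
```

A controller would then map a `true` result to the advisory VDDKImageNotReady condition (and, after a timeout, to the VDDKImageNotFound error condition).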
The new patch I just pushed does not yet address the issue I discussed in this comment: #969 (comment)
@jonner yeah, and there's an even more basic scenario that we don't handle so well -
honestly, we were not really sure whether it made sense to add this job back then, because the only case we'd heard of regarding the VDDK image was when someone created it on a filesystem that doesn't support symbolic links. But apparently this job actually detects issues in the field, and in that case it makes less sense to expect users to clone the plan, or archive and recreate the plan, in order to initiate another job. Specifically about what you wrote above,
```go
pods := &core.PodList{}
if err = ctx.Destination.Client.List(context.TODO(), pods, &client.ListOptions{
	Namespace:     plan.Spec.TargetNamespace,
	LabelSelector: k8slabels.SelectorFromSet(map[string]string{"job-name": job.Name}),
```
cool, now this call would work in all scenarios, but I think it should be placed elsewhere, since in many reconcile cycles we'll find the job in JobComplete and won't need the information from the pod
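The reviewer's suggestion above, skipping the pod List on reconcile cycles where the job is already complete, might look roughly like this. A minimal sketch with a local stand-in for `batchv1.JobCondition`; the condition strings mirror the Kubernetes Job API, but the helper name is invented for illustration:

```go
package main

import "fmt"

// Minimal stand-in for batchv1.JobCondition (sketch only).
type JobCondition struct {
	Type   string // e.g. "Complete" or "Failed"
	Status string // "True" or "False"
}

// needsPodLookup returns false when the job already carries a Complete
// condition, so the reconcile loop can skip listing the job's pods and
// only pay for the List call when it needs to diagnose a failure.
func needsPodLookup(conditions []JobCondition) bool {
	for _, c := range conditions {
		if c.Type == "Complete" && c.Status == "True" {
			return false
		}
	}
	return true
}

func main() {
	completed := []JobCondition{{Type: "Complete", Status: "True"}}
	failed := []JobCondition{{Type: "Failed", Status: "True"}}
	fmt.Println(needsPodLookup(completed), needsPodLookup(failed))
}
```

With a guard like this in place, the pod status is only fetched on the failure path, which addresses the concern about doing the lookup on every reconcile.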
Codecov Report. Attention: Patch coverage is

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main     #969      +/-   ##
==========================================
- Coverage   16.31%   16.27%   -0.04%
==========================================
  Files         103      103
  Lines       19386    19423      +37
==========================================
  Hits         3162     3162
- Misses      15943    15980      +37
  Partials      281      281
```
It's true that the more basic scenario has those issues, but at least it was consistent before: it failed with
Well, you know the code better than I do, but in my testing, the pod related to the job doesn't seem to be retained after the job completes. Specifically, after the job completes, the call to
yeah, I agree
unless I'm missing something, I read the changes in this PR as introducing logic that says: "once the job completes, we check the status of the pod that was triggered for the job to figure out the reason for a failure", so it assumes the pod also exists when the job completes with a failure, right?
Not quite. What happens is that while the job is running,

At some point, the image will either be pulled (and the pod will start running), or the pod will time out waiting for the image. In the first scenario, the job should run to completion and either succeed or fail based on the validation result. In the second scenario, the job will fail. Immediately after the job finishes (either successfully or due to timing out), the next call to

But the above description does not apply to a cached job. In my experience, there are no pods returned for a cached job, so there's no way to determine the failure reason in this case. But maybe I am simply doing something wrong and there is a way to get pods for a completed job that I don't know about?
@jonner I've been investigating the issue with pod deletion, and it turns out that it happens only when the pod is not fully created. If the VDDK image is invalid (the pod is unable to pull the image) and the job reaches the

After discussing this with @ahadas, we believe the best approach is to remove the

What do you think of this solution?
Interesting idea. There are some details of your proposed solution that aren't yet 100% clear in my mind, but I will investigate. It sounds promising.
A validation job is labeled with both the UID of the plan as well as the md5sum of the vddk init image URL. So with the above approach the following statement is not actually true:
When the vddk url is updated for the source provider, the next reconcile will look up a Job associated with the given plan and the new vddk URL, and will find none. So the old job will continue running (trying to pull the old init image), but we will ignore it. Instead, we'll create a new Job for the new plan/URL combination and run that. So I think it'll require some slightly more complicated job management to handle this situation.
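The labeling scheme described above (plan UID plus md5sum of the VDDK init image URL) can be sketched like this. The label keys and the registry URLs below are illustrative assumptions, not necessarily what the controller actually uses:

```go
package main

import (
	"crypto/md5"
	"fmt"
)

// jobLabels keys a validation job by the plan UID and the md5sum of the
// VDDK init image URL, so a changed URL maps to a different label set.
// Label keys ("plan", "vddkInitImage") are assumed for illustration.
func jobLabels(planUID, vddkURL string) map[string]string {
	return map[string]string{
		"plan":          planUID,
		"vddkInitImage": fmt.Sprintf("%x", md5.Sum([]byte(vddkURL))),
	}
}

func main() {
	oldLabels := jobLabels("abc-123", "registry.example/vddk:8.0.2")
	newLabels := jobLabels("abc-123", "registry.example/vddk:8.0.3")
	// Different URLs hash to different label values, so a lookup keyed
	// on the new URL will not find the job created for the old URL.
	// That stale job keeps running unless it is cleaned up explicitly.
	fmt.Println(oldLabels["vddkInitImage"] != newLabels["vddkInitImage"])
}
```

This is exactly why the "find the old job and inspect it" assumption breaks after a URL change: the old job is invisible to a lookup that uses the new label set, so handling this case needs explicit cleanup of jobs whose URL hash no longer matches.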