Distinguish between different VDDK validation errors #969

Open
wants to merge 1 commit into main
Conversation

@jonner (Contributor) commented on Jul 23, 2024

There are multiple cases that can lead to a "VDDK Init image is invalid" error message for a migration plan. They are currently handled with a single VDDKInvalid condition.

This patch adds a new error condition, VDDKImageNotFound (and a new advisory condition, VDDKImageNotReady), to help diagnose a missing image or an incorrect image URL.

Resolves: https://issues.redhat.com/browse/MTV-1150
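For reference, the condition names involved read roughly as follows; this is only an illustrative sketch of Go constants summarizing the description above, not the actual definitions from the patch:

```go
// Illustrative only; the real definitions live in the plan controller.
const (
	VDDKInvalid       = "VDDKInvalid"       // existing condition: validation job failed
	VDDKImageNotFound = "VDDKImageNotFound" // new error condition: image missing or URL wrong
	VDDKImageNotReady = "VDDKImageNotReady" // new advisory condition: init image not pulled yet
)
```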

@ahadas (Member) left a comment

thanks @jonner for taking that issue and posting this fix!

Review threads on pkg/controller/plan/validation.go (resolved).
@jonner (Contributor, Author) commented on Jul 24, 2024

I just realized that this is probably not a full fix.

Example scenario:

  1. I enter URL A for the VDDK image and create a migration plan with this provider.
    • A validation job is started for this VDDK init image.
    • The image cannot be pulled (for whatever reason).
    • With the patch above, the validation job will indicate a VDDKImageNotFound error.
  2. I change the VDDK image in the provider to a different URL B.
    • Due to the VDDK URL change, we kick off another validation job.
    • The results are the same as above.
  3. Now I change the URL back to A.
    • Due to the VDDK URL change, we try to validate the VDDK init image again, so we try to ensure a validation job exists.
    • We find the cached job from step 1, which has the same labels, so we don't create a new one; no validator pod is started and we never try to re-pull the init image.
    • Since no new job is started, there are no pods to query for InitContainerStatuses, and the only thing we know is that the job failed.
    • We now report VDDKInvalid instead of VDDKImageNotFound.

Commit message:

There are multiple cases that can lead to a "VDDK Init image is invalid" error message for a migration plan. They are currently handled with a single VDDKInvalid condition.

This patch adds a new error condition, VDDKImageNotFound (and a new advisory condition, VDDKImageNotReady), to help diagnose a missing image or an incorrect image URL.

When the initContainer cannot pull the VDDK init image, the
vddk-validator-* pod has something like the following status:
  initContainerStatuses:
    - name: vddk-side-car
      state:
        waiting:
          reason: ErrImagePull
          message: 'reading manifest 8.0.3.14 in default-route-openshift-image-registry.apps-crc.testing/openshift/vddk: manifest unknown'
      lastState: {}
      ready: false
      restartCount: 0
      image: 'default-route-openshift-image-registry.apps-crc.testing/openshift/vddk:8.0.3.14'
      imageID: ''
      started: false

So we use the existence of the 'waiting' state to indicate that the image cannot be pulled.

Resolves: https://issues.redhat.com/browse/MTV-1150

Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
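For context, the 'waiting state' check described in the commit message could look roughly like this; a minimal sketch using the k8s.io/api core/v1 types, with a hypothetical helper name and package (the actual logic is in pkg/controller/plan/validation.go):

```go
package plan

import core "k8s.io/api/core/v1"

// vddkImagePullStuck reports whether the vddk-validator pod is stuck waiting
// on its init container, i.e. the VDDK init image could not be pulled.
// (Hypothetical helper name; a sketch of the check described above.)
func vddkImagePullStuck(pod *core.Pod) (stuck bool, reason string) {
	for _, s := range pod.Status.InitContainerStatuses {
		if s.State.Waiting != nil {
			// Reasons such as ErrImagePull or ImagePullBackOff land here.
			return true, s.State.Waiting.Reason
		}
	}
	return false, ""
}
```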
sonarcloud bot commented on Jul 24, 2024

@jonner (Contributor, Author) commented on Jul 24, 2024

The new patch I just pushed does not yet address the issue I discussed in this comment: #969 (comment)

@ahadas (Member) commented on Jul 25, 2024

@jonner yeah, and there's an even more basic scenario that we don't handle so well:

  1. we try to validate URL A
  2. the job fails on pulling the image
  3. we sort out the issue with pulling the image from URL A
    -> the job needs to be recreated in order to validate A again

honestly, we were not really sure whether it made sense to add this job back then, because the only case we'd heard of involving the VDDK image was someone creating it on a filesystem that doesn't support symbolic links.. but apparently this job actually detects issues in the field, and in that case it makes less sense to expect users to clone the plan or archive-and-recreate it in order to initiate another job...

specifically, about what you wrote above ("since no new job is started, there are no pods to query for InitContainerStatuses"): wouldn't we still have the pod that ran before?

Further review threads on pkg/controller/plan/validation.go (resolved).
  pods := &core.PodList{}
  if err = ctx.Destination.Client.List(context.TODO(), pods, &client.ListOptions{
      Namespace:     plan.Spec.TargetNamespace,
      LabelSelector: k8slabels.SelectorFromSet(map[string]string{"job-name": job.Name}),
Review comment from a Member:

cool, now this call would work in all scenarios but I think it should be placed elsewhere since in many reconcile-cycles we'll find the job in JobComplete and won't need the information from the pod
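Something along these lines, perhaps; a sketch in the same context as the snippet above (assuming `batch` aliases k8s.io/api/batch/v1 and `job` is the validation job already fetched), skipping the pod lookup when the job has already completed:

```go
// Only list the pods while the job is not yet complete; on JobComplete the
// pod status adds nothing. (Sketch only, not the final placement.)
jobComplete := false
for _, cnd := range job.Status.Conditions {
	if cnd.Type == batch.JobComplete && cnd.Status == core.ConditionTrue {
		jobComplete = true
		break
	}
}
if !jobComplete {
	pods := &core.PodList{}
	err = ctx.Destination.Client.List(context.TODO(), pods, &client.ListOptions{
		Namespace:     plan.Spec.TargetNamespace,
		LabelSelector: k8slabels.SelectorFromSet(map[string]string{"job-name": job.Name}),
	})
	// ...inspect pods.Items[i].Status.InitContainerStatuses as before...
}
```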

codecov bot commented on Jul 25, 2024

Codecov Report

Attention: Patch coverage is 0% with 37 lines in your changes missing coverage. Please review.

Project coverage is 16.27%. Comparing base (611fee6) to head (a79d59f).
Report is 10 commits behind head on main.

| Files | Patch % | Lines |
|---|---|---|
| pkg/controller/plan/validation.go | 0.00% | 37 Missing ⚠️ |
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #969      +/-   ##
==========================================
- Coverage   16.31%   16.27%   -0.04%     
==========================================
  Files         103      103              
  Lines       19386    19423      +37     
==========================================
  Hits         3162     3162              
- Misses      15943    15980      +37     
  Partials      281      281              
| Flag | Coverage Δ |
|---|---|
| unittests | 16.27% <0.00%> (-0.04%) ⬇️ |

Flags with carried forward coverage won't be shown.


@jonner (Contributor, Author) commented on Jul 25, 2024

> @jonner yeah, and there's an even more basic scenario that we don't handle so well:
>
> 1. we try to validate URL `A`
> 2. the job fails on pulling the image
> 3. we sort out the issue with pulling the image from URL `A`
>    -> the job needs to be recreated in order to validate `A` again
>
> honestly, we were not really sure whether it made sense to add this job back then, because the only case we'd heard of involving the VDDK image was someone creating it on a filesystem that doesn't support symbolic links.. but apparently this job actually detects issues in the field, and in that case it makes less sense to expect users to clone the plan or archive-and-recreate it in order to initiate another job...

It's true that the more basic scenario has those issues, but at least it was consistent before: It failed with VDDKInvalid on the first run, and any subsequent changes didn't change that. But in the scenario I described, it would fail with VDDKImageNotFound on the first run and then any subsequent checks would report VDDKInvalid, which adds confusion, I think.

> specifically, about what you wrote above ("since no new job is started, there are no pods to query for InitContainerStatuses"): wouldn't we still have the pod that ran before?

Well, you know the code better than I do, but in my testing, the pod related to the job doesn't seem to be retained after the job completes. Specifically, after the job completes, the call to ctx.Destination.Client.List() returns no pods. Should the pod still be available somewhere?

@ahadas (Member) commented on Jul 29, 2024

> > @jonner yeah, and there's an even more basic scenario that we don't handle so well:
> >
> > 1. we try to validate URL `A`
> > 2. the job fails on pulling the image
> > 3. we sort out the issue with pulling the image from URL `A`
> >    -> the job needs to be recreated in order to validate `A` again
> >
> > honestly, we were not really sure whether it made sense to add this job back then, because the only case we'd heard of involving the VDDK image was someone creating it on a filesystem that doesn't support symbolic links.. but apparently this job actually detects issues in the field, and in that case it makes less sense to expect users to clone the plan or archive-and-recreate it in order to initiate another job...
>
> It's true that the more basic scenario has those issues, but at least it was consistent before: It failed with VDDKInvalid on the first run, and any subsequent changes didn't change that. But in the scenario I described, it would fail with VDDKImageNotFound on the first run and then any subsequent checks would report VDDKInvalid, which adds confusion, I think.

yeah, I agree

> > specifically, about what you wrote above ("since no new job is started, there are no pods to query for InitContainerStatuses"): wouldn't we still have the pod that ran before?

> Well, you know the code better than I do, but in my testing, the pod related to the job doesn't seem to be retained after the job completes. Specifically, after the job completes, the call to ctx.Destination.Client.List() returns no pods. Should the pod still be available somewhere?

unless I'm missing something, I read the changes in this PR as introducing logic that says "once the job completes, we check the status of the pod that was triggered for the job to figure out the reason for a failure", so it assumes the pod also exists when the job completes with a failure, right?
it could be that there's cleanup of the job's completed pods at some point (not by us but by k8s/OpenShift); I think I also saw it happening (a completed job without associated pods), but I was puzzled by the part that suggests it's always the case that completed jobs have no associated pods.

@jonner (Contributor, Author) commented on Jul 29, 2024

> unless I'm missing something, I read the changes in this PR as introducing logic that says "once the job completes, we check the status of the pod that was triggered for the job to figure out the reason for a failure", so it assumes the pod also exists when the job completes with a failure, right?

Not quite. What happens is that while the job is running, validateVddkImage() checks the vddk-validator-* pod associated with the job and sets the condition to VDDKImageNotReady if the pod is waiting.

At some point, the image will either be pulled (and the pod will start running), or the pod will time out waiting for the image. In the first scenario, the job should run to completion and either succeed or fail based on the validation result. In the second scenario, the job will fail.

Immediately after the job finishes (either successfully or due to timing out), the next call to validateVddkImage() will not find a pod associated with that job (in my testing, ctx.Destination.Client.List() returns no pods at this point). If the job has a JobFailed condition, the only indication of why the job failed is whether I set a VDDKImageNotReady status condition on the plan during a previous reconcile (this is the reason I made the VDDKImageNotReady condition durable: after the pod disappears, the next reconcile still has a hint about what might have caused the job to fail).

But the above description does not apply to a cached job. In my experience, there are no pods returned for a cached job, so there's no way to determine the failure reason in this case.

But maybe I am simply doing something wrong and there is a way to get pods for a completed job that I don't know about?
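To make the flow above concrete, here is a rough sketch of the reconcile logic being described (helper names like jobRunning/jobFailed and the condition constructors are hypothetical; it assumes the plan status exposes SetCondition/HasCondition as elsewhere in the controller):

```go
// Rough sketch of the flow described above (helper names are hypothetical;
// assumes the plan status exposes SetCondition/HasCondition).
switch {
case jobRunning(job):
	if stuck, _ := vddkImagePullStuck(pod); stuck {
		// Durable advisory condition: it survives into later reconciles.
		plan.Status.SetCondition(vddkImageNotReadyCondition())
	}
case jobFailed(job):
	if plan.Status.HasCondition(VDDKImageNotReady) {
		// The pod is usually gone by now; the durable condition set on a
		// previous reconcile is the hint that the image was never pulled.
		plan.Status.SetCondition(vddkImageNotFoundCondition())
	} else {
		plan.Status.SetCondition(vddkInvalidCondition())
	}
}
```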

@bkhizgiy (Member) commented

@jonner I've been investigating the issue with pod deletion, and it turns out that it happens only when the pod is not fully created. If the VDDK image is invalid (the pod is unable to pull the image) and the job reaches the VDDK_JOB_ACTIVE_DEADLINE limit, the job fails, and the pod is deleted. However, if the job is successful, the pod remains as it is. Adding additional configuration to the job spec won't assist in this case.

After discussing this with @ahadas, we believe the best approach is to remove the VDDK_JOB_ACTIVE_DEADLINE limit from the job and handle it at the plan level. We'll add a check in the plan controller for whether the job is still running and whether the elapsed time exceeds the VDDK_JOB_ACTIVE_DEADLINE parameter. If it does, we will propagate the pod status to the plan without failing the job and notify the user of the issue. This approach also addresses another issue: since the job remains running, if the user updates the VDDK URL to a correct one, the job should complete successfully under the same job and plan.

What do you think of this solution?
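If I read the proposal right, the plan-level check might look roughly like this sketch (the helper names are hypothetical; it assumes the job's StartTime is populated and that VDDK_JOB_ACTIVE_DEADLINE is available as a time.Duration):

```go
// Sketch of the proposed plan-level deadline handling (hypothetical names).
// The job no longer sets activeDeadlineSeconds, so it keeps retrying the
// image pull; the plan controller just reports on it once the deadline passes.
if jobStillRunning(job) && job.Status.StartTime != nil {
	elapsed := time.Since(job.Status.StartTime.Time)
	if elapsed > vddkJobActiveDeadline {
		// Surface the init-container status (e.g. ErrImagePull) on the plan,
		// but leave the job running so a later fix can still let it complete.
		propagatePodStatusToPlan(plan, job)
	}
}
```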

@jonner (Contributor, Author) commented on Jul 31, 2024

Interesting idea. There are some details of your proposed solution that aren't yet 100% clear in my mind, but I will investigate. It sounds promising.

@jonner (Contributor, Author) commented on Aug 7, 2024

> After discussing this with @ahadas, we believe the best approach is to remove the VDDK_JOB_ACTIVE_DEADLINE limit from the job and handle it at the plan level. We'll add a check in the plan controller for whether the job is still running and whether the elapsed time exceeds the VDDK_JOB_ACTIVE_DEADLINE parameter. If it does, we will propagate the pod status to the plan without failing the job and notify the user of the issue.

A validation job is labeled with both the UID of the plan and the md5sum of the VDDK init image URL. So with the above approach, the following statement is not actually true:

> This approach also addresses another issue: since the job remains running, if the user updates the VDDK URL to a correct one, the job should complete successfully under the same job and plan.

When the VDDK URL is updated on the source provider, the next reconcile will look up a Job associated with the given plan and the new VDDK URL, and will find none. So the old job will continue running (trying to pull the old init image), but we will ignore it. Instead, we'll create a new Job for the new plan/URL combination and run that.

So I think it'll require some slightly more complicated job management to handle this situation.
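For reference, the lookup in question is essentially a label-selector match on the plan and the md5 of the VDDK image URL, so a changed URL selects a different (possibly nonexistent) job. A rough sketch, assuming the same client/context variables as in the snippet earlier in the thread and that crypto/md5 and fmt are imported (the label keys here are assumptions, not the actual ones):

```go
// Sketch of the per-plan/per-URL job lookup described above (the label keys
// are illustrative; the real ones are defined in the plan controller).
sum := fmt.Sprintf("%x", md5.Sum([]byte(vddkImageURL)))
jobs := &batch.JobList{}
err := ctx.Destination.Client.List(context.TODO(), jobs, &client.ListOptions{
	Namespace: plan.Spec.TargetNamespace,
	LabelSelector: k8slabels.SelectorFromSet(map[string]string{
		"plan": string(plan.GetUID()),
		"vddk": sum,
	}),
})
// Changing the URL changes `sum`, so the old job (still running or cached)
// no longer matches, and a new job gets created for the new combination.
```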
