Describe the bug
Kubectl doesn't make it possible to delete a Job and have the delete cascade to the underlying pods. By default the delete appears to orphan the pods, as described here: https://kubernetes.io/docs/concepts/workloads/controllers/job/#ttl-mechanism-for-finished-jobs.
Client Version
20.0.1

Kubernetes Version
1.29

Java Version
Java 21
To Reproduce
Create a Job and then delete it using Kubectl:

Kubectl.delete(V1Job.class)
    .apiClient(client)
    .name(jobId)
    .namespace(namespace)
    .ignoreNotFound(true)
    .execute();
Expected behavior
It should be possible to pass in .deleteOptions() and cascade the delete or use foreground propagation.
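For example, something along these lines (hypothetical: Kubectl.delete does not currently expose a .deleteOptions() builder method, this is the shape of the API the issue is asking for):

// Hypothetical usage -- .deleteOptions(...) is the proposed addition, it does not exist today.
V1DeleteOptions options = new V1DeleteOptions().propagationPolicy("Foreground");
Kubectl.delete(V1Job.class)
    .apiClient(client)
    .name(jobId)
    .namespace(namespace)
    .deleteOptions(options)   // proposed
    .ignoreNotFound(true)
    .execute();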
Additional context
As a workaround we can use the batchApi:

V1Status v1Status = batchApi.deleteNamespacedJob(jobId, namespace)
    .propagationPolicy("Foreground")
    .execute();
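For reference, a minimal self-contained sketch of that workaround, assuming the request-builder style API from client 20.x; the job name, namespace, and client setup are placeholders:

import io.kubernetes.client.openapi.ApiClient;
import io.kubernetes.client.openapi.ApiException;
import io.kubernetes.client.openapi.apis.BatchV1Api;
import io.kubernetes.client.openapi.models.V1Status;
import io.kubernetes.client.util.Config;

public class DeleteJobWithCascade {
    public static void main(String[] args) throws Exception {
        // Load kubeconfig from the default location (~/.kube/config).
        ApiClient client = Config.defaultClient();
        BatchV1Api batchApi = new BatchV1Api(client);

        String jobId = "example-job";   // placeholder job name
        String namespace = "default";   // placeholder namespace

        try {
            // "Foreground" propagation deletes the Job's pods before the Job object itself is removed.
            V1Status status = batchApi.deleteNamespacedJob(jobId, namespace)
                    .propagationPolicy("Foreground")
                    .execute();
            System.out.println("Delete status: " + status);
        } catch (ApiException e) {
            // Treating 404 as success mirrors ignoreNotFound(true) in the Kubectl helper.
            if (e.getCode() != 404) {
                throw e;
            }
        }
    }
}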
Feel free to send a PR to support this.