
Unable to use create_namespaced_deployment_rollback in Kubernetes 1.16+ #1232

Closed
bjaworski3 opened this issue Aug 12, 2020 · 16 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@bjaworski3

bjaworski3 commented Aug 12, 2020

What happened (please include outputs or screenshots):
Got a 404 error from the Kubernetes API because both ExtensionsV1beta1Api and AppsV1beta1Api are deprecated in Kubernetes 1.16.

The recently released 12.0.0a1 is supposed to support Kubernetes 1.16, but it still lacks a rollback method on the non-deprecated AppsV1Api.

https://github.com/kubernetes-client/python/blob/v12.0.0a1/kubernetes/README.md

File "/usr/lib/python3.8/site-packages/kubernetes/client/apis/apps_v1beta1_api.py", line 291, in create_namespaced_deployment_rollback
    (data) = self.create_namespaced_deployment_rollback_with_http_info(name, namespace, body, **kwargs)
  File "/usr/lib/python3.8/site-packages/kubernetes/client/apis/apps_v1beta1_api.py", line 375, in create_namespaced_deployment_rollback_with_http_info
    return self.api_client.call_api('/apis/apps/v1beta1/namespaces/{namespace}/deployments/{name}/rollback', 'POST',
  File "/usr/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 330, in call_api
    return self.__call_api(resource_path, method,
  File "/usr/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 163, in __call_api
    response_data = self.request(method, url,
  File "/usr/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 371, in request
    return self.rest_client.POST(url,
  File "/usr/lib/python3.8/site-packages/kubernetes/client/rest.py", line 260, in POST
    return self.request("POST", url,
  File "/usr/lib/python3.8/site-packages/kubernetes/client/rest.py", line 222, in request
    raise ApiException(http_resp=r)
kubernetes.client.rest.ApiException: (404)
Reason: Not Found

What you expected to happen:
There should be a create_namespaced_deployment_rollback method in the AppsV1Api class so that rollbacks go through the non-deprecated API.

How to reproduce it (as minimally and precisely as possible):
Attempt to roll back a deployment with either API against a cluster running 1.16:

api_response = client.AppsV1beta1Api().create_namespaced_deployment_rollback(
  name=deployment_name,
  namespace=namespace,
  body=body,
  _preload_content=False)

OR

api_response = client.ExtensionsV1beta1Api().create_namespaced_deployment_rollback(
  name=deployment_name,
  namespace=namespace,
  body=body,
  _preload_content=False)

Environment:

  • Kubernetes version (kubectl version): 1.16.11
  • Python version (python --version) 3.8.5
  • Python client version (pip list | grep kubernetes) 12.0.0a1
@bjaworski3 bjaworski3 added the kind/bug Categorizes issue or PR as related to a bug. label Aug 12, 2020
@roycaihw
Member

There should be a create_namespaced_deployment_rollback in the AppsV1Api Class

There is no deployment rollback endpoint in apps/v1 API: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#deployment-v1-apps.
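Since the rollback subresource is gone from apps/v1, kubectl rollout undo does the work client-side: it locates the ReplicaSet holding the previous revision (via the deployment.kubernetes.io/revision annotation) and patches the Deployment's pod template back to it. A minimal sketch of just the revision-selection step, where pick_rollback_revision and the sample values are illustrative, not part of any library API:

```python
# Sketch of the revision-selection step that kubectl performs client-side,
# using plain annotation values rather than a live cluster (hypothetical data).

def pick_rollback_revision(revisions):
    """Return the revision to roll back to: the second-newest when more
    than one exists, otherwise the only one available."""
    ordered = sorted(revisions, key=int, reverse=True)  # numeric sort, not lexicographic
    return ordered[1] if len(ordered) > 1 else ordered[0]

print(pick_rollback_revision(['7', '10', '9']))  # → 9
```

The int() cast in the sort key matters: the annotation values are strings, and a lexicographic sort would misorder '10' relative to '7'.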

@pigletfly

@roycaihw should we drop support for ExtensionsV1beta1Api and AppsV1beta1Api in 12.0 release? I'd like to contribute to this.

@roycaihw
Member

@pigletfly 12.0 release corresponds to Kubernetes 1.16, which still has those API groups.

@tarunwadhwa13

Any update on this? Rollback is an important feature, and since release 12 is still in beta, I believe these changes could still be incorporated.

@AlexIoannides

An interim solution that may help others who stumble upon this issue:

from kubernetes import client as k8s, config as k8s_config

k8s_config.load_kube_config()


def rollback_deployment(deployment: k8s.V1Deployment) -> None:
    """Rollback a deployment to its previous version.

    :param deployment: A configured deployment object.
    """
    name = deployment.metadata.name
    namespace = deployment.metadata.namespace

    associated_replica_sets = k8s.AppsV1Api().list_namespaced_replica_set(
        namespace=namespace,
        label_selector=f'app={deployment.spec.template.metadata.labels["app"]}'
    )

    revision_ordered_replica_sets = sorted(
        associated_replica_sets.items,
        key=lambda e: e.metadata.annotations['deployment.kubernetes.io/revision'],
        reverse=True
    )

    rollback_replica_set = (
        revision_ordered_replica_sets[0]
        if len(revision_ordered_replica_sets) == 1
        else revision_ordered_replica_sets[1]
    )

    rollback_revision_number = (
        rollback_replica_set
        .metadata
        .annotations['deployment.kubernetes.io/revision']
    )

    patch = [
        {
            'op': 'replace',
            'path': '/spec/template',
            'value': rollback_replica_set.spec.template
        },
        {
            'op': 'replace',
            'path': '/metadata/annotations',
            'value': {
                'deployment.kubernetes.io/revision': rollback_revision_number,
                **deployment.metadata.annotations
            }
        }
    ]

    k8s.AppsV1Api().patch_namespaced_deployment(
        body=patch,
        name=name,
        namespace=namespace
    )

Basically, I've reverse-engineered what kubectl rollout undo ... appears to be doing.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 4, 2021
@NyxCampbell

Hello, I am currently running into this issue. Does anyone know if it was resolved, or should I use the interim solution?

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 21, 2021
@ghost

ghost commented Apr 12, 2021

It is still an issue.

@ghost

ghost commented Apr 12, 2021

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Apr 12, 2021
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 11, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 10, 2021
@emiletab

emiletab commented Aug 18, 2021


@AlexIoannides I would recommend casting metadata.annotations['deployment.kubernetes.io/revision'] to an integer in the sorted key, because as soon as you have more than 9 ReplicaSets you will get inaccurate results: you will be comparing the strings '10' and '7'.
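To make the pitfall concrete, here is a small sketch with plain strings standing in for the annotation values:

```python
revisions = ['7', '9', '10']

# Lexicographic sort: '7' sorts above '10' because '7' > '1'.
print(sorted(revisions, reverse=True))           # ['9', '7', '10']

# Casting to int in the key restores the intended numeric ordering.
print(sorted(revisions, key=int, reverse=True))  # ['10', '9', '7']
```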

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@Mubangizi

Any update on this issue?

This issue was closed.