
How to override installation of a software package. #7968

Closed
vishumindtree opened this issue Sep 14, 2021 · 9 comments
Labels
kind/support Categorizes issue or PR as a support question. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@vishumindtree

How can we override the software package versions via extra vars, to avoid the version conflict below?

"msg": "'/usr/bin/apt-get -y -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold" install 'containerd.io=1.4.4-1' 'docker-ce-cli=5:19.03.153-0ubuntu-bionic' 'docker-ce=5:19.03.153-0ubuntu-bionic'' failed: E: Packages were downgraded and -y was used without --allow-downgrades.\n",
"rc": 100,
"stderr": "E: Packages were downgraded and -y was used without --allow-downgrades.\n",
"stderr_lines": [
"E: Packages were downgraded and -y was used without --allow-downgrades."
],
"stdout": "Reading package lists...\nBuilding dependency tree...\nReading state information...\nThe following package was automatically installed and is no longer required:\n docker-scan-plugin\nUse 'sudo apt autoremove' to remove it.\nThe following packages will be upgraded:\n docker-ce docker-ce-cli\nThe following packages will be DOWNGRADED:\n containerd.io\n2 upgraded, 0 newly installed, 1 downgraded, 0 to remove and 6 not upgraded.\n",
"stdout_lines": [
"Reading package lists...",
"Building dependency tree...",
"Reading state information...",
"The following package was automatically installed and is no longer required:",
" docker-scan-plugin",
"Use 'sudo apt autoremove' to remove it.",
"The following packages will be upgraded:",
" docker-ce docker-ce-cli",
"The following packages will be DOWNGRADED:",
" containerd.io",
"2 upgraded, 0 newly installed, 1 downgraded, 0 to remove and 6 not upgraded."
]
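The root cause here is apt refusing a downgrade: containerd.io is pinned to 1.4.4-1, which is older than the version already installed, and apt-get was invoked with -y but without --allow-downgrades. For reference only (this is not Kubespray's own task), a minimal sketch of a standalone Ansible task that permits the downgrade, assuming a recent ansible-core where ansible.builtin.apt supports allow_downgrade:

```yaml
# Illustrative sketch, not a Kubespray task.
- name: Install a pinned containerd.io older than the installed version
  ansible.builtin.apt:
    name: containerd.io=1.4.4-1
    state: present
    allow_downgrade: true   # passes --allow-downgrades to apt-get
```

With Kubespray itself, the cleaner fix is to pin versions via its variables rather than fight apt, as the next comment describes.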

@vishumindtree vishumindtree added the kind/support Categorizes issue or PR as a support question. label Sep 14, 2021
@oomichi
Copy link
Contributor

oomichi commented Sep 27, 2021

You can specify the versions of containerd and docker-ce with the containerd_version and docker_version variables in your inventory, like

containerd_version: 1.4.9
docker_version: '20.10'

by adding the above lines into inventory/<my_cluster>/group_vars/k8s_cluster/k8s-cluster.yml for example.
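As a sketch, such a group_vars file could look like the following (container_manager is shown as an assumption about the chosen runtime; the version values must match keys that your Kubespray release actually defines):

```yaml
# inventory/<my_cluster>/group_vars/k8s_cluster/k8s-cluster.yml
container_manager: docker   # assumption: docker is the selected runtime
containerd_version: 1.4.9
docker_version: '20.10'     # quoted so YAML keeps it a string, not a float
```

Quoting matters here: Kubespray looks these values up as string keys, so an unquoted 20.10 parsed as a number can fail the lookup.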

@vishumindtree
Author

I will check and let you know if it works.

@vishumindtree
Author

After making the above changes, I get the error below:

FAILED! => {
    "msg": "The conditional check 'docker_package_info.pkgs|length > 0' failed. The error was: error while evaluating conditional (docker_package_info.pkgs|length > 0): {'pkgs': ['{{ containerd_versioned_pkg[containerd_version | string] }}', '{{ docker_cli_versioned_pkg[docker_cli_version | string] }}', '{{ docker_versioned_pkg[docker_version | string] }}']}: 'dict object' has no attribute '1.4.9'\n\nThe error appears to be in '/home/jenkins/agent/workspace/blossom/createCluster/kubespray/roles/container-engine/docker/tasks/main.yml': line 105, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: ensure docker packages are installed\n ^ here\n"
}
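The "'dict object' has no attribute '1.4.9'" part means the version you set is not a key in the containerd_versioned_pkg map shipped with that Kubespray release. As an illustrative sketch only (not the actual map), the lookup pattern is roughly:

```yaml
# Illustrative sketch of the failing lookup, not Kubespray's real map.
containerd_versioned_pkg:
  '1.4.4': containerd.io=1.4.4-1
  '1.4.6': containerd.io=1.4.6-1
  # No '1.4.9' key here, so the template
  #   "{{ containerd_versioned_pkg[containerd_version | string] }}"
  # fails with: 'dict object' has no attribute '1.4.9'
```

In other words, you can only pin versions that the checked-out Kubespray release already knows about, which is why upgrading Kubespray resolves this, as the next comment notes.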

@oomichi
Contributor

oomichi commented Dec 7, 2021

@vishumindtree containerd v1.4.9 support was added in #7970, after your attempt.
Please try the latest Kubespray if possible.

@vishumindtree
Author

@oomichi It seems containerd v1.4.9 is hardcoded.
Anyway, when we are already installing docker-ce, why do we need to install containerd separately?
containerd gets installed as part of the docker-ce installation itself.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 23, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 22, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
