
Unable to start container after update **podman 3.1.2-1.fc34 -> 3.2.0-0.1.rc1.fc34** #10274

Closed
ma3yta opened this issue May 8, 2021 · 11 comments · Fixed by #10288
Assignees: Luap99
Labels: In Progress · locked - please file new issue/PR

Comments

ma3yta commented May 8, 2021

Describe the bug
Unable to start container after update podman 3.1.2-1.fc34 -> 3.2.0-0.1.rc1.fc34

[ma3yta@localhost ~]$ podman start --attach dotnet --log-level debug
INFO[0000] podman filtering at log level debug          
DEBU[0000] Called start.PersistentPreRunE(podman start --attach dotnet --log-level debug) 
DEBU[0000] Merged system config "/usr/share/containers/containers.conf" 
DEBU[0000] Using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /var/home/ma3yta/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /var/home/ma3yta/.local/share/containers/storage 
DEBU[0000] Using run root /run/user/1000/containers     
DEBU[0000] Using static dir /var/home/ma3yta/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp      
DEBU[0000] Using volume path /var/home/ma3yta/.local/share/containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] Not configuring container store              
DEBU[0000] Initializing event backend journald          
DEBU[0000] configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument 
DEBU[0000] configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument 
DEBU[0000] configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument 
DEBU[0000] Using OCI runtime "/usr/bin/crun"            
DEBU[0000] Default CNI network name podman is unchangeable 
INFO[0000] Setting parallel job count to 25             
INFO[0000] podman filtering at log level debug          
DEBU[0000] Called start.PersistentPreRunE(podman start --attach dotnet --log-level debug) 
DEBU[0000] overlay storage already configured with a mount-program 
DEBU[0000] Merged system config "/usr/share/containers/containers.conf" 
DEBU[0000] overlay storage already configured with a mount-program 
DEBU[0000] Using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /var/home/ma3yta/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /var/home/ma3yta/.local/share/containers/storage 
DEBU[0000] Using run root /run/user/1000/containers     
DEBU[0000] Using static dir /var/home/ma3yta/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp      
DEBU[0000] Using volume path /var/home/ma3yta/.local/share/containers/storage/volumes 
DEBU[0000] overlay storage already configured with a mount-program 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] overlay: mount_program=/usr/bin/fuse-overlayfs 
DEBU[0000] backingFs=btrfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false 
DEBU[0000] Initializing event backend journald          
DEBU[0000] configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument 
DEBU[0000] configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument 
DEBU[0000] configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument 
DEBU[0000] Using OCI runtime "/usr/bin/crun"            
DEBU[0000] Default CNI network name podman is unchangeable 
INFO[0000] Setting parallel job count to 25             
DEBU[0000] overlay: mount_data=,lowerdir=/var/home/ma3yta/.local/share/containers/storage/overlay/l/56QB4BQZ3IUKLISNMQW3Q4337D:/var/home/ma3yta/.local/share/containers/storage/overlay/l/YQU2LM4PUIJC5HZBXPALDHS5CB,upperdir=/var/home/ma3yta/.local/share/containers/storage/overlay/821be7b681919e10e491f72dd4cb2738860d609bfffa10ac88c652558a203070/diff,workdir=/var/home/ma3yta/.local/share/containers/storage/overlay/821be7b681919e10e491f72dd4cb2738860d609bfffa10ac88c652558a203070/work,context="system_u:object_r:container_file_t:s0:c460,c465" 
DEBU[0000] mounted container "b2a64907dbefb2a22240b94bc88bd0c92540895b7a392370efa0fc7dc601dceb" at "/var/home/ma3yta/.local/share/containers/storage/overlay/821be7b681919e10e491f72dd4cb2738860d609bfffa10ac88c652558a203070/merged" 
DEBU[0000] Created root filesystem for container b2a64907dbefb2a22240b94bc88bd0c92540895b7a392370efa0fc7dc601dceb at /var/home/ma3yta/.local/share/containers/storage/overlay/821be7b681919e10e491f72dd4cb2738860d609bfffa10ac88c652558a203070/merged 
DEBU[0000] Workdir "/" resolved to host path "/var/home/ma3yta/.local/share/containers/storage/overlay/821be7b681919e10e491f72dd4cb2738860d609bfffa10ac88c652558a203070/merged" 
DEBU[0000] Not modifying container b2a64907dbefb2a22240b94bc88bd0c92540895b7a392370efa0fc7dc601dceb /etc/passwd 
DEBU[0000] Not modifying container b2a64907dbefb2a22240b94bc88bd0c92540895b7a392370efa0fc7dc601dceb /etc/group 
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode subscription 
DEBU[0000] Setting CGroups for container b2a64907dbefb2a22240b94bc88bd0c92540895b7a392370efa0fc7dc601dceb to user.slice:libpod:b2a64907dbefb2a22240b94bc88bd0c92540895b7a392370efa0fc7dc601dceb 
DEBU[0000] set root propagation to "rslave"             
DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d 
DEBU[0000] Created OCI spec for container b2a64907dbefb2a22240b94bc88bd0c92540895b7a392370efa0fc7dc601dceb at /var/home/ma3yta/.local/share/containers/storage/overlay-containers/b2a64907dbefb2a22240b94bc88bd0c92540895b7a392370efa0fc7dc601dceb/userdata/config.json 
DEBU[0000] /usr/bin/conmon messages will be logged to syslog 
DEBU[0000] running conmon: /usr/bin/conmon               args="[--api-version 1 -c b2a64907dbefb2a22240b94bc88bd0c92540895b7a392370efa0fc7dc601dceb -u b2a64907dbefb2a22240b94bc88bd0c92540895b7a392370efa0fc7dc601dceb -r /usr/bin/crun -b /var/home/ma3yta/.local/share/containers/storage/overlay-containers/b2a64907dbefb2a22240b94bc88bd0c92540895b7a392370efa0fc7dc601dceb/userdata -p  -n dotnet --exit-dir /run/user/1000/libpod/tmp/exits --full-attach -s -l k8s-file:/var/home/ma3yta/.local/share/containers/storage/overlay-containers/b2a64907dbefb2a22240b94bc88bd0c92540895b7a392370efa0fc7dc601dceb/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /run/user/1000/containers/overlay-containers/b2a64907dbefb2a22240b94bc88bd0c92540895b7a392370efa0fc7dc601dceb/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/home/ma3yta/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg error --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mount_program=/usr/bin/fuse-overlayfs --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg b2a64907dbefb2a22240b94bc88bd0c92540895b7a392370efa0fc7dc601dceb]"
INFO[0000] Running conmon under slice user.slice and unitName libpod-conmon-b2a64907dbefb2a22240b94bc88bd0c92540895b7a392370efa0fc7dc601dceb.scope 
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied

DEBU[0000] Received: -1
DEBU[0000] Cleaning up container b2a64907dbefb2a22240b94bc88bd0c92540895b7a392370efa0fc7dc601dceb
DEBU[0000] Network is already cleaned up, skipping...
DEBU[0000] unmounted container "b2a64907dbefb2a22240b94bc88bd0c92540895b7a392370efa0fc7dc601dceb"
Error: unable to start container b2a64907dbefb2a22240b94bc88bd0c92540895b7a392370efa0fc7dc601dceb: opening file `` for writing: No such file or directory: OCI not found

Expected behaviour
Containers work fine after the update podman 3.1.2-1.fc34 -> 3.2.0-0.1.rc1.fc34

Actual behaviour

 podman start --attach dotnet --log-level debug
 Error: unable to start container b2a64907dbefb2a22240b94bc88bd0c92540895b7a392370efa0fc7dc601dceb: opening file `` for writing: No such file or directory: OCI not found

Creating a new container works fine, but containers created before the update fail to start.

Output of toolbox --version (v0.0.90+)
toolbox version 0.0.99.1

Toolbox package info (rpm -q toolbox)
toolbox-0.0.99.1-1.fc34.x86_64

Output of podman version

Version:      3.2.0-rc1
API Version:  3.2.0-rc1
Go Version:   go1.16.3
Built:        Thu May  6 04:03:46 2021
OS/Arch:      linux/amd64

Podman package info (rpm -q podman)
podman-3.2.0-0.1.rc1.fc34.x86_64

Info about your OS
Fedora Silverblue 34

/kind bug

@NomadicBits

As a quick workaround, dnf downgrade podman worked just fine for me.


mheon commented May 9, 2021 via email


ma3yta commented May 9, 2021

None of my containers created from the base image registry.fedoraproject.org/fedora-toolbox:34 work.
They were created with toolbox version 0.0.99.1 / podman version 3.1.2, using the command toolbox create container_1 on Fedora Silverblue 34.

@ma3yta ma3yta changed the title Unable to start container after update **podman 2:3.1.2-1.fc34 -> 2:3.2.0-0.1.rc1.fc34** Unable to start container after update **podman 3.1.2-1.fc34 -> 3.2.0-0.1.rc1.fc34** May 9, 2021

NomadicBits commented May 9, 2021

Playing around to make a reproducer, it looks like the upgrade breaks every container that already existed:

dnf downgrade podman
podman run -d --name test-haproxy -v /mnt/hdd/haproxy/config:/usr/local/etc/haproxy:ro,z haproxy:lts-alpine
podman stop test-haproxy
dnf upgrade podman
podman start test-haproxy

Error: unable to start container "2ef94cdf56df71acae6fa7c4614fe943e934869bab37894978b181044f8f9969": opening file `` for writing: No such file or directory: OCI not found

It looks like this should reproduce with any container that was created before the upgrade.


Blfrg commented May 10, 2021

Note: dnf isn't available on the host OS (Silverblue), only within the toolbox/podman containers.
The details below are specific to the Silverblue implementation (e.g. rpm-ostree).

I've confirmed the same issue on:

ostree://fedora:fedora/34/x86_64/silverblue
Version: 34.20210508.0 (2021-05-08T01:04:19Z)

podman version 3.2.0-rc1
toolbox version 0.0.99.1

Existing pods result in the error described above.
New pods can be created and started properly.


I rolled back the OS update with rpm-ostree rollback:
ostree://fedora:fedora/34/x86_64/silverblue
Version: 34.20210507.0 (2021-05-07T00:45:36Z)

podman version 3.1.2
toolbox version 0.0.99.1

The existing pods can be started again.
This is sufficient to work around the issue for now.

If you need to roll back further than the 'previous' deployment (i.e. you're currently on 34.20210509.0 or later),
the following will deploy the latest Silverblue version that still has podman 3.1.2 (conflict free):
rpm-ostree deploy 34.20210507.0


cheese commented May 10, 2021

@Luap99 Luap99 self-assigned this May 10, 2021
@Luap99 Luap99 added the In Progress label May 10, 2021
Luap99 pushed a commit to Luap99/libpod that referenced this issue May 10, 2021
Commit 728b73d introduced a regression. Containers created with a
previous version no longer start successfully. The problem is that
the PidFile in the container config is empty for those containers. If
the PidFile is empty we have to set it to the previous default.

[NO TESTS NEEDED] We should investigate why the system upgrade test did
not catch this.

Fixes containers#10274

Signed-off-by: Paul Holzinger <paul.holzinger@web.de>
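
For context, the fix amounts to a backward-compatibility fallback: when a container record loaded from the database has an empty PidFile, restore the old default path so conmon gets a valid -p argument (note the empty value after -p in the debug log above). Below is a minimal Go sketch of the idea; the struct, field names, and path layout are illustrative assumptions, not the actual libpod code (see #10288 for the real patch):

```go
// Hypothetical sketch of the fallback described in the commit message
// above; names and the path layout are assumptions, not libpod's code.
package main

import (
	"fmt"
	"path/filepath"
)

// ContainerConfig stands in for the persisted per-container config.
type ContainerConfig struct {
	ID      string
	PidFile string // empty for containers created before commit 728b73d
}

// ensurePidFile restores the pre-3.2.0 default PID-file location when a
// container loaded from the database has no PidFile set, so containers
// created by older podman versions can still start.
func ensurePidFile(cfg *ContainerConfig, runDir string) {
	if cfg.PidFile == "" {
		cfg.PidFile = filepath.Join(runDir, "pidfile")
	}
}

func main() {
	cfg := &ContainerConfig{ID: "b2a64907dbef"} // truncated ID, illustrative only
	ensurePidFile(cfg, "/run/user/1000/containers/overlay-containers/b2a64907dbef/userdata")
	fmt.Println(cfg.PidFile)
}
```

With a fixed build, starting a container created under podman 3.1.2 should again write the PID file to a real path instead of failing with "opening file `` for writing".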

Luap99 commented May 10, 2021

PR with the fix: #10288

jlebon added a commit to jlebon/fedora-coreos-config that referenced this issue May 10, 2021
There is a regression in the latest 3.2.0 RC in stable:
- containers/podman#10274
- containers/podman#10288

No Bodhi updates yet with the fix.

Also hit that bug myself locally on FSB.
dustymabe pushed a commit to coreos/fedora-coreos-config that referenced this issue May 11, 2021
There is a regression in the latest 3.2.0 RC in stable:
- containers/podman#10274
- containers/podman#10288

No Bodhi updates yet with the fix.

Also hit that bug myself locally on FSB.
jlebon added a commit to jlebon/fedora-coreos-config that referenced this issue May 12, 2021
There is a regression in the latest 3.2.0 RC in stable:
- containers/podman#10274
- containers/podman#10288

No Bodhi updates yet with the fix.

Also hit that bug myself locally on FSB.

(cherry picked from commit b72844c)

anthr76 commented Jun 11, 2021

Looks like this regression might be back?

podman version 3.2.0 can't start my toolbox anymore; rolling back now...

Indeed, the exact same behavior mentioned previously; rolled back to podman version 3.1.2.


Blfrg commented Jun 13, 2021

I just tested with the latest Silverblue available today:

ostree://fedora:fedora/34/x86_64/silverblue
Version: 34.20210613.0 (2021-06-13T01:31:09Z)

Which installs

[user@fedora ~]$ rpmquery podman
podman-3.2.0-5.fc34.x86_64

I am not experiencing any issue with my existing container(s):

[user@fedora ~]$ toolbox enter
⬢[user@toolbox ~]$ toolbox --version
toolbox version 0.0.99.1

If there was any regression a couple of days ago, maybe it has been resolved already.


anthr76 commented Jun 14, 2021

Have you tried the reproducer above? Usually you needed to specify a custom toolbox image for it to break.


Blfrg commented Jun 14, 2021

@anthr76 I followed the same steps as the previous time I experienced this issue:

  • Update to the current Silverblue
  • Try toolbox enter (default or named container)
  • Experience the error

Per your request I have tested with an additional existing [named] container:

[user@fedora ~]$ rpm-ostree status
State: idle
Deployments:
● ostree://fedora:fedora/34/x86_64/silverblue
                   Version: 34.20210614.0 (2021-06-14T00:44:31Z)
                    Commit: 4ca77cb02d285dc852964434424229a448ad1bf4c802e62c2985091a2c057228
              GPGSignature: Valid signature by 8C5BA6990BDB26E19F2A1A801161AE6945719A39

  ostree://fedora:fedora/34/x86_64/silverblue
                   Version: 34.20210613.0 (2021-06-13T01:31:09Z)
                    Commit: f436b1546125ddc928a74d513d8f515a1ba2bae13fc390080f0bc22a55dbca3d
              GPGSignature: Valid signature by 8C5BA6990BDB26E19F2A1A801161AE6945719A39
[user@fedora ~]$ podman --version
podman version 3.2.0
[user@fedora ~]$ rpmquery podman
podman-3.2.0-5.fc34.x86_64
[user@fedora ~]$ toolbox list
IMAGE ID      IMAGE NAME                                    CREATED
9c649cf455d4  registry.fedoraproject.org/fedora-toolbox:34  7 weeks ago

CONTAINER ID  CONTAINER NAME     CREATED      STATUS  IMAGE NAME
f1f75eba1ab0  fedora-toolbox-34  6 weeks ago  exited  registry.fedoraproject.org/fedora-toolbox:34
9b0f5789a9d5  test               6 weeks ago  exited  registry.fedoraproject.org/fedora-toolbox:34
8ad679c3cf30  test2              5 weeks ago  exited  registry.fedoraproject.org/fedora-toolbox:34
[user@fedora ~]$ toolbox -c test enter
⬢[user@toolbox ~]$ toolbox --version
toolbox version 0.0.99.1

I did not experience an issue with the named container.

Though do note, it's a new day and a new version of Silverblue (34.20210614.0).
It's possible you had the issue on a version a few days older than the one I'm testing on.
Please also be sure you haven't used rpm-ostree override replace to install the problematic version.

Please run the set of commands above and confirm your:

  • Silverblue version: rpm-ostree status
  • [full] podman version: rpmquery podman
  • and error output if needed

@github-actions github-actions bot added the locked - please file new issue/PR label Sep 21, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 21, 2023