Network using podman in podman #22791
Can you add --log-level debug output?
Sure, logs:
[root@fb041043307f /]# podman run -it --net a --log-level debug alpine hostname
INFO[0000] podman filtering at log level debug
DEBU[0000] Called run.PersistentPreRunE(podman run -it --net a --log-level debug alpine hostname)
DEBU[0000] Using conmon: "/usr/bin/conmon"
INFO[0000] Using sqlite as database backend
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /var/lib/containers/storage
DEBU[0000] Using run root /run/containers/storage
DEBU[0000] Using static dir /var/lib/containers/storage/libpod
DEBU[0000] Using tmp dir /run/libpod
DEBU[0000] Using volume path /var/lib/containers/storage/volumes
DEBU[0000] Using transient store: false
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] overlay: imagestore=/var/lib/shared
DEBU[0000] overlay: imagestore=/usr/lib/containers/storage
DEBU[0000] overlay: mount_program=/usr/bin/fuse-overlayfs
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false
DEBU[0000] Initializing event backend file
DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument
DEBU[0000] Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument
DEBU[0000] Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument
DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument
DEBU[0000] Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument
DEBU[0000] Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument
DEBU[0000] Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument
DEBU[0000] Using OCI runtime "/usr/bin/crun"
INFO[0000] Setting parallel job count to 49
DEBU[0000] Pulling image alpine (policy: missing)
DEBU[0000] Looking up image "alpine" in local containers storage
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux [] }
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf"
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/000-shortnames.conf"
DEBU[0000] Trying "docker.io/library/alpine:latest" ...
DEBU[0000] reference "[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.imagestore=/var/lib/shared,overlay.imagestore=/usr/lib/containers/storage,overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev,fsync=0]docker.io/library/alpine:latest" does not resolve to an image ID
DEBU[0000] Trying "localhost/alpine:latest" ...
DEBU[0000] reference "[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.imagestore=/var/lib/shared,overlay.imagestore=/usr/lib/containers/storage,overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev,fsync=0]localhost/alpine:latest" does not resolve to an image ID
DEBU[0000] Trying "registry.fedoraproject.org/alpine:latest" ...
DEBU[0000] reference "[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.imagestore=/var/lib/shared,overlay.imagestore=/usr/lib/containers/storage,overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev,fsync=0]registry.fedoraproject.org/alpine:latest" does not resolve to an image ID
DEBU[0000] Trying "registry.access.redhat.com/alpine:latest" ...
DEBU[0000] reference "[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.imagestore=/var/lib/shared,overlay.imagestore=/usr/lib/containers/storage,overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev,fsync=0]registry.access.redhat.com/alpine:latest" does not resolve to an image ID
DEBU[0000] Trying "docker.io/library/alpine:latest" ...
DEBU[0000] reference "[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.imagestore=/var/lib/shared,overlay.imagestore=/usr/lib/containers/storage,overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev,fsync=0]docker.io/library/alpine:latest" does not resolve to an image ID
DEBU[0000] Trying "quay.io/alpine:latest" ...
DEBU[0000] reference "[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.imagestore=/var/lib/shared,overlay.imagestore=/usr/lib/containers/storage,overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev,fsync=0]quay.io/alpine:latest" does not resolve to an image ID
DEBU[0000] Trying "docker.io/library/alpine:latest" ...
DEBU[0000] reference "[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.imagestore=/var/lib/shared,overlay.imagestore=/usr/lib/containers/storage,overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev,fsync=0]docker.io/library/alpine:latest" does not resolve to an image ID
DEBU[0000] Trying "alpine" ...
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux [] }
DEBU[0000] Attempting to pull candidate docker.io/library/alpine:latest for alpine
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.imagestore=/var/lib/shared,overlay.imagestore=/usr/lib/containers/storage,overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev,fsync=0]docker.io/library/alpine:latest"
DEBU[0000] Resolved "alpine" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
Resolved "alpine" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
Trying to pull docker.io/library/alpine:latest...
DEBU[0000] Copying source image //alpine:latest to destination image [overlay@/var/lib/containers/storage+/run/containers/storage:overlay.imagestore=/var/lib/shared,overlay.imagestore=/usr/lib/containers/storage,overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev,fsync=0]docker.io/library/alpine:latest
DEBU[0000] Using registries.d directory /etc/containers/registries.d
DEBU[0000] Trying to access "docker.io/library/alpine:latest"
DEBU[0000] No credentials matching docker.io/library/alpine found in /run/containers/0/auth.json
DEBU[0000] No credentials matching docker.io/library/alpine found in /root/.config/containers/auth.json
DEBU[0000] No credentials matching docker.io/library/alpine found in /root/.docker/config.json
DEBU[0000] No credentials matching docker.io/library/alpine found in /root/.dockercfg
DEBU[0000] No credentials for docker.io/library/alpine found
DEBU[0000] No signature storage configuration found for docker.io/library/alpine:latest, using built-in default file:///var/lib/containers/sigstore
DEBU[0000] Looking for TLS certificates and private keys in /etc/docker/certs.d/docker.io
DEBU[0000] GET https://registry-1.docker.io/v2/
DEBU[0000] Ping https://registry-1.docker.io/v2/ status 401
DEBU[0000] GET https://auth.docker.io/token?scope=repository%3Alibrary%2Falpine%3Apull&service=registry.docker.io
DEBU[0001] GET https://registry-1.docker.io/v2/library/alpine/manifests/latest
DEBU[0001] Content-Type from manifest GET is "application/vnd.docker.distribution.manifest.list.v2+json"
DEBU[0001] Using SQLite blob info cache at /var/lib/containers/cache/blob-info-cache-v1.sqlite
DEBU[0001] Source is a manifest list; copying (only) instance sha256:216266c86fc4dcef5619930bd394245824c2af52fd21ba7c6fa0e618657d4c3b for current system
DEBU[0001] GET https://registry-1.docker.io/v2/library/alpine/manifests/sha256:216266c86fc4dcef5619930bd394245824c2af52fd21ba7c6fa0e618657d4c3b
DEBU[0001] Content-Type from manifest GET is "application/vnd.docker.distribution.manifest.v2+json"
DEBU[0001] IsRunningImageAllowed for image docker:docker.io/library/alpine:latest
DEBU[0001] Using default policy section
DEBU[0001] Requirement 0: allowed
DEBU[0001] Overall: allowed
DEBU[0001] Downloading /v2/library/alpine/blobs/sha256:1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1
DEBU[0001] GET https://registry-1.docker.io/v2/library/alpine/blobs/sha256:1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1
Getting image source signatures
DEBU[0001] Reading /var/lib/containers/sigstore/library/alpine@sha256=216266c86fc4dcef5619930bd394245824c2af52fd21ba7c6fa0e618657d4c3b/signature-1
DEBU[0001] Not looking for sigstore attachments: disabled by configuration
DEBU[0001] Manifest has MIME type application/vnd.docker.distribution.manifest.v2+json, ordered candidate list [application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.v1+prettyjws, application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v1+json]
DEBU[0001] ... will first try using the original manifest unmodified
DEBU[0001] Checking if we can reuse blob sha256:d25f557d7f31bf7acfac935859b5153da41d13c41f2b468d16f729a5b883634f: general substitution = true, compression for MIME type "application/vnd.docker.image.rootfs.diff.tar.gzip" = true
DEBU[0001] Failed to retrieve partial blob: convert_images not configured
DEBU[0001] Downloading /v2/library/alpine/blobs/sha256:d25f557d7f31bf7acfac935859b5153da41d13c41f2b468d16f729a5b883634f
DEBU[0001] GET https://registry-1.docker.io/v2/library/alpine/blobs/sha256:d25f557d7f31bf7acfac935859b5153da41d13c41f2b468d16f729a5b883634f
Copying blob d25f557d7f31 [--------------------------------------] 0.0b / 3.5MiB (skipped: 0.0b = 0.00%)
Copying blob d25f557d7f31 [--------------------------------------] 0.0b / 3.5MiB | 0.0 b/s
Copying blob d25f557d7f31 done |
Copying blob d25f557d7f31 done |
DEBU[0002] No compression detected
DEBU[0002] Compression change for blob sha256:1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1 ("application/vnd.docker.container.image.v1+json") not supported
DEBU[0002] Using original blob without modification
Copying config 1d34ffeaf1 done |
Writing manifest to image destination
DEBU[0002] setting image creation date to 2024-05-22 18:18:12.052034407 +0000 UTC
DEBU[0002] created new image ID "1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1" with metadata "{}"
DEBU[0002] added name "docker.io/library/alpine:latest" to image "1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1"
DEBU[0002] Pulled candidate docker.io/library/alpine:latest successfully
DEBU[0002] Looking up image "1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1" in local containers storage
DEBU[0002] Trying "1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1" ...
DEBU[0002] parsed reference into "[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.imagestore=/var/lib/shared,overlay.imagestore=/usr/lib/containers/storage,overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev,fsync=0]@1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1"
DEBU[0002] Found image "1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1" as "1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1" in local containers storage
DEBU[0002] Found image "1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1" as "1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.imagestore=/var/lib/shared,overlay.imagestore=/usr/lib/containers/storage,overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev,fsync=0]@1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1)
DEBU[0002] exporting opaque data as blob "sha256:1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1"
DEBU[0002] Looking up image "alpine" in local containers storage
DEBU[0002] Normalized platform linux/amd64 to {amd64 linux [] }
DEBU[0002] Trying "docker.io/library/alpine:latest" ...
DEBU[0002] parsed reference into "[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.imagestore=/var/lib/shared,overlay.imagestore=/usr/lib/containers/storage,overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev,fsync=0]@1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1"
DEBU[0002] Found image "alpine" as "docker.io/library/alpine:latest" in local containers storage
DEBU[0002] Found image "alpine" as "docker.io/library/alpine:latest" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.imagestore=/var/lib/shared,overlay.imagestore=/usr/lib/containers/storage,overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mountopt=nodev,fsync=0]@1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1)
DEBU[0002] exporting opaque data as blob "sha256:1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1"
DEBU[0002] Inspecting image 1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1
DEBU[0002] exporting opaque data as blob "sha256:1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1"
DEBU[0002] exporting opaque data as blob "sha256:1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1"
DEBU[0002] Inspecting image 1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1
DEBU[0002] Inspecting image 1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1
DEBU[0002] Inspecting image 1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1
DEBU[0002] using systemd mode: false
DEBU[0002] Loading seccomp profile from "/usr/share/containers/seccomp.json"
DEBU[0002] Successfully loaded network a: &{a 9ccb0856370ddfdea57b7bb9477701ff419bd56f62fafb0d00b516a440ee2714 bridge podman1 2024-05-23 15:38:22.45350875 +0000 UTC [{{{10.89.0.0 ffffff00}} 10.89.0.1 <nil>}] [] false false true [] map[] map[] map[driver:host-local]}
DEBU[0002] Successfully loaded 2 networks
DEBU[0002] Allocated lock 0 for container b16be2b430aafd4b33d0235a4290a84afc22ac54dea4e8fcf3ca593e64766575
DEBU[0002] exporting opaque data as blob "sha256:1d34ffeaf190be23d3de5a8de0a436676b758f48f835c3a2d4768b798c15a7f1"
DEBU[0002] Created container "b16be2b430aafd4b33d0235a4290a84afc22ac54dea4e8fcf3ca593e64766575"
DEBU[0002] Container "b16be2b430aafd4b33d0235a4290a84afc22ac54dea4e8fcf3ca593e64766575" has work directory "/var/lib/containers/storage/overlay-containers/b16be2b430aafd4b33d0235a4290a84afc22ac54dea4e8fcf3ca593e64766575/userdata"
DEBU[0002] Container "b16be2b430aafd4b33d0235a4290a84afc22ac54dea4e8fcf3ca593e64766575" has run directory "/run/containers/storage/overlay-containers/b16be2b430aafd4b33d0235a4290a84afc22ac54dea4e8fcf3ca593e64766575/userdata"
DEBU[0002] Handling terminal attach
INFO[0002] Received shutdown.Stop(), terminating! PID=29
DEBU[0002] Enabling signal proxying
DEBU[0002] overlay: mount_data=lowerdir=/var/lib/containers/storage/overlay/l/K7SXMMZESE6ZSY262VMYIDLKNL,upperdir=/var/lib/containers/storage/overlay/acc59c8a563e425e0d9fc5f1cf55220855ca5cc34728ae73bd4810cab5b2d01c/diff,workdir=/var/lib/containers/storage/overlay/acc59c8a563e425e0d9fc5f1cf55220855ca5cc34728ae73bd4810cab5b2d01c/work,nodev,fsync=0
DEBU[0002] Made network namespace at /run/user/0/netns/netns-b8d9b493-1423-df26-84d6-9f7d8c601986 for container b16be2b430aafd4b33d0235a4290a84afc22ac54dea4e8fcf3ca593e64766575
DEBU[0002] Mounted container "b16be2b430aafd4b33d0235a4290a84afc22ac54dea4e8fcf3ca593e64766575" at "/var/lib/containers/storage/overlay/acc59c8a563e425e0d9fc5f1cf55220855ca5cc34728ae73bd4810cab5b2d01c/merged"
DEBU[0002] Created root filesystem for container b16be2b430aafd4b33d0235a4290a84afc22ac54dea4e8fcf3ca593e64766575 at /var/lib/containers/storage/overlay/acc59c8a563e425e0d9fc5f1cf55220855ca5cc34728ae73bd4810cab5b2d01c/merged
DEBU[0002] Creating rootless network namespace at "/run/containers/storage/networks/rootless-netns/rootless-netns"
DEBU[0002] pasta arguments: --config-net --pid /run/containers/storage/networks/rootless-netns/rootless-netns-conn.pid --dns-forward 169.254.0.1 -t none -u none -T none -U none --no-map-gw --quiet --netns /run/containers/storage/networks/rootless-netns/rootless-netns
DEBU[0002] The path of /etc/resolv.conf in the mount ns is "/etc/resolv.conf"
[DEBUG netavark::network::validation] "Validating network namespace..."
[DEBUG netavark::commands::setup] "Setting up..."
[INFO netavark::firewall] Using iptables firewall driver
[DEBUG netavark::network::bridge] Setup network a
[DEBUG netavark::network::bridge] Container interface name: eth0 with IP addresses [10.89.0.2/24]
[DEBUG netavark::network::bridge] Bridge name: podman1 with IP addresses [10.89.0.1/24]
[DEBUG netavark::network::core_utils] Setting sysctl value for net.ipv4.ip_forward to 1
[DEBUG netavark::network::core_utils] Setting sysctl value for /proc/sys/net/ipv6/conf/eth0/autoconf to 0
[DEBUG netavark::network::core_utils] Setting sysctl value for /proc/sys/net/ipv4/conf/eth0/arp_notify to 1
[INFO netavark::network::netlink] Adding route (dest: 0.0.0.0/0 ,gw: 10.89.0.1, metric 100)
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-1F40FC92DA241 created on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_ISOLATION_2 created on table filter
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_ISOLATION_3 created on table filter
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_INPUT created on table filter
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_FORWARD created on table filter
[DEBUG netavark::firewall::varktables::helpers] rule -d 10.89.0.0/24 -j ACCEPT created on table nat and chain NETAVARK-1F40FC92DA241
[DEBUG netavark::firewall::varktables::helpers] rule ! -d 224.0.0.0/4 -j MASQUERADE created on table nat and chain NETAVARK-1F40FC92DA241
[DEBUG netavark::firewall::varktables::helpers] rule -s 10.89.0.0/24 -j NETAVARK-1F40FC92DA241 created on table nat and chain POSTROUTING
[DEBUG netavark::firewall::varktables::helpers] rule -p udp -s 10.89.0.0/24 --dport 53 -j ACCEPT created on table filter and chain NETAVARK_INPUT
[DEBUG netavark::firewall::varktables::helpers] rule -m conntrack --ctstate INVALID -j DROP created on table filter and chain NETAVARK_FORWARD
[DEBUG netavark::firewall::varktables::helpers] rule -d 10.89.0.0/24 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT created on table filter and chain NETAVARK_FORWARD
[DEBUG netavark::firewall::varktables::helpers] rule -s 10.89.0.0/24 -j ACCEPT created on table filter and chain NETAVARK_FORWARD
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-SETMARK created on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-MASQ created on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-DNAT created on table nat
[DEBUG netavark::firewall::varktables::helpers] rule -j MARK --set-xmark 0x2000/0x2000 created on table nat and chain NETAVARK-HOSTPORT-SETMARK
[DEBUG netavark::firewall::varktables::helpers] rule -j MASQUERADE -m comment --comment 'netavark portfw masq mark' -m mark --mark 0x2000/0x2000 created on table nat and chain NETAVARK-HOSTPORT-MASQ
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-DNAT -m addrtype --dst-type LOCAL created on table nat and chain PREROUTING
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-DNAT -m addrtype --dst-type LOCAL created on table nat and chain OUTPUT
ERRO[0002] IPAM error: failed to open database /run/containers/storage/networks/ipam.db: open /run/containers/storage/networks/ipam.db: no such file or directory
ERRO[0002] Unmounting partially created network namespace for container b16be2b430aafd4b33d0235a4290a84afc22ac54dea4e8fcf3ca593e64766575: failed to remove ns path: remove /run/user/0/netns/netns-b8d9b493-1423-df26-84d6-9f7d8c601986: device or resource busy
DEBU[0002] Unmounted container "b16be2b430aafd4b33d0235a4290a84afc22ac54dea4e8fcf3ca593e64766575"
DEBU[0002] Network is already cleaned up, skipping...
DEBU[0002] Cleaning up container b16be2b430aafd4b33d0235a4290a84afc22ac54dea4e8fcf3ca593e64766575
DEBU[0002] Network is already cleaned up, skipping...
DEBU[0002] Container b16be2b430aafd4b33d0235a4290a84afc22ac54dea4e8fcf3ca593e64766575 storage is already unmounted, skipping...
DEBU[0002] ExitCode msg: "netavark (exit code 1): io error: failed to create aardvark-dns directory: no such file or directory (os error 2)"
Error: netavark (exit code 1): IO error: failed to create aardvark-dns directory: No such file or directory (os error 2)
DEBU[0002] Shutting down engines |
Using an older version seems to work:
$ podman run -it --rm --privileged quay.io/podman/stable:v4.9.4
...
[root@d80777d4e6fe /]# podman network create a
a
[root@d80777d4e6fe /]# podman run -it --net a alpine hostname
Resolved "alpine" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
Trying to pull docker.io/library/alpine:latest...
Getting image source signatures
Copying blob d25f557d7f31 done |
Copying config 1d34ffeaf1 done |
Writing manifest to image destination
WARN[0002] Path "/run/secrets/etc-pki-entitlement" from "/etc/containers/mounts.conf" doesn't exist, skipping
WARN[0002] Path "/run/secrets/rhsm" from "/etc/containers/mounts.conf" doesn't exist, skipping
d80777d4e6fe |
Can you unset _CONTAINERS_USERNS_CONFIGURED? I am not sure why this is set in our images by default, as this is an internal detail and does not look correct to me at all.
It works:
$ podman run -it --rm --privileged --unsetenv _CONTAINERS_USERNS_CONFIGURED quay.io/podman/stable:v5.0.2
[root@2215684bcb22 /]# env | grep _CON
[root@2215684bcb22 /]# podman network create a
a
[root@2215684bcb22 /]# podman run -it --net a alpine hostname
Resolved "alpine" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
Trying to pull docker.io/library/alpine:latest...
Getting image source signatures
Copying blob d25f557d7f31 done |
Copying config 1d34ffeaf1 done |
Writing manifest to image destination
2215684bcb22 |
Ack, I will create a fix. Looks like we only set it to an empty string and the internal code sets a value, so I should ignore the empty value I guess.
For some unknown reason the podman container image sets the _CONTAINERS_USERNS_CONFIGURED env to an empty value. I don't know what the purpose of this is, but it will trigger the check here, which is wrong when the container is privileged. To fix this, check that the value is set to "done", as the reexec logic sets it. Also make sure the lock dir uses the same condition to stay consistent. Fixes containers/podman#22791 Signed-off-by: Paul Holzinger <pholzing@redhat.com>
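For context, here is a minimal sketch of the idea behind that fix, assuming a hypothetical helper name (the real change lives in the containers codebase; only the detail of comparing against "done" instead of checking mere presence is taken from the commit message above):

```go
package main

import (
	"fmt"
	"os"
)

// usernsConfigured reports whether user-namespace setup already happened.
// Only the literal value "done" (as written by the reexec logic) counts;
// an empty value, like the one exported by the podman container image,
// is treated as "not configured".
func usernsConfigured() bool {
	return os.Getenv("_CONTAINERS_USERNS_CONFIGURED") == "done"
}

func main() {
	fmt.Println("userns already configured:", usernsConfigured())
}
```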
Issue Description
Hi,
I would like to run kind inside a podman container. I am able to reproduce the error by creating and using a network using podman from inside a podman container. Is this expected?
Thanks
Steps to reproduce the issue
podman run -it --rm --privileged quay.io/podman/stable:v5.0.2
podman network create a
podman run -it --net a alpine hostname
Describe the results you received
Error: netavark (exit code 1): IO error: failed to create aardvark-dns directory: No such file or directory (os error 2)
Describe the results you expected
Container is created and run
podman info output