
Set minimum memory limit to 6M, to account for higher startup memory use #41168

Merged 1 commit into moby:master on Jul 9, 2020

Conversation

thaJeztah
Member

For some time, we defined a minimum limit for --memory limits to account for
overhead during startup, and to supply a reasonable functional container.

Changes in the runtime (runc) introduced a higher memory footprint during container
startup, which now leads to obscure error messages that are unfriendly for users:

docker run --rm --memory=4m alpine echo success
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"process_linux.go:415: setting cgroup config for procHooks process caused \\\"failed to write \\\\\\\"4194304\\\\\\\" to \\\\\\\"/sys/fs/cgroup/memory/docker/1254c8d63f85442e599b17dff895f4543c897755ee3bd9b56d5d3d17724b38d7/memory.limit_in_bytes\\\\\\\": write /sys/fs/cgroup/memory/docker/1254c8d63f85442e599b17dff895f4543c897755ee3bd9b56d5d3d17724b38d7/memory.limit_in_bytes: device or resource busy\\\"\"": unknown.
ERRO[0000] error waiting for container: context canceled

Containers that fail to start because of this limit will not be marked as OOMKilled,
which makes it harder for users to find the cause of the failure.

Note that this memory is only required during startup of the container. Once
the container has started, it may not consume this memory, and the limit
could (manually) be lowered; for example, an alpine container running only a
shell can run with 512k of memory:

echo 524288  > /sys/fs/cgroup/memory/docker/acdd326419f0898be63b0463cfc81cd17fb34d2dae6f8aa3768ee6a075ca5c86/memory.limit_in_bytes

However, restarting the container will reset that manual limit to the container's
configuration. While `docker container update` would allow the updated limit to
be persisted, (re)starting the container after updating produces the same error
message again, so we cannot use different limits for `docker run` / `docker create`
and `docker update`.

This patch raises the minimum memory limit to 6M, so that a better error message is
produced if a user tries to create a container with a memory limit that is too low:

docker create --memory=4m alpine echo success
docker: Error response from daemon: Minimum memory limit allowed is 6MB.
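The check itself is a simple validation in the daemon, performed before the container is handed to the runtime. A minimal sketch in Go, assuming a hypothetical `validateMemory` helper and constant name (the real moby source may differ):

```go
package main

import (
	"errors"
	"fmt"
)

// linuxMinMemory is the minimum memory limit enforced after this
// patch: 6 MiB. Name and exact value are illustrative.
const linuxMinMemory = 6 * 1024 * 1024

// validateMemory rejects limits below the minimum up front, so the
// user sees a clear error instead of an obscure cgroup write failure
// from the runtime. A limit of 0 means "unlimited" and is allowed.
func validateMemory(limit int64) error {
	if limit > 0 && limit < linuxMinMemory {
		return errors.New("Minimum memory limit allowed is 6MB")
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(4 * 1024 * 1024)) // 4 MiB: rejected with an error
	fmt.Println(validateMemory(8 * 1024 * 1024)) // 8 MiB: accepted (nil)
}
```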

- Description for the changelog

* Raise minimum memory limit to 6M, to account for higher memory use by runtimes during container startup


@thaJeztah
Member Author

@kolyshkin @AkihiroSuda PTAL

Also wondering if we need a detection in `docker container update`, as apparently setting the limit to a value lower than the container's current usage is rejected by the kernel:

# container uses roughly 460KiB after starting
docker run -dit --name mycontainer --memory=8m alpine
acdd326419f0898be63b0463cfc81cd17fb34d2dae6f8aa3768ee6a075ca5c86

# setting limit to 256KiB is refused by the kernel
echo 262144 > /sys/fs/cgroup/memory/docker/acdd326419f0898be63b0463cfc81cd17fb34d2dae6f8aa3768ee6a075ca5c86/memory.limit_in_bytes
-bash: echo: write error: Device or resource busy

cat /sys/fs/cgroup/memory/docker/acdd326419f0898be63b0463cfc81cd17fb34d2dae6f8aa3768ee6a075ca5c86/memory.limit_in_bytes
524288
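Such a detection could compare the requested limit against the cgroup's current usage before attempting the write. A minimal sketch, with a hypothetical `checkLowerLimit` helper (not moby's actual implementation); in practice the current usage would be read from `memory.usage_in_bytes`:

```go
package main

import "fmt"

// checkLowerLimit models the suggested pre-check for
// `docker container update`: the kernel refuses to shrink
// memory.limit_in_bytes below the cgroup's current usage, so a
// request like that can be rejected up front with a readable error.
func checkLowerLimit(currentUsage, requested int64) error {
	if requested > 0 && requested < currentUsage {
		return fmt.Errorf("requested memory limit (%d bytes) is below current usage (%d bytes); the kernel would reject the update", requested, currentUsage)
	}
	return nil
}

func main() {
	// Mirrors the example above: ~460KiB in use, 256KiB requested.
	fmt.Println(checkLowerLimit(471040, 262144))         // rejected with an error
	fmt.Println(checkLowerLimit(471040, 8*1024*1024))    // raising the limit: accepted (nil)
}
```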

@AkihiroSuda
Member

Changes in the runtime (runc) introduced a higher memory footprint during container startup

Is there any specific commit that introduced the extra memory use, or did it increase naturally due to the growth in LOC?

@thaJeztah
Copy link
Member Author

The original raise in memory consumption was due to opencontainers/runc@0a8e411, which (I think) was later reduced by opencontainers/runc#1984.

Perhaps we need a git-bisect and build runc from various commits to see if there are specific changes that contributed. Also not sure if it's only runc that accounts for this, or also the containerd shims? (perhaps you know?)

Possibly, this constraint could be handled by runc, so that different runtimes
could set a best-matching limit (other runtimes may require less overhead).

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Member

@cpuguy83 cpuguy83 left a comment


LGTM

@cpuguy83 cpuguy83 merged commit dd46bbc into moby:master Jul 9, 2020
@thaJeztah thaJeztah deleted the raise_minimum_memory_limit branch July 9, 2020 19:18
@thaJeztah thaJeztah added this to the 20.03.0 milestone Jul 9, 2020
jnc74743 added a commit to jnc74743/ral-htcondor-tools that referenced this pull request Jul 9, 2021
…called "containerd" changing from "docker-containerd"

Bump memory requirement for test container. moby/moby#41168

Merging this pull request closes: low memory on start causing strange failure condition