
[question] Launch docker container with hard memory limit higher than specified in task resources #2093

Closed
drscre opened this issue Dec 13, 2016 · 16 comments · Fixed by #8087


@drscre

drscre commented Dec 13, 2016

We are using Nomad to deploy microservices wrapped in Docker containers.

But the memory consumption of our microservices is non-uniform.
For example, a microservice might consume, say, 50 MB on average, but have a heavy, rarely called endpoint that consumes, say, 100 MB.

We have to specify the memory limit based on the worst case. Most of the time the memory is not fully utilized, and we end up paying for more AWS instances than we actually need.

Is there any way to tell Nomad "when planning the microservice allocation assume 50 MB of usage, but actually launch the container with a 100 MB limit / without a memory limit"?

@jippi
Contributor

jippi commented Dec 13, 2016

Seems like a duplicate of #2082 :)

@drscre
Author

drscre commented Dec 13, 2016

@jippi If I understand correctly, #2082 is about not using resource limits for tasks at all, and therefore it is not clear how to allocate them.

But I suggest that the memory limit used for allocation planning and the actual hard memory limit on the Docker container should not necessarily be the same value.
The hard limit could be higher to allow for occasional spikes in memory usage.

Plan for average consumption, launch with the worst-case limit.

@jippi
Contributor

jippi commented Dec 13, 2016

I think the underlying conclusion is the same: if you don't bin-pack for the worst case, things can go south if all containers suddenly decide to max out at the same time. Then you either OOM entirely, or swap so badly you might as well be offline :)

@drscre
Author

drscre commented Dec 13, 2016

Yeah, they can go south. But it's always a compromise.

Our application is not a bank. We can afford that low-probability risk in exchange for better utilization of our resources and paying less for them.

Why be so restrictive?

@jippi
Contributor

jippi commented Dec 13, 2016

I'm not core or HashiCorp-employed, so I can't speak on their behalf - but personally I would prefer Nomad not to let me step on my own toes at 3am because of something bad I did in a job spec 3 months ago - or something one of my colleagues / a rogue developer decided to do :)

@drscre drscre changed the title [question] Launch docker container without/with soft/with different memory limit [question] Launch docker container with hard memory limit higher than specified in task resources Dec 13, 2016
@dadgar
Contributor

dadgar commented Dec 13, 2016

Hey @drscre,

Nomad needs to know the peak usage so it can properly bin-pack. In the future Nomad will support over-subscription, so that even though it has reserved that space on the node for your services, it can re-use it for lower quality-of-service jobs such as low-priority batch jobs. For now, Nomad does not have this.

Thanks,
Alex

@dadgar dadgar closed this as completed Dec 13, 2016
@drscre
Author

drscre commented Dec 16, 2016

For those who don't mind building Nomad from source, there is a trivial patch for Nomad 0.5.1.

It adds a "memory_mb" docker driver option which, if set to a non-zero value, overrides the memory limit specified in the task resources.

https://gist.github.com/drscre/4b40668bb96081763f079085617e6056
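
For illustration, usage of the patched option would look roughly like this (a sketch only - the field name is taken from the description above, the image and the sizes are placeholders, and the exact syntax may differ from the gist):

```hcl
task "svc" {
  driver = "docker"

  config {
    image = "example/microservice:latest"

    # Patched (non-upstream) option: a non-zero value overrides the hard
    # memory limit passed to Docker, independent of resources.memory.
    memory_mb = 100
  }

  resources {
    # Still the value the scheduler uses for bin packing.
    memory = 50
  }
}
```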

@mehertz

mehertz commented May 31, 2018

This is a complete killer for us. We have ~7 containers that we are trying to set up as separate tasks inside a group. Unfortunately, these containers are peaky in terms of memory usage. This means that we either:

  1. Grant these containers enough memory that Nomad refuses to schedule them on the very box on which they are currently running, or

  2. Reduce the memory resources such that all the containers get OOM-killed after a while.

I'd like to emphasize that we are currently running these exact containers outside of Nomad without issue.

As far as I'm concerned, the resources denoted in the resources stanza should be used for bin packing, not as hard limits by the underlying driver. Apart from anything else, it's inconsistent, as I don't think the raw fork/exec driver imposes hard resource limits the way the Docker driver does.

@slobo
Contributor

slobo commented Jun 5, 2018

CPU resource limits are soft and can be exceeded - the process gets throttled appropriately if there is too much contention. Memory could be handled similarly, no? I.e. one limit for bin packing and another hard limit to protect from memory leaks, etc.

Docker supports both --memory (hard) and --memory-reservation (soft), so why not let us specify both in the job spec?

@CumpsD

CumpsD commented Oct 10, 2018

Looking forward to this as well, I'd like to decide when to shoot myself in the foot :)

@yishan-lin
Contributor

Coming soon in a 0.11.X release - stay tuned.

@yishan-lin yishan-lin reopened this Apr 13, 2020
@vrenjith
Contributor

Any update, friends? @dadgar ?

shoenig added a commit that referenced this issue Jun 1, 2020
Fixes #2093

Enable configuring `memory_hard_limit` in the docker config stanza for tasks.
If set, this field will be passed to the container runtime as `--memory`, and
the `memory` configuration from the task resource configuration will be passed
as `--memory-reservation`, creating hard and soft memory limits for tasks using
the docker task driver.
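
For reference, a job spec using the new option would look roughly like this (a sketch based on the commit description above; the image name and the sizes are placeholders):

```hcl
task "svc" {
  driver = "docker"

  config {
    image = "example/microservice:latest"

    # Hard cap in MB, passed to Docker as --memory; usage above this gets the
    # container OOM-killed.
    memory_hard_limit = 100
  }

  resources {
    # Soft limit in MB: used for bin packing and, when memory_hard_limit is
    # set, passed to Docker as --memory-reservation.
    memory = 50
  }
}
```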
@shoenig shoenig added this to the 0.11.3 milestone Jun 1, 2020
@winstonhenke

winstonhenke commented Jul 23, 2020

This solution works for Linux containers, but Windows does not support MemoryReservation, so the memory_hard_limit option doesn't work (I don't think it matters, but I'm running them with Hyper-V isolation).

From what I can tell, when using Windows containers the resources.memory config becomes both the minimum and the maximum memory requirement:

  • It's the minimum required before Nomad will start the task.
  • And it ends up setting HostConfig.Memory for the container, making it the maximum as well.

I've tried passing in the --memory= arg as well, but this seems to have no effect on the container's memory limits; it is still capped by the value specified by resources.memory.

  • I can see this by looking at HostConfig.Memory in a docker inspect.

It would be nice if these two things were separated. Docker does not prevent me from starting 6 containers with --memory=6g on a host that only has 16 GB of RAM. My use case is that I am using Nomad to orchestrate containerized build agents, and these build agents sit idle 90% of the time using very little memory. When they do pick up a job, only one part requires the high memory spike, so I want them to use all the memory they can get their hands on (maybe this will cause issues for me, idk, but I can't even test this atm given how Nomad restricts it).

Edit: I spoke too soon and didn't test this enough before posting. The resources.memory config doesn't seem to require that amount of memory to be free, just installed, which makes this much less of an issue for me.

@shoenig
Member

shoenig commented Jul 23, 2020

@winstonhenke Reading through https://docs.docker.com/config/containers/resource_constraints/ I get the impression these config options are Linux-specific on the Docker side. I think it's because Windows (and other OSs) don't have an equivalent to Linux's support for soft limits as described in https://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt. We could be totally wrong about that - if you do think it's possible, definitely open a fresh ticket pointing us at the right documentation. Thanks!

@winstonhenke

@shoenig I updated my comment; I think I spoke too soon and was misunderstanding the resources.memory config. It checks the amount of installed memory, not the amount of available memory as I originally thought. Thanks for the fast response.

@github-actions

github-actions bot commented Nov 4, 2022

I'm going to lock this issue because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Nov 4, 2022