Specify required RAM per make job in opam file #4339
Comments
It seems possible to add that to the sandbox scripts using …
My opinion is that this is the responsibility of the package build rules (which contain …).
This is quite system-specific, and it also wouldn't control the job count upfront but would lead to aborted builds in cases where resources are exceeded.
This would lead to a lot of duplication of likely complicated code. Also, I don't see how this could work together with opam's package parallelism. Of course, it would be possible to implement a resource allocation server outside of opam, which is then used by the make invocations of those build tasks that are more resource hungry.
Indeed, this would be ideal. As far as I can tell from my recent experiments on VMs with various memory sizes, at least for some build tasks memory is much more important than core count, but this might be more of a problem for coqc than for ocamlc.
This seems like more of a job for a jobserver protocol. The conventional 'bin packing' thing to do here is to assign a weight to a package which is the number of "slots" that it takes up. This way, the slots can be biased towards whatever the scarcest resource is -- you could use them in OCaml packages to represent CPUs, or in Coq packages to represent memory availability. We won't be tracking precise memory sandboxing constraints within opam packages, I think. Too difficult to measure. But tracking relative weights is very maintainable.
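A minimal OCaml sketch of that slot idea (not opam's actual scheduler; the pkg type, the weight field, and the slot counts are illustrative assumptions): each package declares how many slots of a fixed pool its build occupies, and the scheduler only starts builds while enough slots are free, so memory-hungry Coq packages can simply be given a larger weight.

```ocaml
(* weight = number of scheduler "slots" this package's build occupies *)
type pkg = { name : string; weight : int }

(* Greedily pick the next builds to start so that the total weight of
   running builds never exceeds the total number of slots. *)
let schedule ~total_slots ~running_weight ready =
  let rec pick free acc = function
    | [] -> List.rev acc
    | p :: rest when p.weight <= free -> pick (free - p.weight) (p :: acc) rest
    | _ :: rest -> pick free acc rest
  in
  pick (total_slots - running_weight) [] ready

(* e.g. with 8 slots and 3 already in use:
   schedule ~total_slots:8 ~running_weight:3
     [ { name = "coq-mathcomp"; weight = 4 }; { name = "dune"; weight = 1 } ]
   starts both packages (4 + 1 <= 5 free slots). *)
```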
I don't think this is a good idea - one should have separate values for CPU and memory, or simply assume that each make thread will take one CPU and only specify memory if it exceeds that of a typical OCaml build job. The reason is that this does not only depend on the package but also on the core count / RAM ratio of the machine at hand, which can vary widely these days - I would say anything between 128 MB/core and 32 GB/core is not rare. Whether cores or memory are the limiting factor depends on this ratio.
Actually it is quite easy, at least on Linux and macOS: /usr/bin/time -v (on Linux) and /usr/bin/time -l (on macOS) do the job - as simple as using time to measure just the time. Please note that plain time is usually a shell builtin and won't have the -v or -l options.
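For illustration, a small OCaml sketch (assuming Linux with GNU time at /usr/bin/time, which prints "Maximum resident set size (kbytes): N" under -v; peak_rss_kb is a hypothetical helper, not part of opam, and needs the unix library) that runs a build command under /usr/bin/time -v and extracts the peak resident set size from its report:

```ocaml
(* Run [cmd] under /usr/bin/time -v and return its peak RSS in kB, if found. *)
let peak_rss_kb cmd =
  (* GNU time writes its report to stderr, so merge it into stdout. *)
  let ic = Unix.open_process_in ("/usr/bin/time -v " ^ cmd ^ " 2>&1") in
  let prefix = "Maximum resident set size (kbytes):" in
  let rss = ref None in
  (try
     while true do
       let line = String.trim (input_line ic) in
       if String.length line >= String.length prefix
          && String.sub line 0 (String.length prefix) = prefix
       then
         rss :=
           int_of_string_opt
             (String.trim
                (String.sub line (String.length prefix)
                   (String.length line - String.length prefix)))
     done
   with End_of_file -> ());
  ignore (Unix.close_process_in ic);
  !rss

(* Example: peak_rss_kb "make -j4" *)
```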
I would think it is the other way around. A relative measure requires measuring the resources of two build jobs in order to give something reasonable. And there will always be arguments about what the measure is.
The proof assistant Coq also uses opam for package management. The memory consumption of the Coq compiler per make job can be several gigabytes. For Coq packages which fail to build with a large core count but low memory, even when built as a single opam package, it would help to be able to specify an amount of memory per make job, so that opam can reduce the make job count to something more suitable. These days it is not that rare to find machines with 16 HW threads, so that opam jobs=15, but only 4 GB of RAM. If the opam package e.g. specifies 2 GB/job, opam should restrict jobs to 2 on a 4 GB machine and 4 on an 8 GB machine (see the sketch further below). I am not sure if this should relate to free memory or total memory - both have their advantages and disadvantages. I would rather relate it to total RAM, since free memory can fluctuate and this might lead to strange effects. Maybe opam could give a warning to the user (and wait) if jobs would be limited by memory and the amount of free memory is below 75% of total memory.
The idea is to set the ram/job variable only in packages where a single opam package build fails, e.g. in tests with 4 GB / 15 jobs, or where users report issues.
Btw.: during a parallel build of several Coq packages with opam I have seen memory usage peaks (just of all coqc processes) of up to 13.5 GB. With 16 GB this is at the edge (it swaps a bit, but not that much), with 32 GB it works fine, and with 8 GB it fails to build. I already choose between sequential and parallel builds of opam packages depending on memory size in my build scripts.
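A minimal OCaml sketch of the computation proposed above (the helper jobs_for and the per-job RAM parameter are hypothetical; opam has no such field today): the job count is the available core count, capped by how many jobs of the declared size fit into total RAM.

```ocaml
(* Cap the parallel job count by memory: cores, or however many jobs of
   [ram_per_job_gb] fit into [total_ram_gb], whichever is smaller. *)
let jobs_for ~cores ~total_ram_gb ~ram_per_job_gb =
  let by_memory = max 1 (int_of_float (total_ram_gb /. ram_per_job_gb)) in
  min cores by_memory

(* With 2 GB/job this reproduces the numbers above: 2 jobs on a 4 GB
   machine and 4 jobs on an 8 GB machine, even with 15 cores available. *)
let () =
  assert (jobs_for ~cores:15 ~total_ram_gb:4. ~ram_per_job_gb:2. = 2);
  assert (jobs_for ~cores:15 ~total_ram_gb:8. ~ram_per_job_gb:2. = 4)
```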
Related issue: #4291