Provide compute resource guidance for OPA deployments #1601
Hey, any update on this? I'm also experiencing high CPU usage and I wonder whether it's normal. Documentation like this would help me understand it better.
@omerlh can you provide some more detail? As the original comment says, it's going to depend on a bunch of variables. There is no single correct answer.
Is OPA capable of leveraging multiple CPUs? Looks like …
@jin09 by default, the OPA server (`opa run -s`) …
@tsandall pardon me for gaps in my understanding. I come from a Python background and have never worked with Go. So when I spin up the server (`opa run -s`) I see only a single process running (`ps -elf | grep opa`), which is why I thought maybe OPA is single-threaded. Do we use some prefork model, or is it something specific to Go? Perhaps you can point me to some resources where I could learn about it and get my concepts cleared.
Thanks
Oh, I see. The wiki article for Go explains the concurrency model a bit (it's based on what's called CSP, communicating sequential processes): https://en.wikipedia.org/wiki/Go_(programming_language)#Concurrency:_goroutines_and_channels

> Do we use some prefork model, or is it something specific to golang?

As far as I know, the Go runtime creates OS threads for your goroutines to run on (up to one per core) and then schedules your goroutines to execute on them. Preemption happens when you block on synchronization primitives like channels (but I think that's an implementation detail). I'm not an expert, so take this with a grain of salt.

> Probably you can redirect me to some resources where I could learn about it and get my concepts cleared.

I'd take a look at the Tour of Go if you want to familiarize yourself with Go, and just search for topics you're interested in.

Happy golang-ing!
Just to chime in on the Python concurrency model vs. Go. Python is limited to a single core by the interpreter and its global lock (the GIL): it doesn't actually run more than one thread in parallel. Execution of threads can be interleaved, but always on the same single CPU. That is why, for a Python application to use more than one CPU, you need to fork into multiple Python processes (or put the Python scripts behind something that does it for you, like gunicorn). Essentially, in Python multi-threaded != multi-core, but multi-process can be multi-core, hence why you are used to seeing more than one process when you check on the server. Go, like C/C++/Java/etc., uses real system threads, which the OS schedules over any of the CPUs on the host. This means that multi-threaded in these languages does imply execution can and will be distributed over multiple CPUs. There is a good post at https://stackoverflow.com/a/4496918/4789546 which gives more details.
There are many ways this question can be answered, so this is not a final answer but a starting point. In the future we can expand this section to include more use-case-specific resource utilization guidance; for the time being this is a good start. Fixes open-policy-agent#1601 Signed-off-by: Torin Sandall <torinsandall@gmail.com>
It's common for people to ask how many CPU and memory resources to give to OPA for specific use cases (e.g., admission control, microservice API authorization, etc.). The answer will depend on what the target latency should be (e.g., 1 ms for microservice API authorization) and what kind of platform OPA is running on. It would be great if we could provide recommendations for typical platforms (e.g., Kubernetes on AWS) for common use cases.
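On Kubernetes, one concrete starting point is to declare explicit resource requests and limits on the OPA container and tune them from observed usage. A hedged sketch follows; the values below are placeholders for illustration, not recommendations from the OPA project, and the image tag is hypothetical:

```yaml
# Illustrative only: sizes must be derived from your own latency and load testing.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: opa
spec:
  replicas: 2
  selector:
    matchLabels:
      app: opa
  template:
    metadata:
      labels:
        app: opa
    spec:
      containers:
        - name: opa
          image: openpolicyagent/opa:latest  # pin a real version in practice
          args: ["run", "--server"]
          resources:
            requests:
              cpu: "500m"      # placeholder baseline
              memory: "256Mi"  # policies and data are held in memory
            limits:
              cpu: "1"
              memory: "512Mi"
```

Because the Go runtime can use all CPUs it is given, raising the CPU limit is one lever for latency under load; memory scales mostly with the size of loaded policies and data.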