use as many goroutines for ForgetInode op #79
base: master
Conversation
hi @stapelberg, do you have some advice?
Have you done benchmarks for this? Can you share the results?
I’m asking because spawning goroutines is actually not necessarily beneficial. In fact, when I did my last measurements, it was more performant to never spawn a goroutine.
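In case it helps frame the question, here is a rough sketch (not from this PR) of the kind of micro-benchmark that could compare running the handler inline against spawning one goroutine per op. doForget is a hypothetical stand-in for the real ForgetInode handler.

```go
// forget_bench_test.go — illustrative only; doForget is a placeholder.
package forgetbench

import (
	"sync"
	"testing"
)

// doForget stands in for whatever work the real ForgetInode handler does.
func doForget() {}

// BenchmarkInline runs the handler on the calling goroutine.
func BenchmarkInline(b *testing.B) {
	for i := 0; i < b.N; i++ {
		doForget()
	}
}

// BenchmarkGoroutinePerOp spawns one goroutine per operation.
func BenchmarkGoroutinePerOp(b *testing.B) {
	var wg sync.WaitGroup
	for i := 0; i < b.N; i++ {
		wg.Add(1)
		go func() {
			doForget()
			wg.Done()
		}()
	}
	wg.Wait()
}
```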
fuseutil/file_system.go (Outdated)
@@ -21,6 +21,7 @@ import (

 	"github.com/jacobsa/fuse"
 	"github.com/jacobsa/fuse/fuseops"
+	"github.com/panjf2000/ants/v2"
Not keen on adding a new dependency just for this change. If we decide to go forward with this change, we should implement it without extra dependencies. Check out sourcegraph/gophercon-2018-liveblog#35 for inspiration
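For reference, a dependency-free bounded pool can be sketched with a buffered channel used as a semaphore, roughly in the spirit of the fan-out pattern that liveblog post describes. This is only an illustration; forgetPool, maxForgetWorkers, and Run are hypothetical names, not code from this repo.

```go
// Illustrative bounded worker pool with no external dependencies.
package forgetpool

import "sync"

const maxForgetWorkers = 100 // hypothetical limit

type forgetPool struct {
	sem chan struct{} // buffered channel used as a counting semaphore
	wg  sync.WaitGroup
}

func newForgetPool(limit int) *forgetPool {
	return &forgetPool{sem: make(chan struct{}, limit)}
}

// Run blocks until a slot is free, then runs fn in its own goroutine.
func (p *forgetPool) Run(fn func()) {
	p.sem <- struct{}{} // acquire a slot
	p.wg.Add(1)
	go func() {
		defer func() {
			<-p.sem // release the slot
			p.wg.Done()
		}()
		fn()
	}()
}

// Wait blocks until all submitted work has finished.
func (p *forgetPool) Wait() { p.wg.Wait() }
```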
fuseutil/file_system.go (Outdated)
@@ -93,6 +94,12 @@ type fileSystemServer struct {
 	opsInFlight sync.WaitGroup
 }

+type opCtx struct {
+	c   *fuse.Connection
+	ctx context.Context
Embedding context.Context in structs is an anti-pattern: https://groups.google.com/forum/#!topic/golang-nuts/xRbzq8yzKWI
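For illustration, a minimal sketch of the idiomatic alternative: pass the context as an explicit parameter instead of carrying it in a struct field like the opCtx shape above. handleForget and its arguments are hypothetical stand-ins, not the server's real API.

```go
package main

import (
	"context"
	"fmt"
)

// Anti-pattern: storing the context in a struct field.
type opCtx struct {
	ctx context.Context
}

// Preferred: accept the context as the first parameter of each call.
func handleForget(ctx context.Context, inodeID uint64) {
	select {
	case <-ctx.Done():
		fmt.Println("cancelled:", ctx.Err())
	default:
		fmt.Println("forgetting inode", inodeID)
	}
}

func main() {
	handleForget(context.Background(), 42)
}
```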
I don't think 100000 is the right number. Either we allocate all of them at startup, which wastes ~200MB in most cases, or we don't allocate them at startup, in which case you can OOM during a storm of ForgetInode. Using a very small pool may be OK; note that the implementation may have to lock anyway, so having that many forget goroutines running isn't useful.
hi @kahing, the 100000 is just the pool capacity; goroutines are not allocated until they are used, and if a goroutine stays idle for too long it will be recycled!
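If I understand the ants semantics correctly, the sketch below illustrates that behavior: the capacity is only a ceiling, workers are created lazily on Submit, and idle workers are recycled after the expiry duration. The capacity and expiry values here are illustrative, not the PR's exact settings.

```go
package main

import (
	"fmt"
	"time"

	"github.com/panjf2000/ants/v2"
)

func main() {
	pool, err := ants.NewPool(
		100000,                                  // capacity: a ceiling, not a pre-allocation
		ants.WithExpiryDuration(10*time.Second), // recycle workers idle this long
	)
	if err != nil {
		panic(err)
	}
	defer pool.Release()

	// Workers are only spawned as tasks are submitted.
	for i := 0; i < 4; i++ {
		i := i
		_ = pool.Submit(func() {
			fmt.Println("forget task", i)
		})
	}

	time.Sleep(time.Second)
	fmt.Println("running workers:", pool.Running())
}
```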
force-pushed from 7dd5564 to 8fc09b8
You can see from the following results (4 tasks, 1000 files) that "File removal" performance went up from 22 to 5175!
After the change:
In jacobsa#30, ForgetInode was changed to be handled inline in ServeOps. That may solve the memory OOM, but rm performance becomes very slow and it also hangs other ops, so I think limiting the max number of ForgetInode goroutines avoids OOM without hurting performance.
force-pushed from 8fc09b8 to 4b07f0d
Can you share the results of that same benchmark, but with a goroutine pool of only 2 goroutines please?
2 goroutines
100 goroutines
What we're seeing here is that the kernel will parallelize dispatch of operations that have no logical dependencies on each other. The test measures throughput of removals, while @stapelberg's optimization was aimed at latency. A better solution may be to keep a goroutine pool for all operations instead of spinning them up as requests come in, and then spin up fresh goroutines only under high load, when all pool goroutines are busy.
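As a rough illustration of that idea (my own sketch, not code proposed in this PR): keep a fixed set of resident workers for all ops and only spawn an extra goroutine when every resident worker is busy.

```go
package oppool

import "sync"

type op func()

type hybridPool struct {
	work chan op // unbuffered: a send succeeds only if a worker is idle
	wg   sync.WaitGroup
}

// newHybridPool starts a fixed number of resident workers.
func newHybridPool(residentWorkers int) *hybridPool {
	p := &hybridPool{work: make(chan op)}
	for i := 0; i < residentWorkers; i++ {
		go func() {
			for fn := range p.work {
				fn()
				p.wg.Done()
			}
		}()
	}
	return p
}

// Dispatch hands the op to an idle resident worker if one is available;
// otherwise it spawns a fresh goroutine so the op is not queued behind
// slow operations.
func (p *hybridPool) Dispatch(fn op) {
	p.wg.Add(1)
	select {
	case p.work <- fn:
	default:
		go func() {
			fn()
			p.wg.Done()
		}()
	}
}

// Wait blocks until all dispatched ops have completed.
func (p *hybridPool) Wait() { p.wg.Wait() }
```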
In #30, ForgetInode was changed to be handled inline in ServeOps. That may solve the memory OOM, but rm performance becomes very slow and it also hangs other ops, so I think adding a goroutine pool to limit the max number of ForgetInode goroutines avoids OOM without affecting performance.