Memoise & cache the result of renderers, so we don't recalculate views multiple times. #851
Conversation
Would passing around a pointer to the report work, instead of the random ID?
Overall, a lot better (cleaner and less complicated) than I was expecting. I think this could work. Would probably be good to add a way to purge the cache between benchmark runs.
Roger. Could do report pointers; I wasn't keen on the id stuff myself.
A couple potential suggestions, but with a
(force-pushed from fc56c16 to b1a3a15)
After adding the PurgeCache method between benchmark runs, it becomes obvious this isn't much of a performance increase (613703 -> 582766 ns/op, so within the margin). On inspection, this is because there is actually very little work done twice by the host renderer, or any renderer. However, this does make a huge difference to the calls to /api/topology (from 7197010 -> 1447692 ns/op, or ~5x faster). As we hit this every second, this results in a small reduction in CPU usage for the scope container on my VM running the example app. Still too high.
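For illustration, the benchmark shape being discussed looks roughly like this. It reuses the `Memoise`/`PurgeCache` sketch attached to the PR description below; `hostRenderer` and the fixture report are made-up names, not the real test code:

```go
// Illustrative benchmark only: hostRenderer and the fixture report are stand-ins,
// and Memoise/PurgeCache refer to the sketch attached to the PR description.
package render

import "testing"

type hostRenderer struct{}

func (hostRenderer) Render(rpt Report) RenderableNodes {
	// Stand-in for the expensive host-rendering work.
	return RenderableNodes{"host": rpt.ID}
}

func BenchmarkHostRenderer(b *testing.B) {
	rpt := Report{ID: "fixture-report"}
	renderer := Memoise(&hostRenderer{})

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		PurgeCache() // start each iteration with a cold cache, so we measure real renders
		_ = renderer.Render(rpt)
	}
}
```

Without the purge, iterations after the first would only measure cache lookups, which is why the original numbers looked better than they were.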
(force-pushed from 470860e to 0e3e3b7)
I disagree. 1-8 ms to generate all of the reports seems entirely reasonable. If you're saying that the CPU usage is too high, then I agree, but that is probably a different problem/solution.
Actually, this should mean that we only have to run each renderer each time a probe report comes in, so we're at most running them 8ms / 15s, or ~0.05% of the time. So this should have a pretty strong impact on CPU usage, in spite of not showing it on the benchmarks.
Yes, that's what I meant. I see the app using mostly 3% CPU, occasionally spiking to 20%.
This won't happen, as every HTTP request creates a new report (with a new ID), so it won't hit the cache - i.e. the cache is only used for things rendered multiple times within the same request. We could improve the collector such that the same report is used across requests?
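A rough sketch of what that collector change might look like - `Collector`, `Report` and `Merge` here are simplified stand-ins, not scope's actual collector:

```go
// Sketch: cache the merged report in the collector and only re-merge when a
// new probe report has arrived, so every HTTP request in between sees the
// same report (and the same report ID).
package app

import "sync"

type Report struct {
	ID string
}

// Merge is a placeholder for scope's real report merge.
func Merge(reports []Report) Report {
	merged := Report{}
	for _, r := range reports {
		merged.ID += r.ID
	}
	return merged
}

type Collector struct {
	mtx     sync.Mutex
	reports []Report
	dirty   bool   // set when a new probe report arrives
	merged  Report // cached result of the last merge
}

// Add records a new probe report and marks the cached merge as stale.
func (c *Collector) Add(r Report) {
	c.mtx.Lock()
	defer c.mtx.Unlock()
	c.reports = append(c.reports, r)
	c.dirty = true
}

// Report returns the merged report, reusing the previous merge (and its ID)
// until a new probe report invalidates it.
func (c *Collector) Report() Report {
	c.mtx.Lock()
	defer c.mtx.Unlock()
	if c.dirty {
		c.merged = Merge(c.reports)
		c.dirty = false
	}
	return c.merged
}
```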
👍
Instead of caching the result (as the invalidation logic is complicated w.r.t. timing entries out whenever the oldest report is older than 15s), I made the ID of the generated report a hash of the IDs of the input reports, which should have the same effect. CPU usage for the scope-app, using the example app, was normally 0%, then went from 3% to 12%. So I don't think it had the effect I wanted, but at this level it's hard to measure (we need a better way of measuring CPU usage - maybe from a 30s profile?). I'm going to try caching the report now, as that will save some merge cost, but I'm not convinced it's worth it - once you have more than 3 probes, you'll have to generate a new report each time.
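Something along these lines is what "a hash of the IDs of the input reports" means in practice (illustrative only, not the actual scope code):

```go
// Derive the merged report's ID from the IDs of the input reports, so merging
// the same set of reports always yields the same ID and the renderer cache
// still hits.
package app

import (
	"crypto/sha256"
	"encoding/hex"
	"sort"
)

func hashIDs(ids []string) string {
	// Sort first so the result doesn't depend on the order reports arrived in.
	sorted := append([]string(nil), ids...)
	sort.Strings(sorted)

	h := sha256.New()
	for _, id := range sorted {
		h.Write([]byte(id))
	}
	return hex.EncodeToString(h.Sum(nil))
}
```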
Caching them also didn't make a huge difference, but the app uses 0% CPU for a bunch of time, and occasionally spikes to ~10%. I think this is fine for the app now.
Why not
Memoise & cache the result of renderers, so we don't recalculate views multiple times.
Fixes #854
The idea is that to render hosts, you have to render processes, containers, pods, etc. All of these in turn render processes. As the rendering pipeline is deterministic, i.e. rendering the same report will give you the same result, we can cache intermediate steps. I introduce a random report ID and use reflection to get the address of the renderer, and use these two as the key for the cache.
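A minimal sketch of the approach, for illustration only - `Renderer`, `Report` and `RenderableNodes` are simplified stand-ins for the real scope types, and the real implementation may differ:

```go
// Memoisation sketch: cache renderer output keyed on (report ID, renderer address).
package render

import (
	"fmt"
	"reflect"
	"sync"
)

// Report and RenderableNodes are simplified stand-ins for the real scope types.
type Report struct {
	ID string // random ID assigned when the report is built
}

type RenderableNodes map[string]string

type Renderer interface {
	Render(Report) RenderableNodes
}

var (
	mtx   sync.Mutex
	cache = map[string]RenderableNodes{}
)

// Memoise wraps a renderer so that rendering the same report twice reuses the
// cached result. The wrapped renderer is assumed to be a pointer, so its
// address (obtained via reflection) identifies it in the cache key.
func Memoise(r Renderer) Renderer { return &memoise{r: r} }

type memoise struct{ r Renderer }

func (m *memoise) Render(rpt Report) RenderableNodes {
	key := fmt.Sprintf("%s-%x", rpt.ID, reflect.ValueOf(m.r).Pointer())

	mtx.Lock()
	result, ok := cache[key]
	mtx.Unlock()
	if ok {
		return result
	}

	// Render outside the lock so nested memoised renderers don't deadlock.
	result = m.r.Render(rpt)

	mtx.Lock()
	cache[key] = result
	mtx.Unlock()
	return result
}

// PurgeCache empties the cache, e.g. between benchmark runs.
func PurgeCache() {
	mtx.Lock()
	defer mtx.Unlock()
	cache = map[string]RenderableNodes{}
}
```

Because the pipeline is deterministic, any renderer asked to render the same (report, renderer) pair can safely return the cached view instead of recomputing it.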
Results in a ~750% increase in performance of the host renderer, although this test might not be fair...
Before:
After: