fix(cf): CF forkjoin threading improvements #5071
Conversation
…-threads

Conflicts:
	clouddriver-core/src/main/groovy/com/netflix/spinnaker/clouddriver/jobs/local/ForceDestroyWatchdog.java
LGTM -- One note: you will need to create a PR in Halyard and update the credentials validator once this PR is merged and versioned. Anytime we touch the credentials in CD, Halyard will need the accompanying change.
@fieldju this is a candidate to be backported to version 1.23, since this is considered a performance regression for the cloudfoundry provider in that release. Instead of assuming the regression is caused by the introduction of the kork-credentials change and reverting that change, this PR fixes the root cause of the issue by defining a singleton ForkJoinPool for all cloudfoundry accounts. The implication of backporting this PR is that the new cloudfoundry max-parallelism setting will also be introduced in that release. There is no easy way to create test cases for performance regressions like this, but I did a manual test with the same configuration reported in the bug and I only see the expected number of threads, controlled by the new setting.
@Mergifyio backport release-1.23.x |
* feat(kubernetes): Send SIGKILL to kubectl
* fix(cf): CF forkjoin threading improvements (cherry picked from commit 1f7d7b7)
The CF provider creates a `ForkJoinPool` for each account defined, with a default max parallelism of 16. With a large number of accounts the thread count grows significantly: for example, 50 accounts means 50 `Applications` FJPs + 50 `Routes` FJPs = 100 pools x 16 = 1,600 threads. This causes a lot of unnecessary overhead, because the only threads actually in use at any given moment are the ones serving the caching agents that are currently running, so most of these threads sit idle most of the time. A rough sketch of the problem is shown below.
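A minimal sketch of the pool-per-account pattern described above; class and field names are illustrative, not the actual clouddriver code:

```java
import java.util.concurrent.ForkJoinPool;

// Illustrative only: each account holds its own pools, so thread capacity
// scales with the number of accounts rather than with actual workload.
class CloudFoundryAccountCaching {
  private final ForkJoinPool applicationsPool = new ForkJoinPool(16); // Applications FJP
  private final ForkJoinPool routesPool = new ForkJoinPool(16);       // Routes FJP
}

// 50 accounts -> 50 accounts * 2 pools * 16 max parallelism
//             = up to 1,600 threads, even though only the caching agents
//               currently running do any work at a given moment.
```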
This change moves the `ForkJoinPool` to be a singleton bean at the provider level, so there is only one pool shared by all accounts, with a configurable max parallelism. This way the full capacity of the pool can be used regardless of which agents are running at any given moment.
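A hedged sketch of what a provider-level singleton pool could look like; the configuration class name and the property key are assumed placeholders, not the actual Spinnaker code or setting:

```java
import java.util.concurrent.ForkJoinPool;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Hypothetical sketch: one ForkJoinPool bean for the whole CF provider,
// sized by a configurable max-parallelism property (key below is assumed).
@Configuration
class CloudFoundryProviderThreadingConfig {

  @Bean
  ForkJoinPool cloudFoundryForkJoinPool(
      @Value("${cloudfoundry.api-request-parallelism:16}") int maxParallelism) {
    // Every account's caching agents submit work to this shared pool, so the
    // total thread count is bounded by maxParallelism regardless of how many
    // accounts are configured.
    return new ForkJoinPool(maxParallelism);
  }
}
```

With a single bean, whichever agents happen to be running can use the pool's full capacity, instead of idle per-account pools holding threads that are never exercised.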