Consider increasing # of default workers? #126

Closed
withinboredom opened this issue Dec 18, 2022 · 5 comments · Fixed by #166
Comments

@withinboredom
Collaborator

Currently, there is one worker spawned per core (if I'm understanding correctly).

This severely limits concurrent connections, since each worker appears to handle only one request at a time.

Changing this to 2x the number of cores dramatically improves the TechEmpower benchmarks:

from:

        {
          "latencyAvg": "421.56ms", 
          "latencyMax": "3.65s", 
          "latencyStdev": "766.41ms", 
          "totalRequests": 74185, 
          "startTime": 1671386842, 
          "endTime": 1671386858
        }

to:

        {
          "latencyAvg": "100.05ms", 
          "latencyMax": "1.01s", 
          "latencyStdev": "137.47ms", 
          "totalRequests": 112578, 
          "startTime": 1671387704, 
          "endTime": 1671387719
        }

Anything beyond 2x has diminishing returns in my testing. FWIW, 2x on my hardware is 32 workers.
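
To make the capacity argument concrete, here is a minimal Go sketch (illustrative only, not FrankenPHP's actual scheduler): with a fixed pool in which each worker handles one blocking request at a time, in-flight requests are capped at the worker count, so raising the default from runtime.NumCPU() to 2 * runtime.NumCPU() roughly doubles the usable concurrency for I/O-bound requests.

    // Illustrative only: a fixed pool of N workers, each consuming one
    // request at a time, caps in-flight requests at N. This is not
    // FrankenPHP's actual code.
    package main

    import (
        "fmt"
        "runtime"
        "sync"
        "time"
    )

    func main() {
        requests := make(chan int, 64)
        var wg sync.WaitGroup

        // Current behaviour (as I understand it): one worker per core.
        // Proposed default: twice the core count, e.g. 32 on 16 cores.
        numWorkers := 2 * runtime.NumCPU()

        for w := 0; w < numWorkers; w++ {
            wg.Add(1)
            go func(id int) {
                defer wg.Done()
                for req := range requests {
                    // One request per worker at a time, so no more than
                    // numWorkers requests are ever in flight.
                    time.Sleep(10 * time.Millisecond) // stand-in for PHP work
                    fmt.Printf("worker %d finished request %d\n", id, req)
                }
            }(w)
        }

        for i := 0; i < 100; i++ {
            requests <- i
        }
        close(requests)
        wg.Wait()
    }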

@withinboredom
Collaborator Author

Somehow I tab+enter'd that. Didn't mean to submit yet.

What do you think about:

  • Allow specifying 2x, 2.5x, etc. in the worker config to apply a multiplier to the worker count (see the sketch after this list).
  • Make the default 2x the number of cores.
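
A hypothetical sketch of how that syntax could be resolved (resolveWorkerCount is made up for illustration and is not FrankenPHP's actual config parsing): "2x" / "2.5x" multiply the core count, a bare integer is an absolute value, and an empty setting falls back to 2x the cores.

    // Hypothetical helper, not FrankenPHP's real parser.
    package main

    import (
        "fmt"
        "runtime"
        "strconv"
        "strings"
    )

    // resolveWorkerCount turns "2x", "2.5x", "8" or "" into a worker count.
    func resolveWorkerCount(setting string) (int, error) {
        cores := runtime.NumCPU()
        setting = strings.TrimSpace(setting)

        switch {
        case setting == "":
            return 2 * cores, nil // proposed default: 2x the number of cores
        case strings.HasSuffix(setting, "x"):
            factor, err := strconv.ParseFloat(strings.TrimSuffix(setting, "x"), 64)
            if err != nil {
                return 0, fmt.Errorf("invalid worker multiplier %q: %w", setting, err)
            }
            return int(factor * float64(cores)), nil
        default:
            n, err := strconv.Atoi(setting)
            if err != nil {
                return 0, fmt.Errorf("invalid worker count %q: %w", setting, err)
            }
            return n, nil
        }
    }

    func main() {
        for _, s := range []string{"", "2x", "2.5x", "8"} {
            n, err := resolveWorkerCount(s)
            fmt.Println(s, "->", n, err)
        }
    }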

@dunglas
Owner

dunglas commented Dec 19, 2022

We should also update the documentation to recommend triggering PHP's garbage collector after every request in worker mode. This should improve performance.

@joanhey

joanhey commented Dec 22, 2022

Please add the patch to the benchmark.
Then you'll know the next step.

@joanhey

joanhey commented Dec 22, 2022

We can talk whenever you want!
But more workers will be the worst option for a coroutine-based platform.
The best thing is for you to try it, and then we can talk.

@withinboredom
Collaborator Author

withinboredom commented Dec 22, 2022

The default number of workers is highly specific to the physical architecture it runs on, but there's no actual scheduling of each worker on a different core. The default is wholly arbitrary, just something that kinda makes sense, and right now it appears to be too low.

I have no desire to update the benchmarks, and that isn't all I tested with to arrive at 2x cores. I did share the fact that it improves the benchmark... because I'm trying to improve performance... and that would improve the benchmarks.

The nice thing about these benchmarks is that they're easily reproducible and thus useful.

PS: I have a big day tomorrow and if it works out, I'll be around throughout the holidays to dig into some issues here. I've been seriously itching to write some code but life has been crazy these last few weeks.
