
MaxDegreeOfParallelism defaults for full pruning #5662

Merged
MarekM25 merged 11 commits into master on Jun 23, 2023

Conversation

MarekM25
Contributor

@MarekM25 MarekM25 commented May 5, 2023

Changes

We aim to provide default options that meet the needs of most users. 95% of our users want full pruning that ideally doesn't affect block processing. The current default, Pruning.FullPruningMaxDegreeOfParallelism = Environment.ProcessorCount, isn't good, as it will murder block processing and affect people's attestations. This PR isn't meant to be merged as is; it's rather an open discussion about defaults.

We have the following options, all of which seem to be better than the current default:

  1. Pruning.FullPruningMaxDegreeOfParallelism = 2
  2. Pruning.FullPruningMaxDegreeOfParallelism = 3
  3. Pruning.FullPruningMaxDegreeOfParallelism = 0, which under the hood translates to Environment.ProcessorCount / 2
  4. Pruning.FullPruningMaxDegreeOfParallelism = 0, which under the hood translates to Environment.ProcessorCount - 2
  5. Pruning.FullPruningMaxDegreeOfParallelism = 0, which under the hood translates to Environment.ProcessorCount - 1
  6. Anything else? A constant value other than 2 or 3?

Of course, we will add the correct safeguards with Math.Max or Math.Min, depending on the option; for example, for Pruning.FullPruningMaxDegreeOfParallelism = 3 it could be Math.Min(Environment.ProcessorCount, 3).
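To make the options concrete, here is a language-neutral sketch (written in Python, not the actual Nethermind C# implementation) of how a configured value could be resolved, taking option 3 as the assumed "auto" behavior and applying the safeguards described above:

```python
def resolve_parallelism(configured: int, processor_count: int) -> int:
    """Resolve the effective degree of parallelism for full pruning.

    configured == 0 is treated as 'auto' (option 3: half the logical
    processors); an explicit value is clamped to the processor count
    (the Math.Min safeguard), and the result never drops below 1
    (the Math.Max safeguard).
    """
    if configured == 0:
        effective = processor_count // 2          # option 3: ProcessorCount / 2
    else:
        effective = min(configured, processor_count)  # e.g. Math.Min(ProcessorCount, 3)
    return max(1, effective)                      # always at least one thread


# On a 16-logical-processor machine:
print(resolve_parallelism(0, 16))   # auto -> 8
print(resolve_parallelism(3, 16))   # explicit 3 -> 3
print(resolve_parallelism(3, 2))    # clamped to processor count -> 2
print(resolve_parallelism(0, 1))    # floor of 1 on a single-core box -> 1
```

The divisor in the auto branch is the part under discussion; options 4 and 5 would replace `processor_count // 2` with `processor_count - 2` or `processor_count - 1`.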

WDYT @asdacap @LukaszRozmej @kamilchodola @cbermudez97 ?

Types of changes

What types of changes does your code introduce?

  • Bugfix (a non-breaking change that fixes an issue)
  • New feature (a non-breaking change that adds functionality)
  • Breaking change (a change that causes existing functionality not to work as expected)
  • Optimization
  • Refactoring
  • Documentation update
  • Build-related changes
  • Other: discussion

Testing

Requires testing

  • Yes
  • No

If yes, did you write tests?

  • Yes
  • No

Notes on testing

Not now

@kamilchodola
Contributor

Option 3 looks best to me, but remember to take the ceiling of the value. Better slightly more than slightly less.

@MarekM25
Contributor Author

MarekM25 commented May 5, 2023

> Option 3 looks best to me, but remember to take the ceiling of the value. Better slightly more than slightly less.

Yeah, I'm thinking about option 3 or 4. And yes, as I described in the issue, I'm aware of the need for correct safeguards :)

@asdacap
Contributor

asdacap commented May 5, 2023

I don't mind either way. Option 3 is good, but probably not enough. Maybe / 4? Keep in mind that #5588 really throttles it down, by about 2x, and #5645 reduces IOPS even further. Probably just set it to 2 by default and add another option, "FullPruningNoThrottle" or something?

@kamilchodola
Contributor

Just remember one thing: if we change the defaults and someone keeps using the default, they will experience quite an enormous degradation of pruning time as soon as we do. From a UX point of view, dropping some users from the current (maybe) 16 cores down to 3 may be a horrible experience (even if they get a better average block processing time; long term it will be the same outcome, because it will be slightly worse for a longer period of time versus significantly worse for a shorter period).

@@ -37,7 +37,7 @@ public interface IPruningConfig : IConfig
FullPruningTrigger FullPruningTrigger { get; set; }

[ConfigItem(
Description = "'Full' pruning: Defines how many parallel tasks and potentially used threads can be created by full pruning. 0 - number of logical processors, 1 - full pruning will run on single thread. " +
@LukaszRozmej
Member

I like the 50% option.

@benaadams
Member

> I like the 50% option.

50% of real cores rather than HT cores?
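The distinction matters because .NET's Environment.ProcessorCount reports logical processors, so hyper-threaded cores are counted individually. On a typical 2-way SMT machine, "50% of real cores" therefore works out to 25% of the logical count, which lines up with the "25% of cores" commit later in this PR. A hedged sketch of that arithmetic (in Python; the SMT factor of 2 is an assumption, not something the runtime reports):

```python
def half_of_physical(logical_count: int, smt_factor: int = 2) -> int:
    """Approximate '50% of physical cores' from a logical-processor count.

    logical_count is what Environment.ProcessorCount (or os.cpu_count())
    would report; smt_factor = 2 assumes 2 hardware threads per core.
    """
    physical = logical_count // smt_factor   # assumed threads-per-core
    return max(1, physical // 2)             # 50% of physical == 25% of logical


print(half_of_physical(16))  # 16 logical / 2-way SMT -> 8 physical -> 4
print(half_of_physical(8))   # 8 logical -> 4 physical -> 2
print(half_of_physical(2))   # floor of 1 on tiny machines -> 1
```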

@MarekM25 MarekM25 marked this pull request as ready for review June 23, 2023 08:10
@MarekM25 MarekM25 merged commit 3f8de70 into master Jun 23, 2023
@MarekM25 MarekM25 deleted the max_degree_full_pruning branch June 23, 2023 08:29
kamilchodola pushed a commit that referenced this pull request Jun 23, 2023
* other defaults for full pruning?

* degreeOfParalleism

* small refactor

* fix build

* Fixing BatchedTrieVistior

* 25% of cores

* Update Pruning config

* add logger

* fix
kamilchodola added a commit that referenced this pull request Aug 1, 2023
* Add missing peer types to limits (#5838)

* Add missing peer types to limits

* Add Reth

* Flaky test

* Use better processing timer for logging (#5843)

* Use different processing timer

* Use single queue

* Revert to simpler

* Add Gnosis Shanghai hard-fork timestamp (#5848)

* Don't use DarkGray (#5849)

* Bump to 1.20.0-rc

* MaxDegreeOfParallelism defaults for full pruning (#5662)

* other defaults for full pruning?

* degreeOfParalleism

* small refactor

* fix build

* Fixing BatchedTrieVistior

* 25% of cores

* Update Pruning config

* add logger

* fix

* Perf/smoother peer discovery (#5846)

* Continuously connect

* Added another delay due to disconnect

* Increase useless peer timeout

* Simple rate limiter

* Integrate with peer manager

* Adjust some stats

* Check all peer, and uses pending variable

* Make logic clearer

* Minor cleanup

* Missed cancellation token

* Cancel setup outgoing peer connection if throttled

* Lockless ratelimiter

* Addressing comment

* Minor adjustments

* Whitespace

* Having trouble getting candidate at higher speed

* Make test more consistent

* Even more lenient

(cherry picked from commit 6ab0a3a)

* Update RocksDB package (#5883)

(cherry picked from commit 4e2bf0f)

* Update DotNetty feed

(cherry picked from commit 3e330ad)

* Update DotNetty package

(cherry picked from commit 47a92ee)

* add more gnosis bootnodes (#5910)

(cherry picked from commit b7086a9)

* Remove `v` of non-legacy tx signature from RPC response (#5927)

* minor fixes

* Update version to 1.20.0

* Bump to 1.20.1

* Restore `v` in tx signature for Geth compatibility (#5937)

* Restore DotNetty feed (#5976)

* Revert `nuget.config`

* Update DotNetty package

---------

Co-authored-by: Ben Adams <thundercat@illyriad.co.uk>
Co-authored-by: Ruben Buniatyan <rubo@users.noreply.github.com>
Co-authored-by: Kamil Chodoła <kamil@nethermind.io>
Co-authored-by: Marek Moraczyński <marekm2504@gmail.com>
Co-authored-by: Amirul Ashraf <asdacap@gmail.com>
Co-authored-by: Marcin Sobczak <marcindsobczak@gmail.com>