
feat: construct state sync parts using flat storage #8927

Merged
merged 32 commits into from
May 25, 2023

Conversation

@Longarithm (Member) commented Apr 18, 2023

Generate the inner part of a state part using flat storage, using the idea presented in #8984.

For reference: this makes state sync 100x faster; state sync for shard 3 should now complete in 30 minutes :) https://near.zulipchat.com/#narrow/stream/297873-pagoda.2Fnode/topic/State.20Sync.20update.202023-05-23/near/360715229

In short, if the flat storage head corresponds to the state root for which we sync state, it is enough to read only the boundary nodes, and the inner trie part can be reconstructed from a range of KV pairs from the state. The main logic for this is contained in `Trie::get_trie_nodes_for_part_with_flat_storage`.

This requires a couple of minor changes:

  • we now allow creating "view" `Trie`s with flat storage as well. As before, we want to avoid creating non-view `Trie`s, because `TrieCache` accesses may be blocking for chunk processing
  • `get_head_hash` and `shard_uid` methods for `FlatStorage`, allowing us to make correct range queries to flat storage
  • `FlatStateValue` moved to `primitives` to allow more general access
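The range-query idea above can be sketched with a toy model: a `BTreeMap` standing in for flat storage, with one function selecting the KV pairs a state part covers. All names here are hypothetical illustrations, not the actual nearcore API:

```rust
use std::collections::BTreeMap;
use std::ops::Bound;

// Toy stand-in for flat storage: a sorted key-value map.
// Returns the KV pairs in the half-open key interval [begin, end)
// covered by one state part; `end = None` means "to the last key".
fn kv_pairs_for_part(
    flat_state: &BTreeMap<Vec<u8>, Vec<u8>>,
    begin: &[u8],
    end: Option<&[u8]>,
) -> Vec<(Vec<u8>, Vec<u8>)> {
    let upper = match end {
        Some(e) => Bound::Excluded(e.to_vec()),
        None => Bound::Unbounded,
    };
    flat_state
        .range((Bound::Included(begin.to_vec()), upper))
        .map(|(k, v)| (k.clone(), v.clone()))
        .collect()
}

fn main() {
    let mut flat_state = BTreeMap::new();
    flat_state.insert(b"alice".to_vec(), b"1".to_vec());
    flat_state.insert(b"bob".to_vec(), b"2".to_vec());
    flat_state.insert(b"carol".to_vec(), b"3".to_vec());
    // KV pairs for the part covering ["b", "c"): only "bob".
    let part = kv_pairs_for_part(&flat_state, b"b", Some(b"c"));
    assert_eq!(part.len(), 1);
    assert_eq!(part[0].0, b"bob".to_vec());
}
```

In the real implementation the boundary trie nodes still come from disk; only the inner part is rebuilt from such a sorted KV range.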

TODO

  • prometheus metrics
  • integration test checking that flat storage is used during normal block processing on client (or wait for #9090)

Testing

https://nayduck.near.org/#/run/3023

A big sanity test for `get_trie_nodes_for_part_with_flat_storage` covering all scenarios I could think of:

  • results with and without flat storage must match
  • a result with an incorrect flat storage must be an error
  • a result with flat storage and a missing intermediate node should still be okay

near-bulldozer bot pushed a commit that referenced this pull request May 19, 2023
Needed for #8927.

1. From what I see in https://doc.rust-lang.org/rust-by-example/scope/lifetime/explicit.html, lifetimes introduce a restriction on passed variables: they must outlive the function's lifetime. But this doesn't make sense for an iterator. It doesn't save much memory, because the iterator is expected to return lots of vectors anyway, and the slices `from` and `to` are converted to vectors internally anyway. This allows us to construct DB keys inside `iter_flat_state_entries`.
2. Construction of DB keys is now hidden inside the `flat` module. You pass state keys and receive state keys back, without knowing how they are stored at the lower level.

## Testing

@jbajic would you like to cover `iter_flat_state_entries` with some tests?
This function will be covered by integration tests for state parts anyway, and it is not used much currently.
But it would be nice to have a test which creates flat storage for, say, 3 shards, writes some KV pairs to it, and then checks that the flat state iterator returns the correct data.
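The suggested test shape can be sketched with an in-memory stand-in (a sorted map per shard; `build_flat_storages` and the key scheme are hypothetical, not the real test helpers):

```rust
use std::collections::BTreeMap;

// Hypothetical in-memory stand-in: one sorted map per shard.
// Keys are prefixed with the shard id so we can check attribution later.
fn build_flat_storages(num_shards: usize) -> Vec<BTreeMap<Vec<u8>, Vec<u8>>> {
    (0..num_shards)
        .map(|shard_id| {
            let mut m = BTreeMap::new();
            // Write a few KV pairs tagged with the shard id.
            for i in 0..3u8 {
                m.insert(vec![shard_id as u8, i], vec![i]);
            }
            m
        })
        .collect()
}

fn main() {
    let storages = build_flat_storages(3);
    // The iterator over each shard's flat state must return exactly the
    // pairs written to that shard, in key order.
    for (shard_id, storage) in storages.iter().enumerate() {
        let entries: Vec<_> = storage.iter().collect();
        assert_eq!(entries.len(), 3);
        assert!(entries.iter().all(|(k, _)| k[0] == shard_id as u8));
    }
}
```

A real test would replace the `BTreeMap` iteration with `iter_flat_state_entries` over an actual flat storage instance.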
@Longarithm Longarithm self-assigned this May 23, 2023
@Longarithm Longarithm added the A-storage Area: storage and databases label May 23, 2023
@Longarithm Longarithm linked an issue May 23, 2023 that may be closed by this pull request
@Longarithm Longarithm marked this pull request as ready for review May 23, 2023 17:15
@Longarithm Longarithm requested a review from a team as a code owner May 23, 2023 17:15
@Longarithm Longarithm changed the title draft: state sync from flat storage draft: construct state sync parts using flat storage May 23, 2023
@Longarithm Longarithm changed the title draft: construct state sync parts using flat storage feat: construct state sync parts using flat storage May 23, 2023
@pugachAG (Contributor) left a comment


overall LGTM!

match flat_storages.get(&shard_uid) {
    Some(flat_storage) => flat_storage.clone(),
    None => {
        debug!(target: "chain", "FlatStorage is not ready");
nit: I guess "store" target would be more appropriate than "chain" here

shard_layout: &ShardLayout,
shard_id: u64,
) -> Result<bool, StorageError> {
pub fn key_belongs_to_shard(key: &Box<[u8]>, shard_uid: &ShardUId) -> Result<bool, StorageError> {
let's fix the key type since we are changing this code anyway: `key: &[u8]`
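The reason `&[u8]` is preferable to `&Box<[u8]>`: deref coercion lets callers pass slices, arrays, `Vec<u8>`, or boxed slices, while `&Box<[u8]>` forces them to own a boxed slice. A minimal illustration (`key_len` is a made-up function):

```rust
// Taking `&[u8]` accepts many owned representations via deref coercion.
fn key_len(key: &[u8]) -> usize {
    key.len()
}

fn main() {
    let boxed: Box<[u8]> = vec![1, 2, 3].into_boxed_slice();
    let vec: Vec<u8> = vec![4, 5];
    assert_eq!(key_len(&boxed), 3); // Box<[u8]> coerces to &[u8]
    assert_eq!(key_len(&vec), 2);   // Vec<u8> coerces too
    assert_eq!(key_len(&[9]), 1);   // fixed-size arrays as well
}
```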

self.0.flat_storage_manager.chunk_view(shard_uid, block_hash, is_view);
let flat_storage_chunk_view = block_hash
    .map(|block_hash| self.0.flat_storage_manager.chunk_view(shard_uid, block_hash))
    .flatten();
nit: use `.and_then` instead of `.map(..).flatten()`
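The two forms are equivalent on `Option`; `and_then` just avoids building the intermediate `Option<Option<T>>`. A minimal illustration (the closure is hypothetical):

```rust
fn main() {
    let block_hash: Option<u32> = Some(7);
    // Hypothetical lookup that may itself return None.
    let lookup = |h: u32| if h % 2 == 1 { Some(h * 10) } else { None };

    // .map(..) produces Option<Option<T>>, which .flatten() collapses;
    // .and_then does both in one step.
    let via_map_flatten = block_hash.map(lookup).flatten();
    let via_and_then = block_hash.and_then(lookup);
    assert_eq!(via_map_flatten, via_and_then);
    assert_eq!(via_and_then, Some(70));
}
```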

};
let key_end = key_end.as_deref();

Ok(flat_storage_chunk_view.iter_flat_state_entries(key_begin, key_end))
nit: use `iter_flat_state_entries(key_begin.as_deref(), key_end.as_deref())` instead of redefining the variables; that looks a bit weird
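For context, `Option::as_deref` borrows an `Option<Vec<u8>>` as `Option<&[u8]>` without rebinding or consuming the original, which is why it can be called inline at the call site:

```rust
fn main() {
    let key_end: Option<Vec<u8>> = Some(vec![1, 2, 3]);
    // as_deref turns Option<Vec<u8>> into Option<&[u8]> by borrowing;
    // key_end itself remains usable afterwards.
    let borrowed: Option<&[u8]> = key_end.as_deref();
    assert_eq!(borrowed, Some(&[1u8, 2, 3][..]));
    assert!(key_end.is_some()); // not moved
}
```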

core/store/src/trie/state_parts.rs
let in_memory_created_nodes =
    trie_values.iter().filter(|entry| !disk_read_hashes.contains(&hash(*entry))).count();
let trie_creation_duration = trie_creation_start.elapsed();
tracing::info!(
I suggest avoiding manually formatted strings for logging in general and using structured logging instead: `%values_ref`, ..., `?trie_creation_duration`, .... This is good practice because it still results in very readable logs, but avoids unnecessary string formatting overhead when the log level is not enabled. I also find it a bit easier to grep in the logs. Not a strong opinion here, just something to consider.

) -> Result<impl Iterator<Item = (Vec<u8>, Box<[u8]>)> + 'a, StorageError> {
    let flat_storage_chunk_view = match &self.flat_storage_chunk_view {
        None => {
            return Err(StorageError::StorageInconsistentState(format!(
I guess we don't need `format!` here
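For a fixed message with no interpolation, `.to_string()` (or `String::from`) produces the same `String` without pulling in the formatting machinery; a trivial illustration:

```rust
fn main() {
    // Both yield the same String; format! only earns its keep when
    // there are placeholders to fill in.
    let a = format!("flat storage chunk view is missing");
    let b = "flat storage chunk view is missing".to_string();
    assert_eq!(a, b);
}
```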

@near-bulldozer near-bulldozer bot merged commit ff35a34 into near:master May 25, 2023
nikurt pushed a commit that referenced this pull request May 31, 2023
nikurt pushed a commit to nikurt/nearcore that referenced this pull request Jun 8, 2023
nikurt pushed a commit to nikurt/nearcore that referenced this pull request Jun 8, 2023
nikurt pushed a commit that referenced this pull request Jun 13, 2023
Labels
A-storage Area: storage and databases
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Construct state sync parts using flat storage
4 participants