As discussed, it does not seem feasible to perform the migration for an archival node.
The main issue is that in order to migrate a record stored under the key `shard_uid + node_hash` to a record stored under the key `account_id + node_hash`, we would need to know what the account id is. This information is not stored in the value for most (all?) value types, so it's impossible to recover it by just iterating the State column in RocksDB.
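To make the problem concrete, here is a minimal sketch of the two key layouts. The function names and exact byte widths are illustrative, not nearcore's actual types; the point is that the old key and the stored value contain no account id, so the new key cannot be computed from the old record alone.

```rust
// Hypothetical key layouts (names and sizes are illustrative).
// Old format: 8-byte shard_uid followed by a 32-byte node hash.
fn old_key(shard_uid: [u8; 8], node_hash: [u8; 32]) -> Vec<u8> {
    let mut k = Vec::with_capacity(40);
    k.extend_from_slice(&shard_uid);
    k.extend_from_slice(&node_hash);
    k
}

// New format: the account id followed by the same 32-byte node hash.
fn new_key(account_id: &str, node_hash: [u8; 32]) -> Vec<u8> {
    let mut k = Vec::with_capacity(account_id.len() + 32);
    k.extend_from_slice(account_id.as_bytes());
    k.extend_from_slice(&node_hash);
    k
}

fn main() {
    let hash = [0u8; 32];
    // Nothing in the old key (or the value it points to) names an account,
    // so a plain scan over old keys cannot produce new keys.
    assert_eq!(old_key([1, 0, 0, 0, 0, 0, 0, 0], hash).len(), 40);
    assert_eq!(new_key("alice.near", hash).len(), "alice.near".len() + 32);
}
```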
The only way that I can think of to find out the account id is to iterate the trie starting from a root and migrate every node along the way. For an RPC node this may still be possible: migrate all of the nodes at the current head using flat storage, then wait 5-6 epochs to let garbage collection remove the unmigrated data.
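The trie walk above is the only place the account id is actually known, because it is the key path accumulated from the root. A toy sketch of that traversal (the `Node` type is a stand-in, not nearcore's trie representation):

```rust
use std::collections::HashMap;

// Toy trie: leaves carry values, branches map key fragments to children.
enum Node {
    Leaf(Vec<u8>),
    Branch(HashMap<u8, Node>),
}

// Walk from the root, tracking the key path so far. Only here, mid-traversal,
// is the full key (the account id) available to build the new record key.
fn migrate(node: &Node, path: &mut Vec<u8>, out: &mut Vec<(Vec<u8>, Vec<u8>)>) {
    match node {
        Node::Leaf(value) => out.push((path.clone(), value.clone())),
        Node::Branch(children) => {
            for (frag, child) in children {
                path.push(*frag);
                migrate(child, path, out);
                path.pop();
            }
        }
    }
}

fn main() {
    // Two leaves under paths [1,2] and [1,3].
    let mut inner = HashMap::new();
    inner.insert(2u8, Node::Leaf(b"a".to_vec()));
    inner.insert(3u8, Node::Leaf(b"b".to_vec()));
    let mut root_children = HashMap::new();
    root_children.insert(1u8, Node::Branch(inner));
    let root = Node::Branch(root_children);

    let mut out = Vec::new();
    migrate(&root, &mut Vec::new(), &mut out);
    out.sort();
    assert_eq!(out[0], (vec![1, 2], b"a".to_vec()));
    assert_eq!(out[1], (vec![1, 3], b"b".to_vec()));
}
```

This is exactly why the approach needs a state root to start from: an archival node would have to repeat the walk for every historical root, which is what makes it impractical there.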
Unfortunately, I believe this approach would not work for an archival node, and I can't think of any good way around it.
One potential alternative is to keep the historical data in the old format: old data stays under old keys, and newer data is written under new keys. We would then need to either duplicate a lot of the data, or be prepared to fall back to the old key format on reads, incurring up to a 2x slowdown.
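The fallback variant of that read path can be sketched in a few lines (a plain `HashMap` stands in for the RocksDB column):

```rust
use std::collections::HashMap;

// Try the new key format first; fall back to the old format on a miss.
// Reads of historical data pay up to two lookups, hence the 2x worst case.
fn get<'a>(
    db: &'a HashMap<Vec<u8>, Vec<u8>>,
    new_key: &[u8],
    old_key: &[u8],
) -> Option<&'a Vec<u8>> {
    db.get(new_key).or_else(|| db.get(old_key))
}

fn main() {
    let mut db = HashMap::new();
    // A historical record stored only under the old key format.
    db.insert(b"old".to_vec(), b"value".to_vec());
    // The new-format lookup misses, the old-format fallback succeeds.
    assert_eq!(get(&db, b"new", b"old"), Some(&b"value".to_vec()));
    assert_eq!(get(&db, b"x", b"y"), None);
}
```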
I'm very much open to new ideas, but as far as I can see there is no viable solution right now.
It's worth mentioning that archival nodes may be replaced by read-rpc in the future, but that seems unlikely to happen before we would need it in order to implement resharding.