rpc: Use the blocks pinning API for chainHead methods #13233
Conversation
@@ -60,15 +60,15 @@ use sp_runtime::{
 use std::{marker::PhantomData, sync::Arc};

 /// An API for chain head RPC calls.
-pub struct ChainHead<BE, Block: BlockT, Client> {
+pub struct ChainHead<BE: Backend<Block> + 'static, Block: BlockT, Client> {
Just a nit; nowadays I'm leaning towards not putting any trait bounds on generic params in structs unless they are needed for the definition of the struct (though I've been back and forth on this myself), so I guess I'd lean towards removing the bounds if possible (but curious on what others think!)
Yep, I do like that approach. One nice thing about not having bounds on the structure is that we do not contaminate every structure containing that type.
For soundness reasons, I presume, the compiler requires the bounds for the `Drop` impl to also be present on the struct. The `SubscriptionHandle` implements `Drop`, which needs access to the `unpin_block` method defined by the `Backend<Block>` trait. And unfortunately, this then leaks into `SubscriptionManagement` and into `ChainHead` itself.
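To illustrate the language rule being referenced here, a minimal self-contained sketch (simplified types, not the actual Substrate definitions) of why a `Drop` impl with a trait bound forces that bound onto the struct definition:

```rust
// The compiler rejects a `Drop` impl whose bounds are stricter than the
// struct's own bounds, so needing `B: Backend` inside `drop` forces the
// bound onto `Handle` itself, and onto anything that embeds `Handle<B>`.
trait Backend {
    fn unpin_block(&self);
}

// Removing the `B: Backend` bound here makes the `Drop` impl below fail to compile.
struct Handle<B: Backend> {
    backend: B,
}

impl<B: Backend> Drop for Handle<B> {
    fn drop(&mut self) {
        // The bound is needed to call the trait method during drop.
        self.backend.unpin_block();
    }
}
```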
Thanks for this; TIL that `Drop` impls with trait bounds need matching ones on the struct :)
Looks good in general, just some small things that need to be improved!
@@ -94,13 +104,17 @@ impl<Block: BlockT> SubscriptionHandle<Block> {

		if inner.blocks.len() == inner.max_pinned_blocks {
			// We have reached the limit. However, the block can be already inserted.
			if inner.blocks.contains(&hash) {
Isn't there a logic error here? If we try to pin the same block twice, the first unpin will unpin it in the backend (assuming it is only pinned by this subscription). Is it documented in the spec that when you receive the same block as best and, some seconds later, as finalized (without having called unpin in between), you only need to call unpin once?
Nice catch! I couldn't find anything super specific in the RPC spec regarding this, though I was expecting users to call `unpin` only once (when they are no longer interested in using the block).
From the spec:
Note: A JSON-RPC client should call chainHead_unstable_unpin only after it is sure to no longer be interested in a certain block. This typically happens after the block has been finalized or pruned. There is no requirement to call chainHead_unstable_unpin as quickly as possible.
I was also thinking along the lines: the newBlock event is where the block is declared and pinned; and bestBlockChanged and finalized are updates about what happens with that block.
I think we could update the spec documentation for the `unpin` method to state that only block hashes reported by the `newBlock` event are supposed to be unpinned. What do you think about it?
I was also thinking along the lines: the newBlock event is where the block is declared and pinned; and bestBlockChanged and finalized are updates about what happens with that block.
It sounds reasonable to me that:
- a block is pinned just prior to a client being given a `newBlock` event,
- later we might see `best`/`finalised` events emitted re that block (and no pinning happens at this point),
- at any point after the `newBlock`, whenever a user would like to unpin that block hash, they will call "unpin" once against the hash to unpin it.
But, if a `newBlock` is unpinned quickly and then a best/finalized event is emitted for it... should it need unpinning again? I think, offhand, I'd go with "it's been unpinned so it stays unpinned, and those events are just notifications about the block you've already declared you don't care about any more".
But that would make it annoying for people to ignore all but best/finalized blocks! Because the simplest logic to do so would be to call "unpin" on all "newBlock" events and not on any best/finalized block events.
"unpin" could potentially take both a block hash and a type (new/best/finalized) so that you can opt-out of all block events you don't care about but still keep pinned the ones you do?
Or, a best/finalized notification could pin the block again, and "unpin" just takes the hash, but that runs the risk of race conditions (I "unpin" a newBlock but by the time the unpin request hits the node, a bestBlock for that hash is emitted).
So two options then are:
- Blocks are only pinned at "newBlock". Unpin will unpin, and best/finalized events may exist for blocks that have already been unpinned.
- new/best/finalized blocks are treated as independent. "unpin" takes a hash and type to unpin exactly one of those.
Maybe there's a simpler way around this?
I was also thinking along the lines: the newBlock event is where the block is declared and pinned; and bestBlockChanged and finalized are updates about what happens with that block.
I think we should pin on the first event we see (a client can only see bestBlock/finalized without newBlock by having connected after the block was imported) and then unpin should be called when they are not interested anymore. We should update the spec to mention this behavior.
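As a rough illustration of the "pin on the first event we see" rule, here is a minimal self-contained sketch (hypothetical types, not the actual implementation) of per-subscription deduplication, so that a block reported by several events is pinned in the backend only once and needs a single unpin from the client:

```rust
use std::collections::HashSet;

/// Tracks which block hashes a single subscription has already pinned.
#[derive(Default)]
struct SubscriptionPins {
    pinned: HashSet<[u8; 32]>,
}

impl SubscriptionPins {
    /// Called for every event (newBlock, bestBlockChanged, finalized) that
    /// reports `hash`. Returns `true` only for the first event, i.e. when the
    /// backend pin should actually be taken.
    fn note_event(&mut self, hash: [u8; 32]) -> bool {
        self.pinned.insert(hash)
    }

    /// Called when the client unpins `hash`. Returns `true` if this
    /// subscription was holding a pin that should now be released.
    fn note_unpin(&mut self, hash: &[u8; 32]) -> bool {
        self.pinned.remove(hash)
    }
}
```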
Yep, we could update the spec to reflect something along these lines:
"Any block hash reported by chainHead's events is guaranteed to be pinned in memory, and it requires a single `chainHead_unstable_unpin` call to unpin the block."
The spec currently states:
The current finalized block reported in the initialized event, and each subsequent block reported with a newBlock event, is automatically considered by the JSON-RPC server as pinned.
I think we should pin on the first event we see
Yep, that's along the lines of how we pin internally for this chainHead subscription.

a client can only see bestBlock/finalized without newBlock by having connected after the block was imported

Before using the subscriptions (best block and finalized) we are:
- generating the finalized event (and pinning the finalized block)
- generating a newBlock event for each child of the finalized block, recursively (and pinning each block)

I might be wrong, but I think it's entirely possible that we can miss blocks (warp sync etc.), and as such we always pin the block on the first event that we get from the subscriptions.
In case of errors or jumps, the chainHead should produce `Stop` events.
Let me know if that is correct; I'll take an action to create a PR to clarify the spec a bit, thanks!
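A minimal, self-contained sketch (hypothetical types, not the actual chainHead follower code) of the start-up sequence described above: report the finalized block first, then walk its descendants and emit a newBlock for each, recording every reported hash as pinned:

```rust
use std::collections::{HashMap, HashSet};

type Hash = [u8; 32];

enum Event {
    Finalized(Hash),
    NewBlock { hash: Hash, parent: Hash },
}

/// Builds the initial event list for a new follower: the current finalized
/// block, followed by a `NewBlock` for every known descendant. Every reported
/// hash is inserted into `pinned`, standing in for the backend pinning call.
fn initial_events(
    finalized: Hash,
    children: &HashMap<Hash, Vec<Hash>>,
    pinned: &mut HashSet<Hash>,
) -> Vec<Event> {
    pinned.insert(finalized);
    let mut events = vec![Event::Finalized(finalized)];

    // Depth-first walk over the descendants of the finalized block.
    let mut stack: Vec<(Hash, Hash)> = children
        .get(&finalized)
        .map(|c| c.iter().map(|h| (*h, finalized)).collect())
        .unwrap_or_default();
    while let Some((hash, parent)) = stack.pop() {
        pinned.insert(hash);
        events.push(Event::NewBlock { hash, parent });
        if let Some(kids) = children.get(&hash) {
            stack.extend(kids.iter().map(|k| (*k, hash)));
        }
    }
    events
}
```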
Another way would be to allow users to call `unpin` only for hashes reported by the `Finalized` event: paritytech/json-rpc-interface-spec#32. Let me know if this makes a bit more sense, thanks!
and then unpin should be called when they are not interested anymore.
What if they call `unpin` after receiving e.g. a `newBlock`, and then the server sends a `finalized` with the same hash? I assume that the block is still unpinned even though the client has been told about it again.
If I was only interested in finalized blocks, and did not want to have any non-finalized blocks pinned any longer than necessary, what would I do?
I gather from chatting with Alex that what I'd do is hold off on calling "unpin" on anything until I see a `finalized` notification (which also lists all blocks that are "pruned"), and then I could use that list to `unpin` all blocks that are listed as "pruned", now that I know they aren't important any more. (This relies on that list of pruned blocks really just being a list of all blocks not in the finalized chain, since if they are pinned they wouldn't be pruned.) Does that make sense?
Another way would be to allow users to call unpin only for hashes reported by the Finalized event: paritytech/json-rpc-interface-spec#32. Let me know if this makes a bit more sense, thanks!
I think GitHub is being slow; this message popped up after I wrote mine :) It also seems like a very valid approach, so long as the finalized event definitely lists all of the blocks produced since the last finalized event that aren't on the finalized chain (so we know what we can safely unpin now). A nice thing about this is that the client is never sent the hash of a block that's already been unpinned (since they can't unpin until they won't hear any more about the block anyway!).
The block limit is currently set to 4096 here: substrate/client/service/src/builder.rs, line 703 (at 9015061).
The limit for the pinned block cache in the backend, however, is 1024 (arbitrarily chosen).
We should adjust these so they match, preferably keeping the backend one larger than the RPC limit, since it needs to keep additional headroom for notification pinning. I would also like to discuss the impact on node memory.
What do you guys think?
The chainHead limit (4k blocks) is also arbitrarily chosen. For the initial "pinning in database" work from #12497, we internally picked a limit of delaying the pruning by 32 blocks; considering block production at 6 seconds, that would give users roughly 3 minutes to execute their use case against the pinned block. I like the idea of keeping these values in sync, maybe even a bit more restrictive for the chainHead. To allow fairness for multiple subscriptions, in the RPC layer:
When a user first connects, the
Ideally, we'd need to submit the
I'm not quite sure what the best solution to implement is, considering the alternatives. @bkchr what do you think?
bot rebase |
Rebased |
This makes a lot of sense to me. Sounds like a good compromise between resource usage and user convenience. As long as users stay within their 64 (or whatever limit we choose), they are pretty safe. Outside of this, there is a chance of getting thrown out.
I am not sure what you mean by this? Why does it make a difference for us when a user calls unpin (as long as limits are met)?
There is no pinning for headers because we never prune them; I think we keep them around forever.
Yep, indeed this doesn't make a difference. For context, I was referring to the possible race in paritytech/json-rpc-interface-spec#32, and since we could handle that inside Substrate it shouldn't matter in the end :)
bot rebase |
Rebased |
/// This implementation assumes:
/// - most of the time subscriptions keep a few blocks of the chain's head pinned
/// - iteration through the blocks happens only when the hard limit is exceeded.
///
/// Considering the assumption, iterating (in the unlike case) the hashmap O(N) is
/// more time efficient and code friendly than paying for:
/// - extra space: an extra BTreeMap<Instant, Hash> to older hashes by oldest insertion
/// - extra time: O(log(N)) for insert/remove/find each `pin` block time per subscriptions
I guess we are relying solely on users to unpin blocks properly though if we want to uphold these assumptions?
Or will I, as a user, hit the "stop" condition of "too many pinned blocks", which would lead to the subscription closing, before this needs to worry about the hard limit being hit and iteration needed?
(Basically I'm wondering if there's a case where users can cause lots of iteration over this if they forget something or are in some way malicious? I also wonder how big it can get; maybe that'll become clear as I review.)
The pinning is indeed in the user's hands. Looking at the code again, it seems that the iteration is triggered when we exceed the global limit on how many blocks we can pin (across all subscriptions).
When the limit is exceeded, we'll iterate over all blocks of all subscriptions.
That means iterating over at most 512 blocks per subscription (in the malicious case, or in the case where all users forget to unpin).
pub fn contains_block(&self, hash: &Block::Hash) -> bool {
	let inner = self.inner.read();
	inner.blocks.contains(hash)
fn global_unpin_block(&mut self, hash: Block::Hash) {
Something I find a little hard to follow is that none of these fns are `pub` etc., and so for a bunch of these things it's hard to know which are used by the "parent" interface and which are private and only called from other fns on that type.
I wonder whether moving some of this stuff into separate files/modules would help make the inner and outer interfaces super obvious so that it's easier to see what can and can't be called by the other types?
Looks great! Just a couple of small comments/suggestions for clarity but nothing super important (I resolved a couple of my own comments as I went through and understood it better)
	return Err(SubscriptionManagementError::BlockHashAbsent)
}

BlockGuard::new(hash, sub.runtime_updates, self.backend.clone())
Looks like `lock_block` completely bypasses the internal pinned-counter mechanisms; in the end it is not influenced by `global_max_pinned_blocks` and co. Is this intended? Shouldn't blocks pinned via this mechanism follow the same policy as the ones pinned using `pin_block` and `unpin_block` directly?
Nit: I'm not 100% sure `lock_block` is the proper name for a `pin_block` with automatic `unpin` when the `BlockGuard` goes out of scope. But I can't even suggest anything better 😃
Yep, the `lock_block` function bypasses the chainHead pinning counter.
The `pin_block` reference-counts the block once per subscription and will increase the counter only on the first call; the other calls advance the state machine of the block.
We probably could use `global_pin_block` and `global_unpin_block`; however, we'd need a reference to the `SubscriptionManagement`, or add an extra lock and pass in a reference to `SubscriptionsInner`. That felt more involved, so I've relied on the backend's pinning reference count for this.
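For readers unfamiliar with the pattern, a minimal self-contained sketch (hypothetical trait and types, not the actual Substrate API) of a guard that pins a block on creation and relies on the backend's reference count to unpin it when dropped:

```rust
use std::sync::Arc;

trait Backend {
    type Hash: Copy;
    fn pin_block(&self, hash: Self::Hash);
    fn unpin_block(&self, hash: Self::Hash);
}

/// Keeps a block pinned in the backend for as long as the guard is alive.
struct BlockGuard<B: Backend> {
    hash: B::Hash,
    backend: Arc<B>,
}

impl<B: Backend> BlockGuard<B> {
    fn new(hash: B::Hash, backend: Arc<B>) -> Self {
        backend.pin_block(hash);
        Self { hash, backend }
    }
}

impl<B: Backend> Drop for BlockGuard<B> {
    fn drop(&mut self) {
        // Released as soon as the guard goes out of scope, e.g. once the
        // RPC call that needed the block has finished.
        self.backend.unpin_block(self.hash);
    }
}
```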
.subs
	.iter_mut()
	.filter_map(|(sub_id, sub)| {
		let sub_time = sub.find_oldest_block_timestamp();
If we have to choose which subscription to remove... shouldn't we look for the newest block timestamp of a subscription? What I mean is: if I have several subscriptions, isn't it the most recent one that should be considered? Otherwise I risk lowering my pinning time for a new request just because of an old one I previously made.
We had this discussion on the RPC spec (paritytech/json-rpc-interface-spec#33 (comment)) regarding how to terminate subscriptions when a "soft"/"hard" limit is reached.
As a conclusion, we picked ~ 1 minute of blocks after which the subscription could be terminated.
If the user does not `unpin` blocks fast enough (a block should not stay pinned for more than 1 minute), the subscription will be subject to termination.
Here I'm searching for the oldest block of a subscription; if that timestamp is older than 1 minute, we can terminate that specific subscription.
I believe that also provides an incentive for users to operate at the head of the chain, and to fall back to the `archive` methods or the old RPC when they want to fetch historical data.
Otherwise I risk lowering my pinning time for a new request just because of an old one I previously made.
IIUC, if a subscription has 2 blocks, one that is older than 1 minute and one newly added, we risk lowering the pinning time for the second block, because the first one makes the subscription subject to termination.
Let me know if I got this right, thanks! 🙏
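A minimal, self-contained sketch (hypothetical types and field names, not the actual subscription management code) of the selection policy described above: find each subscription's oldest pinned block and terminate the subscriptions whose oldest pin exceeds the allowed duration:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

struct Subscription {
    /// Block hash -> time at which this subscription first pinned it.
    pinned: HashMap<[u8; 32], Instant>,
}

impl Subscription {
    fn oldest_block_timestamp(&self) -> Option<Instant> {
        self.pinned.values().copied().min()
    }
}

/// Subscriptions whose oldest pinned block has been held longer than `max_duration`.
fn subscriptions_to_terminate(
    subs: &HashMap<String, Subscription>,
    max_duration: Duration,
) -> Vec<String> {
    let now = Instant::now();
    subs.iter()
        .filter(|(_, sub)| {
            sub.oldest_block_timestamp()
                .map(|oldest| now.duration_since(oldest) > max_duration)
                .unwrap_or(false)
        })
        .map(|(id, _)| id.clone())
        .collect()
}
```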
IIUC, if a subscription has 2 blocks, one that is older than 1 minute and one newly added, we risk lowering the pinning time for the second block, because the first one makes the subscription subject to termination.
Yeah, I was referring to this. But if it is documented and by design, then fair enough.
* rpc/chain_head: Add backend to subscription management Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
* rpc/chain_head: Pin blocks internally and adjust testing Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
* client/in_mem: Reference for the number of pinned blocks Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
* rpc/tests: Check in-memory references to pinned blocks Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
* rpc/chain_head: Fix clippy Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
* rpc/chain_head: Remove unused comment Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
* rpc/chain_head: Place subscription handle under `Arc` and unpin blocks on drop Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
* rpc/tests: Check all pinned blocks are unpinned on drop Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
* Apply suggestions from code review Co-authored-by: Bastian Köcher <git@kchr.de>
* Update client/rpc-spec-v2/src/chain_head/subscription.rs Co-authored-by: Bastian Köcher <git@kchr.de>
* rpc/tests: Retry fetching the pinned references for CI correctness Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
* client/service: Use 512 as maximum number of pinned blocks Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
* chain_head: Fix merging conflicts Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
* rpc/chain_head: Adjust subscriptions to use pinning API Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
* rpc/chain_head/tests: Test subscription management Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
* rpc/chain_head: Adjust chain_head follow to the new API Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
* rpc/chain_head: Adjust chain_head.rs to the new API Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
* rpc/chain_head/tests: Adjust test.rs to the new API Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
* client/builder: Use new chainHead API Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
* rpc/chain_head: Fix documentation Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
* rpc/chain_head: Fix clippy Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
* client/in_mem: ChainHead no longer uses `in_mem::children` Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
* Update client/rpc-spec-v2/src/chain_head/subscription.rs Co-authored-by: Sebastian Kunert <skunert49@gmail.com>
* Update client/rpc-spec-v2/src/chain_head/subscription.rs Co-authored-by: Sebastian Kunert <skunert49@gmail.com>
* Update client/rpc-spec-v2/src/chain_head/subscription.rs Co-authored-by: Sebastian Kunert <skunert49@gmail.com>
* Update client/rpc-spec-v2/src/chain_head/subscription.rs Co-authored-by: Sebastian Kunert <skunert49@gmail.com>
* chain_head: Add block state machine Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
* Address feedback Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
* Use new_native_or_wasm_executor Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
* chain_head: Remove 'static on Backend Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
* chain_head: Add documentation Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
* chain_head: Lock blocks before async blocks Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
* chain_head_follower: Remove static on backend Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
* Update client/service/src/builder.rs Co-authored-by: Davide Galassi <davxy@datawok.net>
* Update client/service/src/builder.rs Co-authored-by: Davide Galassi <davxy@datawok.net>
* chain_head: Add BlockHeaderAbsent to the PartialEq impl Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
* client: Add better documentation around pinning constants Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
* chain_head: Move subscription to dedicated module Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
* subscription: Rename global pin / unpin functions Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
---------
Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
Co-authored-by: Bastian Köcher <git@kchr.de>
Co-authored-by: parity-processbot <>
Co-authored-by: Sebastian Kunert <skunert49@gmail.com>
Co-authored-by: Davide Galassi <davxy@datawok.net>
This PR leverages the new pinning API introduced by #13157.
Blocks reported by multiple chainHead events (e.g. one from the `BestBlock` and one from the `Finalized` branch) are pinned once per subscription and require a single `unpin`.
To ensure proper testing, the in-memory `Backend` is extended to describe the number of active pinned references for a given block. An in-memory `Backend` instance is used for testing the `chainHead`; therefore the `children` method of the in-memory `Backend` is extended to return an empty list.
CC: @paritytech/subxt-team
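As an illustration of the kind of assertion the extended in-memory backend enables (using simplified, hypothetical types rather than the real `Backend` API), a reference-counted pin store that a test can query for the number of active pin references of a block:

```rust
use std::collections::HashMap;

#[derive(Default)]
struct PinStore {
    refs: HashMap<[u8; 32], u32>,
}

impl PinStore {
    fn pin(&mut self, hash: [u8; 32]) {
        *self.refs.entry(hash).or_insert(0) += 1;
    }

    fn unpin(&mut self, hash: &[u8; 32]) {
        if let Some(count) = self.refs.get_mut(hash) {
            *count -= 1;
            if *count == 0 {
                self.refs.remove(hash);
            }
        }
    }

    /// Number of active pin references for `hash` (0 if not pinned).
    fn ref_count(&self, hash: &[u8; 32]) -> u32 {
        self.refs.get(hash).copied().unwrap_or(0)
    }
}

fn main() {
    let mut store = PinStore::default();
    let hash = [1u8; 32];
    store.pin(hash);
    store.pin(hash);
    assert_eq!(store.ref_count(&hash), 2);
    store.unpin(&hash);
    store.unpin(&hash);
    assert_eq!(store.ref_count(&hash), 0);
}
```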