
core/authority-discovery: Enable authorities to discover each other #3452

Merged
merged 23 commits into paritytech:master on Sep 6, 2019

Conversation

mxinden
Contributor

@mxinden commented Aug 21, 2019

With the authority-discovery module an authoritative node makes itself
discoverable and is able to discover other authorities. Once discovered, a node
can connect directly to other authorities instead of relying on multi-hop
gossip. The module runs two periodic loops; a rough sketch of both follows the
list below.

  1. Making itself discoverable

    1. Retrieve its external addresses

    2. Add its network peer ID to the addresses

    3. Sign the above

    4. Put the signature and the addresses on the libp2p Kademlia DHT

  2. Discovering other authorities

    1. Retrieve the current set of authorities

    2. Start DHT queries for the IDs of the authorities

    3. Validate the signatures of the retrieved key-value pairs

    4. Add the retrieved external addresses as reserved priority nodes to the peerset

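A rough sketch of the two loops, for illustration only: the trait and method names below (Network, RuntimeApi, put_value, get_value, sign, ...) are simplified stand-ins, not the actual core/authority-discovery API.

/// Illustrative stand-ins only, not the real interfaces.
trait Network {
    fn external_addresses(&self) -> Vec<String>;
    fn put_value(&self, key: Vec<u8>, value: Vec<u8>);
    fn get_value(&self, key: &[u8]);
    fn set_priority_group(&self, group: String, addresses: Vec<String>);
}

trait RuntimeApi {
    fn authorities(&self) -> Vec<Vec<u8>>;
    /// Returns (signature, authority id) if a local session key can sign.
    fn sign(&self, payload: &[u8]) -> Option<(Vec<u8>, Vec<u8>)>;
}

// Making itself discoverable (steps 1.1 to 1.4): serialize the external
// addresses, sign them via the runtime (which holds the session key) and
// put signature plus addresses on the Kademlia DHT under the authority id.
fn publish_own_addresses(network: &impl Network, runtime: &impl RuntimeApi) {
    let payload = network.external_addresses().join(",").into_bytes();
    if let Some((signature, authority_id)) = runtime.sign(&payload) {
        let mut value = signature;
        value.extend(payload);
        network.put_value(authority_id, value);
    }
}

// Discovering other authorities (steps 2.1 and 2.2): start a DHT query per
// authority id. Validating the returned key-value pairs and adding the
// addresses to the peerset via set_priority_group (steps 2.3 and 2.4)
// happen once the query results come back as DHT events.
fn query_other_authorities(network: &impl Network, runtime: &impl RuntimeApi) {
    for authority_id in runtime.authorities() {
        network.get_value(&authority_id);
    }
}
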
Pull request status

The pull request is split into multiple self-contained commits to make reviewing easier.

Replaces #3247. Addresses #1693.

This does not include the GRANDPA authorities. I suggest adding these in a subsequent pull request. See #3247 (comment) for details.

The srml/authority-discovery module implements the OneSessionHandler in
order to keep its authority set in sync. This commit adds the module to
the set of session handlers.
Instead of having the network worker implement the Future trait, have it
implement the Stream interface, returning DHT events.

For now these events are ignored in build_network_future but will be
used by the core/authority-discovery module in subsequent commits.
@mxinden
Contributor Author

mxinden commented Aug 21, 2019

@twittner can you give 94a238a a review, specifically the use of the Prost crate and the bounded channel between build_network_future and authority discovery?

@mxinden
Contributor Author

mxinden commented Aug 21, 2019

@tomaka first of all this touches core/services. I am happy to rebase this on top of #3382. What would you suggest?

In addition can you give d34eb4f a review?

@mxinden
Contributor Author

mxinden commented Aug 21, 2019

@arkpar following up on #3247 (comment) this is now using set_priority_group. Given that you have written lots of core/peerset, can you give 7de7db7 a review?

When the channel from the network service to the network worker is finished,
also finish the network worker stream, given that both the network worker and
the network service have the same lifetime.

Authority discovery implements the Future trait. Polling this future
always returns NotReady. The future uses Tokio's timer crate to become
active periodically, e.g. to advertise its own external addresses.

This commit ensures that the underlying Tokio task is always registered
with the Tokio reactor so that it is woken up on the next interval tick. This
is achieved by making sure `interval.poll` returns `NotReady` at least
once within each `AuthorityDiscovery.poll` execution.
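
A minimal sketch of that pattern, assuming the futures 0.1 Stream interface (the concrete stream in the module is a Tokio timer interval):

use futures::{Async, Stream};

// Poll the interval until it returns NotReady. The final NotReady is what
// registers the current task with the reactor, so it is woken on the next
// tick even if several ticks were already pending.
fn drain_interval<S: Stream>(interval: &mut S) -> Result<bool, S::Error> {
    let mut ticked = false;
    while let Async::Ready(Some(_)) = interval.poll()? {
        ticked = true;
    }
    Ok(ticked)
}
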
@arkpar
Member

arkpar commented Aug 23, 2019

I'm a bit concerned that this uses protobuf-encoded messages, while all the other substrate-level protocols are SCALE-encoded.

My other concern is that it introduces too many calls into the runtime. These are expensive. Authorities can be cached by the current best block hash at least. This may be optimized later though.

Looks good otherwise.

@mxinden
Contributor Author

mxinden commented Aug 26, 2019

I'm a bit concerned that this uses protobuf-encoded messages, while all the other substrate-level protocols are SCALE-encoded.

I don't have strong opinions on this. I know that @twittner is reworking /network/src/protocol right now using Protobuf. In addition I think Protobuf makes it a lot easier for implementation teams in other languages to interoperate with Substrate core.

@folsen
Contributor

folsen commented Aug 26, 2019

I don't have strong opinions on this. I know that @twittner is reworking /network/src/protocol right now using Protobuf. In addition I think Protobuf makes it a lot easier for implementation teams in other languages to interoperate with Substrate core.

Realistically though, SCALE isn't going away, so now we'll just have SCALE and Protobuf. Unless we intend to refactor all SCALE stuff to Protobuf, I'd be against introducing Protobuf anywhere, including /network/src/protocol. Where does the reasoning/decision to refactor that come from? (cc @twittner)

@twittner
Contributor

twittner commented Aug 26, 2019

I think the encoding was discussed during the networking workshop, which I did not attend (cc @tomaka). If I understood correctly, the suggestion was to move the more general structures to Protobuf or CBOR and keep some of their fields SCALE-encoded, which has the unfortunate consequence that encoding/decoding requires multiple steps and is more costly than one would hope. As for SCALE specifically, one aspect I would be worried about is this:

Tuples and Structures
A fixed-size series of values, each with a possibly different but predetermined and fixed type. This is simply the concatenation of each encoded value. [1]

Please correct me if I am wrong but this reads to me as if SCALE does not have any provisions for forwards and backwards compatibility.

@arkpar
Member

arkpar commented Aug 26, 2019

Please correct me if I am wrong but this reads to me as if SCALE does not have any provisions for forwards and backwards compatibility.

That's correct, although I guess we could add versioning support to SCALE-derive eventually. See #1518. And since fields are still SCALE-encoded the compatibility issue is still there.

Also it would be good to get an idea of the size difference for a typical message being SCALE encoded vs protobuf encoded first.

@mxinden
Contributor Author

mxinden commented Aug 27, 2019

Also it would be good to get an idea of the size difference for a typical message being SCALE encoded vs protobuf encoded first.

@arkpar for what it is worth, within this use case a DHT payload with a single multiaddress, its peer identifier and an sr25519 signature is 68 bytes in SCALE and 66 bytes in Protobuf.

@twittner
Contributor

I guess we could add versioning support to SCALE-derive eventually.

Versioning is a separate aspect. If a change to a message allows older clients to process newer messages (backwards compatibility) and newer clients to process older messages (forwards compatibility) there is no need to introduce new versions with every change. For this to work, the change can only be optional (or needs to have a default value), so that newer clients can deal with the absence of it. Older clients in turn need to ignore the presence of information they do not know about. Instead of concatenating struct field values one would need to encode structs as key value pairs where the key does not necessarily have to be a string.
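
To make the concern concrete, here is a minimal sketch using parity-scale-codec, assuming the codec alias from this pull request's Cargo.toml with its derive feature enabled; the struct fields are purely illustrative:

use codec::{Decode, Encode};

#[derive(Encode, Decode, Debug)]
struct AddressesV1 { addresses: Vec<Vec<u8>> }

#[derive(Encode, Decode)]
struct AddressesV2 { addresses: Vec<Vec<u8>>, ttl: u64 }

fn main() {
    let v2 = AddressesV2 { addresses: vec![b"/ip4/1.2.3.4/tcp/30333".to_vec()], ttl: 36 };
    let bytes = v2.encode();
    // Decoding with the old layout happens to work here only because the new
    // field was appended at the end and SCALE ignores trailing bytes. Any
    // other change (reordering, inserting, changing a type) silently corrupts
    // the data or fails, as there are no field tags to skip unknown fields.
    let as_v1 = AddressesV1::decode(&mut &bytes[..]);
    println!("{:?}", as_v1);
}

Protobuf, by contrast, tags every field, so decoders can skip unknown fields and fill in defaults for absent ones.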

@twittner
Contributor

twittner commented Aug 27, 2019

Also it would be good to get an idea of the size difference for a typical message being SCALE encoded vs protobuf encoded first.

I have created a small test repository, comparing the encoding sizes of SCALE, CBOR and Protocol Buffers: https://github.com/twittner/encoding-size

Sample run:

SCALE             = 148792 bytes (1)
CBOR              = 148940 bytes (1.0009947)
Protocol Buffers  = 148945 bytes (1.0010283)
CBOR (serde)      = 149153 bytes (1.0024261)
CBOR (serde miel) = 149101 bytes (1.0020767)

Edit: I think the size differences are negligible.

Kademlia's default time-to-live for DHT records is 36h, with records being
republished every 24h. Given that a node could restart at any point in
time, one cannot depend on the republishing process; thus publishing one's
own external addresses should happen on an interval shorter than 36h.

In addition, have the first tick of the interval fire at the beginning, not
after one interval duration.
Abstract NetworkService via a NetworkProvider trait to be able to mock it
within the unit tests. In addition, add basic unit tests for
`publish_own_ext_addresses`, `request_addresses_of_others` and
`handle_dht_events`.

Split the global `AuthorityDiscovery.interval` into two intervals:
`publish_interval` and `query_interval`. DHT entries of other
authorities can change at any point in time; therefore one should query
more often than one publishes.
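
A minimal sketch of how such intervals can be constructed with Tokio 0.1; the durations are illustrative, not the module's actual values:

use std::time::{Duration, Instant};
use tokio::timer::Interval;

// The first argument is the time of the first tick, so Instant::now() makes
// both intervals fire immediately on startup. The publish period stays well
// below Kademlia's 36h record TTL; queries run more often than publishes
// because other authorities' records can change at any time.
fn intervals() -> (Interval, Interval) {
    let publish_interval = Interval::new(Instant::now(), Duration::from_secs(12 * 60 * 60));
    let query_interval = Interval::new(Instant::now(), Duration::from_secs(10 * 60));
    (publish_interval, query_interval)
}
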
The authority discovery module treats an authority identifier as an
opaque string. Thus the type abstraction `AuthorityId` is unnecessary,
bloating the `core/service` construction code.
twittner added a commit to twittner/substrate that referenced this pull request Aug 30, 2019
Add two new `NetworkBehaviour`s, one handling remote block requests
and another one to handle light client requests (both local and from
remote). The change is motivated by the desire to use multiple
substreams of a single connection for different protocols. To achieve
this, libp2p's `OneShotHandler` is used as a protocol handler in each
behaviour. It will open a fresh substream for the duration of the
request and close it afterwards. For block requests, we currently only
handle incoming requests from remote and tests are missing. For light
client handling we support incoming requests from remote and also
ported a substantial amount of functionality over from
`light_dispatch.rs` (including several tests). However the result lacks
in at least two aspects:

(1) We require external updates w.r.t. the best block per peer and
currently nothing updates this information.
(2) We carry a lot of peer-related state around.

Both aspects could be simplified by externalising peer selection and
just requiring a specific peer ID where the request should be sent to.
We still have to maintain some peer related state due to the way
libp2p's swarm and network behaviour work (e.g. we must make sure to
always issue `NetworkBehaviourAction::SendEvent`s to peers we are
connected to, otherwise the actions die a silent death).

Another change implemented here is the use of protocol buffers as the
encoding for network messages. Certain individual fields of messages
are still SCALE encoded. There has been some discussion about this
in another PR (paritytech#3452), so
far without resolution.
client = { package = "substrate-client", path = "../../core/client" }
authority-discovery-primitives = { package = "substrate-authority-discovery-primitives", path = "./primitives", default-features = false }
codec = { package = "parity-scale-codec", default-features = false, version = "1.0.3" }
futures = "0.1.17"
Contributor

What is the reason for sticking with this outdated version of futures? The current version in the 0.1 family is 0.1.28.

As an aside, I think it would be great if dependencies were sorted alphabetically.

Contributor Author

@mxinden Sep 3, 2019

No particular reason for using 0.1.17 other than other packages using the same:

grep -ri --include Cargo.toml "futures = "
node-template/Cargo.toml:futures = "0.1"
core/client/Cargo.toml:futures = { version = "0.1", optional = true }
core/network/Cargo.toml:futures = "0.1.17"
core/authority-discovery/Cargo.toml:futures = "0.1.17"
core/service/test/Cargo.toml:futures = "0.1"
core/service/Cargo.toml:futures = "0.1.17"
core/finality-grandpa/Cargo.toml:futures = "0.1"
core/cli/Cargo.toml:futures = "0.1.17"
core/rpc/Cargo.toml:futures = "0.1.17"
core/consensus/rhd/Cargo.toml:futures = "0.1.17"
node/rpc-client/Cargo.toml:futures = "0.1.26"
node/cli/Cargo.toml:futures = "0.1"
Cargo.toml:futures = "0.1"

Shouldn't we update them all at once?

More general question: Why are we ever specifying the semver patch version?


As an aside, I think it would be great if dependencies were sorted alphabetically.

I will do that.

Contributor

@tomaka Sep 4, 2019

More general question: Why are we ever specifying the semver patch version?

Considering that this is below 1.0, the general rule is that the patch version here is the equivalent of a minor version post-1.0.

If you use a feature that was added in futures 0.1.12 and you only depend on futures 0.1.11 for example, your code might compile thanks to your Cargo.lock but you still have things wrong.

This is too hard to enforce manually however, and there's no tooling to do that automatically.

test-client = { package = "substrate-test-runtime-client", path = "../../core/test-runtime/client" }
peerset = { package = "substrate-peerset", path = "../../core/peerset" }
parking_lot = { version = "0.9.0" }
tokio = { version = "0.1.11"}
Contributor

Similarly to my previous comment, tokio < 0.2 is already at 0.1.22.

core/authority-discovery/src/error.rs (resolved review thread)
//!
//! 2. **Discovers other authorities**
//!
//! 1. Retrieves the current set of authorities..
Contributor

Suggested change
//! 1. Retrieves the current set of authorities..
//! 1. Retrieves the current set of authorities.

include!(concat!(env!("OUT_DIR"), "/authority_discovery.rs"));
}

/// A AuthorityDiscovery makes a given authority discoverable as well as discovers other authorities.
Contributor

Suggested change
/// A AuthorityDiscovery makes a given authority discoverable as well as discovers other authorities.
/// An `AuthorityDiscovery` makes a given authority discoverable and discovers other authorities.

/// Retrieve authority identifiers of the current authority set.
fn authorities() -> Vec<AuthorityId>;

/// Sign the given payload with the private key corresponding to the given authority id.
fn sign(payload: Vec<u8>, authority_id: AuthorityId) -> Option<Vec<u8>>;
fn sign(payload: Vec<u8>) -> Option<(Signature, AuthorityId)>;
Contributor

It is probably due to decl_runtime_apis! that this consumes the Vec<u8> instead of just borrowing a &[u8], right?

// events are being passed on to the authority-discovery module. In the future there might be multiple
// consumers of these events. In that case this would need to be refactored to properly dispatch the events,
// e.g. via a subscriber model.
if let Some(Err(e)) = dht_event_tx.as_ref().map(|c| c.clone().try_send(event)) {
Contributor

Would one not use dht_event_tx's Sink interface (start_send and poll_complete) to send an item and return Async::NotReady if the channel is full instead of using try_send?

Contributor Author

@twittner: What would be the benefit of using the Sink interface over try_send directly?

To make sure my intention is clear: in my eyes authority discovery is not a critical component of Substrate, but the network is. I would like not to use unbounded buffers, as those don't surface queuing problems. I also don't want the network to be blocked by anything authority-discovery related. Thus the network drops DHT events if the channel buffer is full.
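
A minimal sketch of that hand-off, assuming futures 0.1's bounded mpsc channel and the log crate; DhtEvent is a stand-in for the real event type:

use futures::sync::mpsc;

struct DhtEvent; // stand-in for the real network event type

fn forward_event(tx: &mpsc::Sender<DhtEvent>, event: DhtEvent) {
    // try_send fails if the bounded buffer is full (or the receiver is gone);
    // in both cases the event is dropped so the network is never blocked on
    // authority discovery.
    if tx.clone().try_send(event).is_err() {
        log::warn!("Dropping DHT event: channel full or closed");
    }
}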

Contributor

I was under the impression that you wanted to wait if the channel is full, hence my recommendation to use the Sink interface. But given your statement:

I also don't want network to be blocked by anything authority discovery related.

I guess you are fine with dropping events, so just ignore the suggestion.

service.network(),
dht_event_rx,
);
let _ = service.spawn_task(Box::new(authority_discovery));
Contributor

Suggested change
let _ = service.spawn_task(Box::new(authority_discovery));
service.spawn_task(Box::new(authority_discovery))

Contributor Author

I have copied this from other service.spawn_task() calls, e.g. see https://github.com/paritytech/substrate/blob/master/core/service/src/lib.rs#L252. I guess the sole purpose of doing so is to prevent the unused-result warning. Do you still want it removed?

Contributor

The code you linked to reads let _ = to_spawn_tx_.unbounded_send(future); which is something else entirely. The signature of spawn_task is:

fn spawn_task(&self, task: impl Future<Item = (), Error = ()> + Send + 'static);

Since it returns unit, there is no need to bind it to _.

Contributor Author

Right, sorry about that. I implemented it inside core/service/src/lib.rs before.


// First we need to serialize the addresses in order to be able to sign them.
message AuthorityAddresses {
repeated string addresses = 1;
Contributor

Is there any advantage to encoding Multiaddresses as strings?

Suggested change
repeated string addresses = 1;
repeated bytes addresses = 1;

// Process incoming events before triggering new ones.
self.handle_dht_events()?;

if let Ok(Async::Ready(_)) = self.publish_interval.poll() {
Contributor

What about timer errors? It seems these are just ignored. I am not sure if the interval would register the task if it encounters an error, which means that this poll method might not be invoked again, no?

Contributor Author

According to the Tokio timer documentation there are two kinds of possible errors:

  • Shutdown: The timer instance is down. Retrying is useless.

  • At capacity: The timer instance is at its current capacity. It is worth retrying later on.

We could error on the former and implement retry logic for the latter. Retrying would need to be done cautiously, e.g. with exponential backoff, so as not to aggravate the overloaded timer. But how to do exponential backoff without the notion of time?

Without much experience I would expect these errors to be rare.

@twittner: What do you think of logging an error in case one of the two possible timer errors occurs, but not reacting to it? That reduces complexity while still surfacing the fact that there is an issue.
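
A minimal sketch of the log-and-continue handling proposed here, assuming futures 0.1 and the log crate:

use futures::{Async, Stream};

// Poll one tick of the interval; log timer errors (shutdown or at capacity)
// but keep going rather than adding retry logic.
fn poll_tick<S>(interval: &mut S) -> bool
where
    S: Stream,
    S::Error: std::fmt::Debug,
{
    match interval.poll() {
        Ok(Async::Ready(Some(_))) => true,
        Ok(_) => false,
        Err(e) => {
            log::error!("Interval timer error: {:?}", e);
            false
        }
    }
}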

@rphmeier
Contributor

rphmeier commented Sep 3, 2019

@arkpar

authorities can be cached by current best block hash at least

This is wrong. BABE authorities can change based on time, not based on block. We should avoid these kinds of assumptions from leaking into the code anywhere.

I'm not convinced that a periodic runtime call outside the critical path is actually so expensive as to employ potentially-incorrect caching methods. The safest thing to do is always call the runtime.

@arkpar
Member

arkpar commented Sep 3, 2019

@rphmeier Could you elaborate? I'm pretty sure we assume that runtime calls are deterministic in lots of places. I.e for a given blockchain state and inputs the call will always return the same result.

@rphmeier
Contributor

rphmeier commented Sep 3, 2019

With the addition of the keystore and offchain worker externalities, that assumption isn't fair anymore.

But with consensus specifically, BABE or other Praos-family protocols will change authorities based on time. It may be that the runtime API interface here isn't accurate enough.

@arkpar
Member

arkpar commented Sep 3, 2019

In this case authorities won't come from the runtime API indeed. I guess it is fine for now if the request interval is not too small.

Also do we really need to sign DHT messages with the runtime?

@mxinden
Contributor Author

mxinden commented Sep 4, 2019

Also do we really need to sign DHT messages with the runtime?

@arkpar: Only the runtime knows the current session key. Therefore core/authority-discovery can't just sign DHT messages within core with the keystore.

@mxinden
Contributor Author

mxinden commented Sep 4, 2019

I have addressed all comments. Can someone take another look?

@mxinden
Contributor Author

mxinden commented Sep 6, 2019

Thanks @arkpar. I will merge by the end of the day in case there are no further objections.

@mxinden merged commit 7fc21ce into paritytech:master on Sep 6, 2019
@mxinden
Contributor Author

mxinden commented Sep 6, 2019

Thanks for the help everyone. In case you have further comments I am happy to do a follow up pull request.

@mxinden
Contributor Author

mxinden commented Sep 9, 2019

Follow-up: a node can have two active session keys. Currently we only handle a single one; we should handle all keys within the current authority set.

andresilva pushed a commit that referenced this pull request Sep 17, 2019
…3452)

With the *authority-discovery* module an authoritative node makes itself
discoverable and is able to discover other authorities. Once discovered, a node
can directly connect to other authorities instead of multi-hop gossiping
information.

1. **Making itself discoverable**

    1. Retrieve its external addresses

    2. Adds its network peer id to the addresses

    3. Sign the above

    4. Put the signature and the addresses on the libp2p Kademlia DHT

2. **Discovering other authorities**

    1. Retrieve the current set of authorities

    2. Start DHT queries for the ids of the authorities

    3. Validate the signatures of the retrieved key value pairs

    4. Add the retrieved external addresses as ~reserved~ priority nodes to the
       peerset


* node/runtime: Add authority-discovery as session handler

The srml/authority-discovery module implements the OneSessionHandler in
order to keep its authority set in sync. This commit adds the module to
the set of session handlers.

* core/network: Make network worker return Dht events on poll

Instead of network worker implement the Future trait, have it implement
the Stream interface returning Dht events.

For now these events are ignored in build_network_future but will be
used by the core/authority-discovery module in subsequent commits.

* *: Add scaffolding and integration for core/authority-discovery module

* core/authority-discovery: Implement module logic itself
Demi-Marie pushed a commit to Demi-Marie/substrate that referenced this pull request Sep 17, 2019
twittner added a commit to twittner/substrate that referenced this pull request Jan 31, 2020
twittner added a commit to twittner/substrate that referenced this pull request Feb 10, 2020
tomaka pushed a commit that referenced this pull request Feb 12, 2020
* network: Use "one shot" protocol handler. (Same commit message as quoted above.)
jordy25519 pushed a commit to plugblockchain/plug-blockchain that referenced this pull request Feb 19, 2020
(Squashed merge of upstream Substrate commits into plug-blockchain, including the "network: Use 'one shot' protocol handler" commit that references this PR.)
jordy25519 pushed a commit to plugblockchain/plug-blockchain that referenced this pull request Feb 27, 2020
Labels: A0-please_review (Pull request needs code review.)
7 participants