
[pull] master from paritytech:master #75

Open · wants to merge 118 commits into base: master from paritytech:master

Conversation


@pull pull bot commented Jan 26, 2025

See Commits and Changes for more details.


Created by pull[bot] (v2.0.0-alpha.1)

Can you help keep this open source service alive? 💖 Please sponsor : )

@pull pull bot added the ⤵️ pull label Jan 26, 2025
dmitry-markin and others added 28 commits January 27, 2025 12:29
…rd-compatible) (#7344)

Revert #7011 and replace
it with a backward-compatible solution suitable for backporting to a
release branch.

### Review notes
It's easier to review this PR per commit: the first commit is just a
revert, so it's enough to review only the second one, which is almost a
one-liner.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
closes #5978

---------

Co-authored-by: command-bot <>
Co-authored-by: Michal Kucharczyk <1728078+michalkucharczyk@users.noreply.github.com>
…mni Node compatibility (#6529)

# Description

This PR adds development chain specs for the minimal and parachain
templates.
[#6334](#6334)


## Integration

This PR adds development chain specs for the minimal and parachain
template runtimes, ensuring synchronization with runtime code. It
updates the zombienet-omni-node.toml and zombienet.toml files to include
valid chain spec paths, simplifying Zombienet configuration for the
parachain and minimal templates.

## Review Notes

1. Overview of Changes:
- Added development chain specs for use in the minimal and parachain
templates.
- Updated the zombienet-omni-node.toml and zombienet.toml files in the
minimal and parachain templates to include paths to the new dev chain
specs.

2. Integration Guidance:
**NB: Follow the templates' READMEs from the polkadot-SDK master branch.
Please build the binaries and runtimes based on the polkadot-SDK master
branch.**
- Ensure you have set up your runtimes, `parachain-template-runtime` and
`minimal-template-runtime`.
- Ensure you have installed the required nodes, i.e.
`parachain-template-node` and `minimal-template-node`.
- Set up [Zombienet](https://paritytech.github.io/zombienet/intro.html).
- For running the parachains, you will need to install polkadot
(`cargo install --path polkadot`), again from the polkadot-SDK master
branch.
- Inside the minimal or parachain template folder, run the command to
start with `Zombienet with Omni Node`, `Zombienet with
minimal-template-node`, or `Zombienet with parachain-template-node`.

Leftover TODOs:
* [ ] Test the syncing of chain specs with runtime's code.

---------

Signed-off-by: EleisonC <ckalule7@gmail.com>
Co-authored-by: Iulian Barbu <14218860+iulianbarbu@users.noreply.github.com>
Co-authored-by: Alexander Samusev <41779041+alvicsam@users.noreply.github.com>
# Description
Migrated polkadot-runtime-parachains slots benchmarking to the new
benchmarking syntax v2.
This is part of #6202

---------

Co-authored-by: Giuseppe Re <giuseppe.re@parity.io>
Co-authored-by: seemantaggarwal <32275622+seemantaggarwal@users.noreply.github.com>
Co-authored-by: Bastian Köcher <git@kchr.de>
Resolves (partially):
#7148 (see _Problem 1 -
`ShouldExecute` tuple implementation and `Deny` filter tuple_)

This PR changes the behavior of `DenyThenTry` for the tuple from the
pattern `DenyIfAllMatch` to `DenyIfAnyMatch`.

I would expect the latter to be the right behavior, so I made the fix in
place, but we could also add a dedicated impl that leaves the legacy
behavior untouched.
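
As a minimal sketch of the semantic difference (plain Rust, not the actual `ShouldExecute` tuple implementation):

```rust
/// DenyIfAnyMatch: the tuple denies as soon as any element's deny filter
/// matches, instead of only when all of them match (DenyIfAllMatch).
fn deny_if_any_match<M>(deny_filters: &[fn(&M) -> bool], msg: &M) -> bool {
    deny_filters.iter().any(|deny| deny(msg))
}

fn deny_if_all_match<M>(deny_filters: &[fn(&M) -> bool], msg: &M) -> bool {
    !deny_filters.is_empty() && deny_filters.iter().all(|deny| deny(msg))
}
```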

## TODO
- [x] add unit-test for `DenyReserveTransferToRelayChain`
- [x] add test and investigate/check `DenyThenTry` as discussed
[here](#6838 (comment))
and update documentation if needed

---------

Co-authored-by: Branislav Kontur <bkontur@gmail.com>
Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com>
Co-authored-by: command-bot <>
Co-authored-by: Clara van Staden <claravanstaden64@gmail.com>
Co-authored-by: Adrian Catangiu <adrian@parity.io>
…aling (#6983)

On top of #6757

Fixes #6858 by bumping
the `PARENT_SEARCH_DEPTH` constant to a larger value (30) and adding a
zombienet-sdk test that exercises the 12-core scenario.

This is a node-side limit that restricts the number of allowed pending
availability candidates when choosing the parent parablock during
authoring.
This limit is rather redundant, as the parachain runtime already
restricts the unincluded segment length to the configured value in the
[FixedVelocityConsensusHook](https://github.com/paritytech/polkadot-sdk/blob/88d900afbff7ebe600dfe5e3ee9f87fe52c93d1f/cumulus/pallets/aura-ext/src/consensus_hook.rs#L35)
(which ideally should be equal to this `PARENT_SEARCH_DEPTH`).

For 12 cores, a value of 24 should be enough, but I bumped it to 30 to
have some extra buffer.

There are two other potential ways of fixing this:
- Remove this constant altogether, as the parachain runtime already
makes those guarantees. I chose not to do this, as it can't hurt to have
an extra safeguard.
- Set this value to be equal to the unincluded segment size. This value,
however, is not exposed to the node-side and would require a new runtime
API, which seems overkill for a redundant check.

---------

Co-authored-by: Javier Viola <javier@parity.io>
This PR changes how we call runtime API methods with more than 6
arguments: they are no longer spilled to the stack but packed into
registers instead. Pointers are 32 bits wide, so we can pack two of them
into a single 64-bit register. Since we mostly pass pointers, this
technique effectively increases the number of arguments we can pass
using the available registers.

To make this work for `instantiate` too, we now pass the code hash and
the call data in the same buffer, akin to how the `create` family of
opcodes works in the EVM. The code hash is fixed in size, so it implies
where the constructor call data starts.
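
As a rough illustration of the packing trick (a sketch, not the actual pallet code), two 32-bit pointers can be combined into a single 64-bit register value like this:

```rust
/// Illustrative only: pack two 32-bit pointers into one 64-bit value, as
/// described above, so both fit into a single register.
fn pack_ptrs(lo: u32, hi: u32) -> u64 {
    (lo as u64) | ((hi as u64) << 32)
}

/// Recover the two 32-bit pointers on the callee side.
fn unpack_ptrs(packed: u64) -> (u32, u32) {
    (packed as u32, (packed >> 32) as u32)
}
```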

---------

Signed-off-by: xermicus <cyrill@parity.io>
Signed-off-by: Cyrill Leutwiler <bigcyrill@hotmail.com>
Co-authored-by: command-bot <>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Alexander Theißen <alex.theissen@me.com>
Closes #216.

This PR allows pallets to define a `view_functions` impl like so:

```rust
#[pallet::view_functions]
impl<T: Config> Pallet<T>
where
	T::AccountId: From<SomeType1> + SomeAssociation1,
{
	/// Query value no args.
	pub fn get_value() -> Option<u32> {
		SomeValue::<T>::get()
	}

	/// Query value with args.
	pub fn get_value_with_arg(key: u32) -> Option<u32> {
		SomeMap::<T>::get(key)
	}
}
```
### `QueryId`

Each view function is uniquely identified by a `QueryId`, which for this
implementation is generated by:

```
twox_128(pallet_name) ++ twox_128("fn_name(fnarg_types) -> return_ty")
```

The prefix `twox_128(pallet_name)` is the same as the storage prefix for pallets and takes into account multiple instances of the same pallet.

The suffix is generated from the fn type signature, so it is guaranteed to be unique for that pallet impl. For one of the view fns in the example above it would be `twox_128("get_value_with_arg(u32) -> Option<u32>")`. It is a known limitation that only the type names themselves are taken into account: in the case of type aliases the signature may have the same underlying types but a different id; for generics the concrete types may be different but the signatures will remain the same.
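
As a sketch of this scheme (using `sp_core`'s twox hashing; the exact byte layout in the actual implementation may differ):

```rust
use sp_core::hashing::twox_128;

/// Sketch: derive a 32-byte query id as twox_128(pallet_name) ++ twox_128(signature).
fn query_id(pallet_name: &str, fn_signature: &str) -> [u8; 32] {
    let mut id = [0u8; 32];
    id[..16].copy_from_slice(&twox_128(pallet_name.as_bytes()));
    id[16..].copy_from_slice(&twox_128(fn_signature.as_bytes()));
    id
}

// e.g. query_id("SomePallet", "get_value_with_arg(u32) -> Option<u32>")
```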

The existing Runtime `Call` dispatchables are addressed by their concatenated indices `pallet_index ++ call_index`, and the dispatching is handled by the SCALE decoding of the `RuntimeCallEnum::PalletVariant(PalletCallEnum::dispatchable_variant(payload))`. For `view_functions` the runtime/pallet generated enum structure is replaced by implementing the `DispatchQuery` trait on the outer (runtime) scope, dispatching to a pallet based on the id prefix, and the inner (pallet) scope dispatching to the specific function based on the id suffix.

Future implementations could also modify/extend this scheme and its routing to support pallet-agnostic queries.

### Executing externally

These view functions can be executed externally via the system runtime api:

```rust
pub trait ViewFunctionsApi<QueryId, Query, QueryResult, Error> where
	QueryId: codec::Codec,
	Query: codec::Codec,
	QueryResult: codec::Codec,
	Error: codec::Codec,
{
	/// Execute a view function query.
	fn execute_query(query_id: QueryId, query: Query) -> Result<QueryResult, Error>;
}
```
### `XCQ`
Currently there is work going on by @xlc to implement [`XCQ`](https://github.com/open-web3-stack/XCQ/) which may eventually supersede this work.

It may be that we still need the fixed function local query dispatching in addition to XCQ, in the same way that we have chain specific runtime dispatchables and XCM.

I have kept this in mind and the high level query API is agnostic to the underlying query dispatch and execution. I am just providing the implementation for the `view_function` definition.

### Metadata
Currently I am utilizing the `custom` section of the frame metadata, to avoid modifying the official metadata format until this is standardized.

### vs `runtime_api`
There are similarities with `runtime_apis`, some differences being:
- queries can be defined directly on pallets, so no need for boilerplate declarations and implementations
- no versioning, the `QueryId` will change if the signature changes. 
- possibility for queries to be executed from smart contracts (see below)

### Calling from contracts
Future work would be to add `weight` annotations to the view function queries, and a host function to `pallet_contracts` to allow executing these queries from contracts.

### TODO

- [x] Consistent naming (view functions pallet impl, queries, high level api?)
- [ ] End to end tests via `runtime_api`
- [ ] UI tests
- [x] Metadata tests
- [ ] Docs

---------

Co-authored-by: kianenigma <kian@parity.io>
Co-authored-by: James Wilson <james@jsdw.me>
Co-authored-by: Giuseppe Re <giuseppe.re@parity.io>
Co-authored-by: Guillaume Thiolliere <guillaume.thiolliere@parity.io>
The old error message was often confusing, because the real reason for
the error is printed during inherent execution.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…al addresses (#7338)

Instead of using libp2p-provided external address candidates,
susceptible to address translation issues, use litep2p-backend approach
based on confirming addresses observed by multiple peers as external.

Fixes #7207.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
- removed the old bench from cmd.py and left an alias for backward compatibility
- reverted the frame-weight-template, as the problem was that the umbrella
template wasn't picked correctly in the old benchmarks; frame-omni-bench
correctly identifies the dependencies and uses the correct template

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
This PR modifies `named_reserve()` in frame-balances to use checked math
instead of defensive saturating math.

The use of saturating math relies on the assumption that the sum of the
values will always fit in `u128::MAX`. However, there is nothing
preventing the implementing pallet from passing a larger value which
overflows. This can happen if the implementing pallet does not validate
user input and instead relies on `named_reserve()` to return an error
(this saves an additional read).

This is not a security concern, as the method will subsequently return
an error thanks to `<Self as ReservableCurrency<_>>::reserve(who,
value)?;`. However, the `defensive_saturating_add` will panic in
`--all-features`, creating false positive crashes in fuzzing operations.
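
A minimal sketch of the difference (stand-in functions, not the exact frame-balances code):

```rust
// Stand-in for the old behaviour: `defensive_saturating_add` saturates on
// overflow and additionally panics when defensive checks are enabled
// (e.g. under `--all-features`), which is what tripped the fuzzers.
fn new_reserved_old(reserved: u128, value: u128) -> u128 {
    reserved.saturating_add(value)
}

// New behaviour: checked math, so an overflow surfaces as an ordinary error.
fn new_reserved_new(reserved: u128, value: u128) -> Option<u128> {
    reserved.checked_add(value)
}
```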

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
This PR implements the block author API method. Runtimes ought to
implement it such that it corresponds to the `coinbase` EVM opcode.

---------

Signed-off-by: xermicus <cyrill@parity.io>
Signed-off-by: Cyrill Leutwiler <bigcyrill@hotmail.com>
Co-authored-by: command-bot <>
Co-authored-by: Alexander Theißen <alex.theissen@me.com>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
# Description

Migrating cumulus-pallet-session-benchmarking to the new benchmarking
syntax v2.
This is a part of #6202

---------

Co-authored-by: seemantaggarwal <32275622+seemantaggarwal@users.noreply.github.com>
Co-authored-by: Bastian Köcher <git@kchr.de>
This PR contains small fixes and backwards compatibility issues
identified during work on the larger PR:
#6906.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Related to:
#7295 (comment)

---------

Co-authored-by: Bastian Köcher <git@kchr.de>
Co-authored-by: Adrian Catangiu <adrian@parity.io>
…rks and testing (#7379)

# Description

Currently, benchmarks and tests on pallet_balances fail when the
`insecure_zero_ed` feature is enabled. This PR allows such benchmarks
and tests to run, taking into account the fact that accounts are not
deleted when their balance goes below a threshold.


# Checklist

* [x] My PR includes a detailed description as outlined in the
"Description" and its two subsections above.
* [x] My PR follows the [labeling requirements](

https://github.com/paritytech/polkadot-sdk/blob/master/docs/contributor/CONTRIBUTING.md#Process
) of this project (at minimum one label for `T` required)
* External contributors: ask maintainers to put the right label on your
PR.
* [x] I have made corresponding changes to the documentation (if
applicable)
* [x] I have added tests that prove my fix is effective or that my
feature works (if applicable)


---------

Co-authored-by: Rodrigo Quelhas <rodrigo_quelhas@outlook.pt>
# Description

Close #7122.

This PR replaces the unmaintained `derivative` dependency with
`derive-where`.

## Integration

This PR doesn't change the public interfaces.

## Review Notes

The `derivative` crate, previously used to derive basic traits for
structs with generics or enums, is no longer actively maintained. It has
been replaced with the `derive-where` crate, which offers a more
straightforward syntax while providing the same features as
`derivative`.
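
As a small illustration of the kind of change involved (a hypothetical type, not one from this PR), `derive-where` lets you derive standard traits without putting bounds on the generic parameters:

```rust
use core::marker::PhantomData;
use derive_where::derive_where;

// Clone/Debug are derived for Wrapper<T> without requiring `T: Clone + Debug`,
// which is what `derivative` was previously used for.
#[derive_where(Clone, Debug)]
struct Wrapper<T> {
    inner: u32,
    _marker: PhantomData<T>,
}
```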

---------

Co-authored-by: Guillaume Thiolliere <gui.thiolliere@gmail.com>
- added 3 links for Subweight comparison: now, the release from ~1 month ago, and the release tag from ~3 months ago
- added `--3way --ours` flags for `git apply` to resolve potential conflicts
- stick to the weekly branch from the start until the end, to prevent race conditions with conflicts
This PR modifies the fatxpool to use tracing instead of log for logging.

closes #5490

Polkadot address: 12GyGD3QhT4i2JJpNzvMf96sxxBLWymz4RdGCxRH5Rj5agKW

---------

Co-authored-by: Michal Kucharczyk <1728078+michalkucharczyk@users.noreply.github.com>
…n to all backing groups (#6924)

## Issues
- [[#5049] Elastic scaling: zombienet
tests](#5049)
- [[#4526] Add zombienet tests for malicious
collators](#4526)

## Description
Modified the undying collator to include a malus mode, in which it
submits the same collation to all assigned backing groups.

## TODO
* [X] Implement malicious collator that submits the same collation to
all backing groups;
* [X] Avoid the core index check in the collation generation subsystem:
https://github.com/paritytech/polkadot-sdk/blob/master/polkadot/node/collation-generation/src/lib.rs#L552-L553;
* [X] Resolve the mismatch between the descriptor and the commitments
core index: #7104
* [X] Implement `duplicate_collations` test with zombienet-sdk;
* [X] Add PRdoc.
This should fix the error log related to PoV pre-dispatch weight being
lower than post-dispatch for `ParasInherent`:
```
ERROR tokio-runtime-worker runtime::frame-support: Post dispatch weight is greater than pre dispatch weight. Pre dispatch weight may underestimating the actual weight. Greater post dispatch weight components are ignored.
                                        Pre dispatch weight: Weight { ref_time: 47793353978, proof_size: 1019 },
                                        Post dispatch weight: Weight { ref_time: 5030321719, proof_size: 135395 }
```

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
This PR backports regular version bumps and prdoc reorganization from
the stable release branch back to master.
# Description

There is a small error (which slipped through reviews) in matrix
strategy expansion which results in errors like this:
https://github.com/paritytech/polkadot-sdk/actions/runs/13079943579/job/36501002368.

## Integration

N/A

## Review Notes

Need to fix this in master and then rerun it manually against
`stable2412-1`.

Signed-off-by: Iulian Barbu <iulian.barbu@parity.io>
Part of #5079.

Removes all usage of the static async backing params, replacing them
with dynamically computed equivalent values (based on the claim queue
and scheduling lookahead).

Adds a new runtime API for querying the scheduling lookahead value. If
not present, it falls back to 3 (the default value that is backwards
compatible with the `allowed_ancestry_len` values we have on production
networks).

Also resolves most of
#4447, removing code
that handles async backing not yet being enabled.
While doing this, I removed support for collation protocol version 1
on collators, as it only worked for leaves not supporting async backing
(of which there are none).
I also unhooked the legacy v1 statement-distribution (for the same
reason as above). That subsystem is basically dead code now, so I had to
remove some of its tests as they would no longer pass (since the
subsystem no longer sends messages to the legacy variant). I did not
remove the entire legacy subsystem yet, as that would pollute this PR
too much. We can remove the entire v1 and v2 validation protocols in a
follow up PR.

In another PR: remove test files with names `prospective_parachains`
(it'd pollute this PR if we do now)

TODO:
- [x] add deprecation warnings
- [x] prdoc

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
rockbmb and others added 30 commits February 14, 2025 20:23
# Description

open-web3-stack/polkadot-ecosystem-tests#174
showed the test for the `pallet_staking::chill_other` extrinsic could be
more exhaustive.

This PR adds those checks, and also a few more to another test related
to `chill_other`,
`pallet_staking::tests::change_of_absolute_max_nominations`.

## Integration

N/A

## Review Notes

N/A

---------

Co-authored-by: Bastian Köcher <git@kchr.de>
## Multi Block Election Pallet

This PR adds the first iteration of the multi-block staking pallet. 

From this point onwards, the staking pallet and its election provider
pallets are being customized to work in AssetHub. While usage in
solo-chains is still possible, it is no longer the main focus of this
pallet. For safer usage, please fork and use an older version of this
pallet.

---

## Replaces

- [x] #6034 
- [x] #5272

## Related PRs: 

- [x] #7483
- [ ] #7357
- [ ] #7424
- [ ] paritytech/polkadot-staking-miner#955

This branch can be periodically merged into
#7358 ->
#6996

## TODOs: 

- [x] rebase to master 
- Benchmarking for staking critical path
  - [x] snapshot
  - [x] election result
- Benchmarking for EPMB critical path
  - [x] snapshot
  - [x] verification
  - [x] submission
  - [x] unsigned submission
  - [ ] election results fetching
- [ ] Fix deletion weights. Either of
  - [ ] Garbage collector + lazy removal of all paged storage items
  - [ ] Confirm that deletion has a small PoV footprint.
- [ ] Move election prediction to be push based. @tdimitrov 
- [ ] integrity checks for bounds 
- [ ] Properly benchmark this as a part of CI -- for now I will remove
them as they are too slow
- [x] add try-state to all pallets
- [x] Staking to allow genesis dev accounts to be created internally
- [x] Decouple miner config so @niklasad1 can work on the miner
72841b7
- [x] duplicate snapshot page reported by @niklasad1 
- [ ] #6520 or equivalent
-- during snapshot, `VoterList` must be locked
- [ ] Move target snapshot to a separate block

---------

Co-authored-by: Gonçalo Pestana <g6pestana@gmail.com>
Co-authored-by: Ankan <10196091+Ank4n@users.noreply.github.com>
Co-authored-by: command-bot <>
Co-authored-by: Guillaume Thiolliere <gui.thiolliere@gmail.com>
Co-authored-by: Giuseppe Re <giuseppe.re@parity.io>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…ation #7297 (#7320)

Solves #7297

I added a `ProxyApi` runtime API to the Proxy pallet with two methods (see the
sketch below):

- `check_permissions`: checks whether a `RuntimeCall` passes a `ProxyType`'s
`InstanceFilter`.
- `is_superset`: verifies whether one `ProxyType` includes another.
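
A rough sketch of the API surface being described (a plain trait shown for illustration; the actual declaration uses `sp_api::decl_runtime_apis!` and its exact signatures may differ):

```rust
/// Hedged sketch; method names come from the PR text, signatures are assumptions.
pub trait ProxyApiSketch<RuntimeCall, ProxyType> {
    /// Does `call` pass the `InstanceFilter` of `proxy_type`?
    fn check_permissions(call: RuntimeCall, proxy_type: ProxyType) -> bool;
    /// Is `to_check` a superset of `against`, i.e. does it allow everything `against` allows?
    fn is_superset(to_check: ProxyType, against: ProxyType) -> bool;
}
```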




Polkadot address: 121HJWZtD13GJQPD82oEj3gSeHqsRYm1mFgRALu4L96kfPD1

---------

Co-authored-by: Bastian Köcher <git@kchr.de>
Co-authored-by: Bastian Köcher <info@kchr.de>
This PR adds support for chain properties to `chain-spec-builder`. Now
properties can be specified as such:

```sh
$ chain-spec-builder create -r $RUNTIME_PATH --properties tokenSymbol=DUMMY,tokenDecimals=6,isEthereum=false
```

---------

Co-authored-by: Bastian Köcher <git@kchr.de>
Co-authored-by: Michal Kucharczyk <1728078+michalkucharczyk@users.noreply.github.com>
resolves #7354

Polkadot address: 121HJWZtD13GJQPD82oEj3gSeHqsRYm1mFgRALu4L96kfPD1

---------

Co-authored-by: Guillaume Thiolliere <guillaume.thiolliere@parity.io>
Co-authored-by: Bastian Köcher <git@kchr.de>
Bumps [enumflags2](https://github.com/meithecatte/enumflags2) from 0.7.7
to 0.7.11.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/meithecatte/enumflags2/releases">enumflags2's
releases</a>.</em></p>
<blockquote>
<h2>Release 0.7.10</h2>
<ul>
<li>Fix a case where the <code>#[bitflags]</code> macro would access the
crate through <code>enumflags2::...</code> instead of
<code>::enumflags2::...</code>. This makes the generated code more
robust and avoids triggering the <code>unused_qualifications</code>
lint. (<a
href="https://github.com/meithecatte/enumflags2/issues/58">#58</a>)</li>
<li>Rework the proc-macro to use <code>syn</code> with the
<code>derive</code> feature (as opposed to <code>full</code>). This
reduces the <code>cargo build</code> time for <code>enumflags2</code> by
about 20%.</li>
</ul>
<h2>Release 0.7.9</h2>
<ul>
<li>The <code>BitFlag</code> trait now includes convenience re-exports
for the constructors of <code>BitFlags</code>. This lets you do
<code>MyFlag::from_bits</code> instead
<code>BitFlags::&lt;MyFlag&gt;::from_bits</code> where the type of the
flag cannot be inferred from context (thanks <a
href="https://github.com/ronnodas"><code>@​ronnodas</code></a>).</li>
<li>The documentation now calls out the fact that the implementation of
<code>PartialOrd</code> may not be what you expect (reported by <a
href="https://github.com/ronnodas"><code>@​ronnodas</code></a>).</li>
</ul>
<h2>Release 0.7.8</h2>
<ul>
<li>New API: <code>BitFlags::set</code>. Sets the value of a specific
flag to that of the <code>bool</code> passed as argument. (thanks, <a
href="https://github.com/m4dh0rs3"><code>@​m4dh0rs3</code></a>)</li>
<li><code>BitFlags</code> now implements <code>PartialOrd</code> and
<code>Ord</code>, to make it possible to use it as a key in a
<code>BTreeMap</code>.</li>
<li>The bounds on the implementation of <code>Hash</code> got improved,
so that it is possible to use it in code generic over <code>T:
BitFlag</code>.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/meithecatte/enumflags2/commit/cc09d89bc4ef20fbf4c8016a40e160fe47b2d042"><code>cc09d89</code></a>
Release 0.7.11</li>
<li><a
href="https://github.com/meithecatte/enumflags2/commit/24f03afbd0c23adaf0873a941600bd0b3b7ba302"><code>24f03af</code></a>
make_bitflags: Allow omitting { } for singular flags</li>
<li><a
href="https://github.com/meithecatte/enumflags2/commit/754a8de723c54c79b2a8ab6993adc59b478273b0"><code>754a8de</code></a>
Expand some aspects of the documentation</li>
<li><a
href="https://github.com/meithecatte/enumflags2/commit/aec9558136a53a952f39b74a4a0688a31423b815"><code>aec9558</code></a>
Update ui tests for latest nightly</li>
<li><a
href="https://github.com/meithecatte/enumflags2/commit/8205d5ba03ccc9ccb7407693440f8e47f8ceeeb4"><code>8205d5b</code></a>
Release 0.7.10</li>
<li><a
href="https://github.com/meithecatte/enumflags2/commit/1c78f097165436d043f48b9f6183501f84ff965f"><code>1c78f09</code></a>
Run clippy with only the declared syn features</li>
<li><a
href="https://github.com/meithecatte/enumflags2/commit/561fe5eaf7ba6daeb267a41343f6def2a8b86ad7"><code>561fe5e</code></a>
Emit a proper error if bitflags enum is generic</li>
<li><a
href="https://github.com/meithecatte/enumflags2/commit/f3bb174beb27a1d1ef28dcf03fb607a3bb7c6e55"><code>f3bb174</code></a>
Avoid depending on syn's <code>full</code> feature flag</li>
<li><a
href="https://github.com/meithecatte/enumflags2/commit/e01808be0f151ac251121833d3225debd253ca3a"><code>e01808b</code></a>
Always use absolute paths in generated proc macro code</li>
<li><a
href="https://github.com/meithecatte/enumflags2/commit/f08cd33a18511608f4a881e53c4f4c1b951301e0"><code>f08cd33</code></a>
Specify the Rust edition for the whole test package</li>
<li>Additional commits viewable in <a
href="https://github.com/meithecatte/enumflags2/compare/v0.7.7...v0.7.11">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=enumflags2&package-manager=cargo&previous-version=0.7.7&new-version=0.7.11)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Bastian Köcher <git@kchr.de>
# Utility Call Fallback

This introduces a new extrinsic: **`if_else`**

It first attempts to dispatch the `main` call(s). If the `main`
call(s) fail, the `fallback` call(s) are dispatched instead. Both calls
are executed with the same origin.

In the event of a fallback failure, the whole call fails, with the weights
returned.
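
A minimal sketch of the dispatch logic described above (not the actual pallet-utility code, which also has to aggregate weights and events):

```rust
/// Try the main call; only if it fails, dispatch the fallback. If the
/// fallback also fails, the whole call fails.
fn if_else<E>(
    main: impl FnOnce() -> Result<(), E>,
    fallback: impl FnOnce() -> Result<(), E>,
) -> Result<(), E> {
    match main() {
        Ok(()) => Ok(()),
        Err(_) => fallback(),
    }
}
```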

## Use Case
Some use cases might involve submitting a `batch` type call in either
main, fallback or both.

Resolves #6000

Polkadot Address: 1HbdqutFR8M535LpbLFT41w3j7v9ptEYGEJKmc6PKpqthZ8

---------

Co-authored-by: rainbow-promise <154476501+rainbow-promise@users.noreply.github.com>
Co-authored-by: Guillaume Thiolliere <gui.thiolliere@gmail.com>
Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Closes #4315

---------

Co-authored-by: Guillaume Thiolliere <guillaume.thiolliere@parity.io>
Update to latest version of `frame-metadata` in order to support pallet
view function metadata.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Add a CLI option to skip searching receipts for blocks older than the
specified limit.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Returning an iterator in `TracksInfo::tracks()` instead of a static
slice allows for more flexible implementations of `TracksInfo` that can
use the chain storage, without incurring much of the performance/memory
penalty we would have if we returned an owned `Vec` instead.
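
As a small illustration of the flexibility this buys (a sketch, not the actual `TracksInfo` trait), an implementation can now lazily yield tracks, e.g. straight from storage, instead of materialising an owned `Vec`:

```rust
/// Illustrative stand-in for a track record.
struct Track {
    id: u16,
    name: String,
}

/// Lazily map stored ids to tracks instead of collecting them into a Vec.
fn tracks_from_storage(stored_ids: Vec<u16>) -> impl Iterator<Item = Track> {
    stored_ids.into_iter().map(|id| Track {
        id,
        name: format!("track-{id}"),
    })
}
```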

---------

Co-authored-by: Pablo Andrés Dorado Suárez <hola@pablodorado.com>
Preparation for AHM and making stuff public.

---------

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Dónal Murray <donal.murray@parity.io>
This PR modifies the libp2p networking-specific log targets for granular
control (e.g., just enabling trace for req-resp).

Previously, all logs were emitted to the `sub-libp2p` target, flooding the
log output on busy validators.

### Changes
- Discovery: `sub-libp2p::discovery`
- Notification/behaviour: `sub-libp2p::notification::behaviour`
- Notification/handler: `sub-libp2p::notification::handler`
- Notification/service: `sub-libp2p::notification::service`
- Notification/upgrade: `sub-libp2p::notification::upgrade`
- Request response: `sub-libp2p::request-response`

cc @paritytech/networking

---------

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
Co-authored-by: Dmitry Markin <dmitry@markin.tech>
Add emulated e2e tests for the following scenarios:

Exporting native asset to another ecosystem:
- Sending WNDs from Penpal Westend to Penpal Rococo: PPW->AHW->AHR->PPR
- Sending WNDs from Westend Relay to Penpal Rococo: W->AHW->AHR->PPR
   Example: Westend Treasury funds something on Rococo Parachain.

Importing native asset from another ecosystem to its native ecosystem:
- Sending ROCs from Penpal Westend to Penpal Rococo: PPW->AHW->AHR->PPR
- Sending ROCs from Penpal Westend to Rococo Relay: PPW->AHW->AHR->R
   Example: Westend Parachain returns some funds to Rococo Treasury.

Signed-off-by: Adrian Catangiu <adrian@parity.io>
Implements the `web3_clientVersion` method. This is a common requirement
for external Ethereum libraries when querying a client.

Fixes paritytech/contract-issues#26.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
# Description

resolve #6468



# Checklist

* [x] My PR includes a detailed description as outlined in the
"Description" and its two subsections above.
* [x] My PR follows the [labeling requirements](

https://github.com/paritytech/polkadot-sdk/blob/master/docs/contributor/CONTRIBUTING.md#Process
) of this project (at minimum one label for `T` required)
* External contributors: ask maintainers to put the right label on your
PR.
* [x] I have made corresponding changes to the documentation (if
applicable)
* [x] I have added tests that prove my fix is effective or that my
feature works (if applicable)

---------

Co-authored-by: command-bot <>
…ication (#7424)

closes #3610.

Helps #6344, but we need
to migrate the storage `Offences::Reports` before we can remove the
exposure dependency in RC pallets.

replaces #6788.

## Context  
Slashing in staking is unbounded currently, which is a major blocker
until staking can move to a parachain (AH).

### Current Slashing Process (Unbounded)  

1. **Offence Reported**  
- Offences include multiple validators, each with potentially large
exposure pages.
- Slashes are **computed immediately** and scheduled for application
after **28 eras**.

2. **Slash Applied**  
- All unapplied slashes are executed in **one block** at the start of
the **28th era**. This is an **unbounded operation**.


### Proposed Slashing Process (Bounded)  

1. **Offence Queueing**  
   - Offences are **queued** after basic sanity checks.  

2. **Paged Offence Processing (Computing Slash)**  
   - Slashes are **computed one validator exposure page at a time**.  
   - **Unapplied slashes** are stored in a **double map**:  
     - **Key 1 (k1):** `EraIndex`  
- **Key 2 (k2):** `(Validator, SlashFraction, PageIndex)` — a unique
identifier for each slash page

3. **Paged Slash Application**  
- Slashes are **applied one page at a time** across multiple blocks.
- Slash application starts at the **27th era** (one era earlier than
before) to ensure all slashes are applied **before stakers can unbond**
(which starts from era 28 onwards).

---

## Worst-Case Block Calculation for Slash Application  

### Polkadot:  
- **1 era = 24 hours**, **1 block = 6s** → **14,400 blocks/era**  
- On parachains (**12s blocks**) → **7,200 blocks/era**  

### Kusama:  
- **1 era = 6 hours**, **1 block = 6s** → **3,600 blocks/era**  
- On parachains (**12s blocks**) → **1,800 blocks/era**  

### Worst-Case Assumptions:  
- **Total stakers:** 40,000 nominators, 1000 validators. (Polkadot
currently has ~23k nominators and 500 validators)
- **Max slashed:** 50%, so 20k nominators, 250 validators.
- **Page size:** validators with multiple pages: (512 + 1)/2 = 256;
validators with a single page: 1

### Calculation:  
There might be a more accurate way to calculate this worst-case number,
and this estimate could be significantly higher than necessary, but it
shouldn’t exceed this value.

Blocks needed: 250 + 20,000/256 ≈ 330 blocks.

##  *Potential Improvement:*  
- Consider adding an **Offchain Worker (OCW)** task to further optimize
slash application in future updates.
- Dynamically batch unapplied slashes based on number of nominators in
the page, or process until reserved weight limit is exhausted.

----
## Summary of Changes  

### Storage  
- **New:**  
  - `OffenceQueue` *(StorageDoubleMap)*  
    - **K1:** Era  
    - **K2:** Offending validator account  
    - **V:** `OffenceRecord`  
  - `OffenceQueueEras` *(StorageValue)*  
    - **V:** `BoundedVec<EraIndex, BoundingDuration>`  
  - `ProcessingOffence` *(StorageValue)*  
    - **V:** `(Era, offending validator account, OffenceRecord)`  

- **Changed:**  
  - `UnappliedSlashes`:  
    - **Old:** `StorageMap<K -> Era, V -> Vec<UnappliedSlash>>`  
    - **New:** `StorageDoubleMap<K1 -> Era, K2 -> (validator_acc, perbill, page_index), V -> UnappliedSlash>` (see the sketch below)
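
A rough sketch of the re-keying, using ordinary maps as stand-ins for the storage items (all concrete types here are illustrative):

```rust
use std::collections::BTreeMap;

type EraIndex = u32;
type AccountId = [u8; 32];
type Perbill = u32; // stand-in for sp_runtime::Perbill
struct UnappliedSlash; // one page worth of slash data

// Old: one unbounded list of unapplied slashes per era.
type OldUnappliedSlashes = BTreeMap<EraIndex, Vec<UnappliedSlash>>;

// New: one entry per (validator, slash fraction, page index), keyed by era,
// so each page can be applied independently in its own block.
type NewUnappliedSlashes = BTreeMap<(EraIndex, (AccountId, Perbill, u32)), UnappliedSlash>;
```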

### Events  
- **New:**  
  - `SlashComputed { offence_era, slash_era, offender, page }`  
  - `SlashCancelled { slash_era, slash_key, payout }`  

### Error  
- **Changed:**  
  - `InvalidSlashIndex` → Renamed to `InvalidSlashRecord`  
- **Removed:**  
  - `NotSortedAndUnique`  
- **Added:**  
  - `EraNotStarted`  

### Call  
- **Changed:**  
  - `cancel_deferred_slash(era, slash_indices: Vec<u32>)`  
    → Now takes `Vec<(validator_acc, slash_fraction, page_index)>`  
- **New:**  
- `apply_slash(slash_era, slash_key: (validator_acc, slash_fraction,
page_index))`

### Runtime Config  
- `FullIdentification` is now set to a unit type (`()`) / null identity,
replacing the previous exposure type for all runtimes using
`pallet_session::historical`.

## TODO
- [x] Fixed broken `CancelDeferredSlashes`.
- [x] Ensure on_offence called only with validator account for
identification everywhere.
- [ ] Ensure we never need to read full exposure.
- [x] Tests for multi block processing and application of slash.
- [x] Migrate UnappliedSlashes 
- [x] Bench (crude, needs proper bench as followup)
  - [x] on_offence()
  - [x] process_offence()
  - [x] apply_slash()
 
 
## Followups (tracker
[link](#7596))
- [ ] OCW task to process offence + apply slashes.
- [ ] Minimum time for governance to cancel deferred slash.
- [ ] Allow root or staking admin to add a custom slash.
- [ ] Test HistoricalSession proof works fine with eras before removing
exposure as full identity.
- [ ] Properly bench offence processing and slashing.
- [ ] Handle Offences::Reports migration when removing validator
exposure as identity.

---------

Co-authored-by: Gonçalo Pestana <g6pestana@gmail.com>
Co-authored-by: command-bot <>
Co-authored-by: Kian Paimani <5588131+kianenigma@users.noreply.github.com>
Co-authored-by: Guillaume Thiolliere <gui.thiolliere@gmail.com>
Co-authored-by: kianenigma <kian@parity.io>
Co-authored-by: Giuseppe Re <giuseppe.re@parity.io>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Recreation of #7357 on
top of master. The old PR messes up the git history too much so I am
recreating it from scratch.

This PR is work in progress. Its purpose is to commit the initial structure
of `pallet-staking-ah-client` and `pallet-staking-rc-client` to master.
The changes will be polished by follow-up PRs, which will be
backported.

Related issues: #6167
and #6166

This PR introduces the initial structure for `pallet-ah-client` and
`pallet-rc-client`. These pallets will reside on the relay chain and
AssetHub, respectively, and will manage the interaction between
`pallet-session` on the relay chain and `pallet-staking` on AssetHub.

Both pallets are experimental and not intended for production use.

TODOs:
- [ ] Probably handle
#6344 here.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Ankan <10196091+Ank4n@users.noreply.github.com>
closes #6508.

## TODO
- [x] Migrate storage `DisabledValidators` both in pallet-session and
pallet-staking.
- [ ] Test that disabled validator resets at era change.
- [ ] Add always sorted try-runtime test for `DisabledValidators`.
- [ ] More test coverage for the disabling logic.

---------

Co-authored-by: Gonçalo Pestana <g6pestana@gmail.com>
Co-authored-by: command-bot <>
Co-authored-by: Kian Paimani <5588131+kianenigma@users.noreply.github.com>
Co-authored-by: Guillaume Thiolliere <gui.thiolliere@gmail.com>
Co-authored-by: kianenigma <kian@parity.io>
Co-authored-by: Giuseppe Re <giuseppe.re@parity.io>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…#7543)

Some prdocs are invalid; `prdoc check` is failing for them. This also
broke the usage of parity-publish.

This PR fixes the invalid prdocs and adds a CI job to check that the prdocs
are valid. I don't think the check is unstable, considering it is a simple
YAML check, so I made the job required.

---------

Co-authored-by: Alexander Samusev <41779041+alvicsam@users.noreply.github.com>
…#7511)

Closes #7452 

Adds a new test for omni node in dev mode working correctly with
dev_chain_spec.json.

@skunert

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Moving exec tests into a new file

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Alexander Theißen <alex.theissen@me.com>
Add a new extrinsic `dispatch_as_fallible`.

It's almost the same as [`Pallet::dispatch_as`] but forwards any error
of the inner call.

Closes #219.

And add more unit tests to cover `dispatch_as` and
`dispatch_as_fallible`.

---

Polkadot address: 156HGo9setPcU2qhFMVWLkcmtCEGySLwNqa3DaEiYSWtte4Y

---------

Signed-off-by: Xavier Lau <x@acg.box>
Co-authored-by: Bastian Köcher <git@kchr.de>
Co-authored-by: Bastian Köcher <info@kchr.de>
Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Fix issue where setting the `remote_fees` field of `InitiateTransfer` to
`None` could lead to unintended bypassing of fees in certain conditions.

Changes made to fix this:
- `remote_fees: None` now results in the `UnpaidExecution` instruction
being appended *after* the origin altering instruction, be it
`AliasOrigin` or `ClearOrigin`. This means `preserve_origin: true` must
be set if you want to have any chance of not paying for fees.
- The `AliasOrigin` instruction is not appended if the executor is
called with the root location (`Here`) since it would alias to itself.
Although this self-aliasing could be done, it needs the ecosystem to add
a new aliasing instruction, so we just skip it.
- Tweaked the `AllowExplicitUnpaidExecutionFrom` barrier to allow
receiving assets (via teleport or reserve asset transfer) and altering
the origin before actually using `UnpaidExecution`. This is to allow
unpaid teleports to work with `InitiateTransfer`.
- For this, the barrier now executes origin altering instructions and
keeps track of the modified origin. It then checks if this final origin
has enough permissions to not pay for fees. In order to follow the
`AliasOrigin` instruction it now takes a new generic `Aliasers` that
should be set to the XCM config item of the same name. This new generic
has a default value of `()`, effectively disallowing the use of
`AliasOrigin`.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Adrian Catangiu <adrian@parity.io>
PR adds a reusable workflow that prevents CI from running on draft PRs. To run
CI on a draft PR, a new label is introduced: `A5-run-CI`. To run the CI, the
label should be added and an empty commit should be pushed.


close paritytech/ci_cd#1099
close #7168
Related to #7360

This PR implements `DecodeWithMemTracking` for the types in the FRAME
pallets.

The PR is verbose, but it's very simple. `DecodeWithMemTracking` is
simply derived for most of the types. There are only 3 exceptions which
are isolated into 2 separate commits.
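
As a hedged illustration of what the migration looks like for a typical type (the type here is made up, and I'm assuming the derive is exposed by the `codec` crate as in recent parity-scale-codec releases):

```rust
use codec::{Decode, DecodeWithMemTracking, Encode};

// The change is usually just adding `DecodeWithMemTracking` next to the
// existing codec derives.
#[derive(Encode, Decode, DecodeWithMemTracking)]
struct ExamplePalletData {
    value: u32,
    owner: [u8; 32],
}
```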
One more attempt to fix the link-checker job.

This time we exclude polymesh, which is reachable but not via lychee.
Also exclude Rust strings that contain "{}" or "{:?}".
…7549)

Closes: #7291 

## Description

Removes obsolete relayer CLI args from bridges and resolves the following
issue: #7291.
---------

Co-authored-by: Branislav Kontur <bkontur@gmail.com>
This PR contains a tiny fix for the release branch-off pipeline, so that
node version bump works again.
This PR adds an implementation of the `Ord`, `Eq`, `PartialOrd` and `PartialEq` traits
for the [`HashAndNumber`
](https://github.com/paritytech/polkadot-sdk/blob/6e645915639ee0bf682de06a0306a4baf712c1d2/substrate/primitives/blockchain/src/header_metadata.rs#L149-L154)
struct.

---------

Co-authored-by: cmd[bot] <41898282+github-actions[bot]@users.noreply.github.com>