
[pull] master from paritytech:master #2

Merged

merged 164 commits into master from paritytech:master on Nov 28, 2023

Conversation


@pull pull bot commented Nov 3, 2023

See Commits and Changes for more details.


Created by pull[bot]

Can you help keep this open source service alive? 💖 Please sponsor : )

…ge (#2139)

Right now, governance can only control the byte-fee component of Rococo <>
Westend message fees (paid at Asset Hubs). This PR changes that:
1) governance is now allowed to control both fee components - the byte fee and
the base fee;
2) the base fee now includes the cost of "default" delivery and confirmation
transactions, in addition to the `ExportMessage` instruction cost.
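As a hedged illustration of the second point (the function and parameter names are placeholders, not the runtime's):

```rust
type Balance = u128;

/// Illustrative only: the base fee described above bundles the
/// `ExportMessage` instruction cost with the costs of "default" delivery
/// and confirmation transactions.
fn base_fee(
    export_message_cost: Balance,
    delivery_tx_cost: Balance,
    confirmation_tx_cost: Balance,
) -> Balance {
    export_message_cost
        .saturating_add(delivery_tx_cost)
        .saturating_add(confirmation_tx_cost)
}
```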
@pull pull bot added the ⤵️ pull label Nov 3, 2023
Bullrich and others added 28 commits November 3, 2023 13:43
Added an if condition on review-bot's trigger so it does not trigger on
`draft` PRs.
The check_hardware function does not give us much information about what is
failing, so let's return the list of failed metrics so that callers can print
it.

This makes debugging easier than trying to guess which dimension is actually
failing.
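A minimal sketch of the idea, with simplified placeholder types (not the real hardware-benchmark definitions):

```rust
/// A hardware metric that failed its requirement. Placeholder type for
/// illustration only.
#[derive(Debug)]
pub struct FailedMetric {
    pub name: &'static str,
    pub expected: f64,
    pub actual: f64,
}

/// Instead of returning a bare `bool`, return the failed metrics so that
/// callers can print exactly which dimension did not meet the requirement.
pub fn check_hardware(
    measured: &[(&'static str, f64, f64)], // (name, expected, actual)
) -> Result<(), Vec<FailedMetric>> {
    let failed: Vec<FailedMetric> = measured
        .iter()
        .copied()
        .filter(|(_, expected, actual)| actual < expected)
        .map(|(name, expected, actual)| FailedMetric { name, expected, actual })
        .collect();
    if failed.is_empty() {
        Ok(())
    } else {
        Err(failed)
    }
}
```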

Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
# Description

Update the bootnodes of the Kusama parachains before decommissioning the
nodes. This will avoid connecting to non-existent bootnodes.
This changes the `BlockCollection` logic so that we don't download block
ranges from peers when those ranges are already being synced.

Improves the situation with
#1915.
fixes #182

This PR adds a document with recommendations for how deprecations should
be handled. Initiated within FRAME, this checklist could be extended to
the rest of the repo.

I want to quote here a comment from @kianenigma that summarizes the
spirit of this new document:
> I would see it as a guideline of "what an extensive deprecation
process looks like". As the author of a PR, you should match this
against your "common sense" and see if it is needed or not. Someone else
can nudge you to "hey, this is an important PR, you should go through
the deprecation process".
> 
> For some trivial things, all the steps might be overkill.

---------

Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com>
The `BlockBuilderProvider` trait was defined in `sc-block-builder` and
implemented for `Client`. This basically meant that you needed to import
`sc-block-builder` anyway to have access to the block builder, so the trait
was not providing any real value. This pull request removes said trait.
In its place it introduces a builder for creating a `BlockBuilder`. The
builder currently has the quite fabulous name `BlockBuilderBuilder` (I'm
open to any better name 😅). The rest of the pull request replaces the
old trait with the new builder.

# Downstream code changes

If you used `new_block` or `new_block_at` before, you now need to switch
to the new `BlockBuilderBuilder` pattern:

```rust
// `new` requires a type that implements `CallApiAt`.
let mut block_builder = BlockBuilderBuilder::new(client)
    // Specify the hash of the parent block the new block will be built on.
    .on_parent_block(at)
    // The block builder also needs the block number of the parent block.
    // Here it is fetched from the given `client` using the `HeaderBackend`;
    // alternatively, `with_parent_block_number` passes the number directly.
    .fetch_parent_block_number(client)
    .unwrap()
    // Enable proof recording if required. This call is optional.
    .enable_proof_recording()
    // Pass the digests. This call is optional.
    .with_inherent_digests(digests)
    .build()
    .expect("Creates new block builder");
```

---------

Co-authored-by: Sebastian Kunert <skunert49@gmail.com>
Co-authored-by: command-bot <>
This PR is a follow-up to #1661

- [x] rename the `simple` module to `legacy`
- [x] fix benchmarks to disregard the number of additional fields
- [x] change the storage deposits to charge per encoded byte of the
identity information instance, removing the need for `fn
additional(&self) -> usize` in `IdentityInformationProvider` (see the
sketch after this list)
- [x] ~add an extrinsic to rejig deposits to account for the change
above~
- [ ] ~ensure through proper configuration that the new byte-based
deposit is always lower than whatever is reserved now~
- [x] remove `IdentityFields` from the `set_fields` extrinsic signature,
as per [this
discussion](#1661 (comment))
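A minimal sketch of the byte-based deposit idea, assuming a SCALE-encodable identity type (`identity_deposit` and its parameters are illustrative, not the pallet's actual code):

```rust
use codec::Encode;

type Balance = u128;

/// Illustrative helper: the deposit is charged per encoded byte of the
/// identity information instance, so no `fn additional(&self)` hook is
/// needed to account for extra fields.
fn identity_deposit<Info: Encode>(info: &Info, base: Balance, per_byte: Balance) -> Balance {
    base.saturating_add(per_byte.saturating_mul(info.encoded_size() as Balance))
}
```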

> ensure through proper configuration that the new byte-based deposit is
always lower than whatever is reserved now

Not sure this is needed anymore. If the new deposits are higher than
what is currently on chain and users don't have enough funds to reserve
what is needed, the extrinsic fails and they're basically grandfathered
and frozen until they add more funds and/or make a change to their
identity. This behavior seems fine to me. Original idea
[here](#1661 (comment)).

> add an extrinsic to rejig deposits to account for the change above

This was initially implemented but now removed from this PR in favor of
the implementation detailed
[here](#2088).

---------

Signed-off-by: georgepisaltu <george.pisaltu@parity.io>
Co-authored-by: joepetrowski <joe@parity.io>
This PR removes the `GenesisExt` wrapper over the `GenesisRuntimeConfig`
in `cumulus-test-service`. Initialization of values that was previously
performed by `GenesisExt::BuildStorage` has been moved into the
`test_pallet` genesis.

---------

Co-authored-by: command-bot <>
Co-authored-by: Bastian Köcher <git@kchr.de>
closes #2020.

This improves the running time of the pallet-bags-list try-runtime checks
on Westend from ~90 minutes to ~6 seconds on an M2 Pro.
Should help #234.
Related to #2020 and
#2108.

Refactors and improves the running time of the try-runtime checks for the
staking pallet.

Tested on Westend on my M2 Pro: running time drops from 90 seconds to 7
seconds.
This PR prepares chain specs for a _native-runtime-free_ world.

This PR has the following changes:
- `substrate`:
  - adds support for:
    - a JSON-based `GenesisConfig` in `ChainSpec`, allowing interaction with the runtime `GenesisBuilder` API,
    - interacting with an arbitrary runtime wasm blob in the [`chain-spec-builder`](https://github.com/paritytech/substrate/blob/3ef576eaeb3f42610e85daecc464961cf1295570/bin/utils/chain-spec-builder/src/lib.rs#L46) command-line util,
  - removes [`code`](https://github.com/paritytech/substrate/blob/3ef576eaeb3f42610e85daecc464961cf1295570/frame/system/src/lib.rs#L660) from the `system` pallet,
  - adds `code` to the `ChainSpec`,
  - deprecates [`ChainSpec::from_genesis`](https://github.com/paritytech/substrate/blob/3ef576eaeb3f42610e85daecc464961cf1295570/client/chain-spec/src/chain_spec.rs#L263), but also changes the signature of this method, extending it with a `code` argument; [`ChainSpec::builder()`](https://github.com/paritytech/substrate/blob/20bee680ed098be7239cf7a6b804cd4de267983e/client/chain-spec/src/chain_spec.rs#L507) should be used instead.
- `polkadot`:
  - all references to `RuntimeGenesisConfig` in `node/service` are removed,
  - all `(kusama|polkadot|versi|rococo|wococo)_(staging|dev)_genesis_config` functions now return the JSON patch for the default runtime `GenesisConfig`,
  - `ChainSpecBuilder` is used, `ChainSpec::from_genesis` is removed.
- `cumulus`:
  - `ChainSpecBuilder` is used, `ChainSpec::from_genesis` is removed,
  - a _JSON_ patch configuration is used instead of the `RuntimeGenesisConfig` struct in all chain specs.
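As a hedged sketch of the new pattern (the names and genesis patch values are placeholders, and the `GenericChainSpec` type parameters are simplified; the builder methods follow the `ChainSpec::builder()` API linked above):

```rust
use sc_chain_spec::{ChainType, GenericChainSpec};

/// A sketch of building a chain spec with the new builder API; the wasm
/// blob, names, and genesis patch below are placeholder values.
fn development_chain_spec(wasm_binary: &[u8]) -> GenericChainSpec {
    GenericChainSpec::builder(wasm_binary, Default::default())
        .with_name("Development")
        .with_id("dev")
        .with_chain_type(ChainType::Development)
        // A JSON patch applied on top of the runtime's default `GenesisConfig`.
        .with_genesis_config_patch(serde_json::json!({
            "balances": {
                "balances": [["5GrwvaEF5zXb26Fz9rcQpDWS57CtERHpNehXCPcNoHGKutQY", 1u64 << 60]]
            }
        }))
        .build()
}
```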
  
---------

Co-authored-by: command-bot <>
Co-authored-by: Javier Viola <javier@parity.io>
Co-authored-by: Davide Galassi <davxy@datawok.net>
Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com>
Co-authored-by: Kevin Krone <kevin@parity.io>
Co-authored-by: Bastian Köcher <git@kchr.de>
Related to #2013

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Otherwise the return code is not correctly propagated (ref
ggwpez/zepter#48).

---------

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Move peer banning from `ChainSync` to `SyncingEngine`.
This PR updates the version of `serde_json` to `1.0.108` throughout the
codebase.
... see #2138 for why this
is not good. Until we fix it, let's add a warning so we can tell whether
this is happening in the wild.

---------

Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
Co-authored-by: Bastian Köcher <git@kchr.de>
This PR exposes a `force_remove_vesting` call, callable only by ROOT.
See the linked
[issue](#269).

---------

Co-authored-by: georgepisaltu <52418509+georgepisaltu@users.noreply.github.com>
Co-authored-by: command-bot <>
Co-authored-by: Dónal Murray <donal.murray@parity.io>
A quick fix where a benchmark test was wrongly renamed in PR
#1868
…certificate (#1178)

**_PR migrated from https://github.com/paritytech/polkadot/pull/6782_** 

This PR upgrades the network protocol to version 3 (`VStaging`, which
will later be renamed to `V3`). This version introduces a new kind of
assignment certificate that will be used for tranche0 assignments.
Instead of issuing/importing one tranche0 assignment per candidate,
there will be just one certificate per relay chain block per validator.
However, we will not be sending out the new assignment certificates
yet, so everything should work exactly as before. Once the majority of
validators have upgraded to the new protocol version, we will enable
the new certificates (starting at a specific relay chain block) with a
new client update.

There are still a few things that need to be done:

- [x] Use bitfield instead of `Vec<CandidateIndex>`: paritytech/polkadot#6802
  - [x] Fix existing approval-distribution and approval-voting tests
  - [x] Fix bitfield-distribution and statement-distribution tests
  - [x] Fix network bridge tests
  - [x] Implement todos in the code
  - [x] Add tests to cover new code
  - [x] Update metrics
  - [x] Remove the approval distribution aggression levels: TBD PR
  - [x] Parachains DB migration
  - [x] Test network protocol upgrade on Versi
  - [x] Versi load test
  - [x] Add Zombienet test
  - [x] Documentation updates
- [x] Fix for sending `DistributeAssignment` for each candidate claimed by a v2 assignment (warning: Importing locally an already known assignment)
- [x] Fix `AcceptedDuplicate`
- [x] Fix DB migration so that we can still keep old data
- [x] Final Versi burn-in
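A simplified sketch of the core idea, using illustrative types rather than the real protocol definitions (bitfield via the `bitvec` crate):

```rust
use bitvec::prelude::{BitVec, Lsb0};

type Hash = [u8; 32];
type ValidatorIndex = u32;

/// Illustrative only: a v2 tranche0 assignment claims all of a validator's
/// assigned candidates in one relay chain block with a single certificate,
/// instead of one certificate per candidate.
struct AssignmentCertV2 {
    relay_block: Hash,
    validator: ValidatorIndex,
    /// Bit `i` set => the candidate at index `i` in this block is claimed.
    claimed_candidates: BitVec<u8, Lsb0>,
}
```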

---------

Signed-off-by: Andrei Sandu <andrei-mihail@parity.io>
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
Co-authored-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
…2179)

The availability-distribution subsystem does not send availability-recovery
messages. Update the overseer declaration to reflect this.
### This PR is a port of this [PR for substrate](paritytech/substrate#13013) by @kianenigma

Add the infrastructure needed to have a `Pallet::decode_entire_state()`,
which makes sure all "typed" storage items defined in the pallet are
decodable.

This is not enforced in any way at the moment. Teams who wish to
integrate/use this under the `try-runtime` feature flag should add
`frame_support::storage::migration::EnsureStateDecodes` as the LAST ITEM
of the runtime's custom migrations and pass it to `frame-executive`. This
will make it usable in try-runtime's on-runtime-upgrade.
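A sketch of that wiring, assuming `EnsureStateDecodes` at the path quoted above (the commented-out migration is a hypothetical placeholder):

```rust
/// Custom migrations run by `Executive`; `EnsureStateDecodes` goes last so
/// it checks that state decodes only after all real migrations have run.
pub type Migrations = (
    // ... your real migrations first, e.g. (hypothetical):
    // pallet_example::migrations::v2::MigrateToV2<Runtime>,
    frame_support::storage::migration::EnsureStateDecodes,
);

pub type Executive = frame_executive::Executive<
    Runtime,
    Block,
    frame_system::ChainContext<Runtime>,
    Runtime,
    AllPalletsWithSystem,
    Migrations, // passed to frame-executive, as described above
>;
```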

This now catches cases like
#1969:
```pre
ERROR runtime::executive] failed to decode the value at key: Failed to decode value at key: 0x94eadf0156a8ad5156507773d0471e4ab8ebad86f546c7e0b135a4212aace339. Storage info StorageInfo { pallet_name: Ok("ParaScheduler"), storage_name: Ok("AvailabilityCores"), prefix: Err(Utf8Error { valid_up_to: 0, error_len: Some(1) }), max_values: Some(1), max_size: None }. Raw value: Some("0x0c010101010101")
```

... or:

![image](https://github.com/paritytech/polkadot-sdk/assets/10380170/73052d4f-4da5-4b21-a8dd-b17004e5965e)

Closes #241

---------

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: Liam Aharon <liam.aharon@hotmail.com>
…1297)

Original PR paritytech/substrate#14641

---

Closes #109

### Problem
Quoting from the above issue:

> When adding a pallet to chain after genesis we currently don't set the
StorageVersion. So, when calling on_chain_storage_version it returns 0
while the pallet is maybe already at storage version 9 when it was added
to the chain. This could lead to issues when running migrations.

### Solution

- Create a new trait `BeforeAllRuntimeMigrations` with a single method
`fn before_all_runtime_migrations() -> Weight` that has a no-op default
implementation
- Modify `Executive` to call
`BeforeAllRuntimeMigrations::before_all_runtime_migrations` for all
pallets before running any other hooks
- Implement `BeforeAllRuntimeMigrations` in the pallet proc macro to
initialize the on-chain version to the current pallet version if the
pallet has no storage set (indicating it has been recently added to the
runtime and needs to have its version initialised); sketches follow below
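
A minimal sketch of the new trait with its no-op default (the real definition lives in `frame_support`):

```rust
use frame_support::weights::Weight;

/// Runs for every pallet before any other runtime-upgrade hook.
/// Sketch only; see `frame_support` for the real trait.
pub trait BeforeAllRuntimeMigrations {
    /// No-op by default, so existing pallets are unaffected.
    fn before_all_runtime_migrations() -> Weight {
        Weight::zero()
    }
}
```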

### Other changes in this PR

- Abstracted repeated boilerplate to access the `pallet_name` in the
pallet expand proc macro.

### FAQ

#### Why create a new hook instead of adding this logic to the pallet `pre_upgrade`?

`Executive` currently runs `COnRuntimeUpgrade` (custom migrations)
before the `AllPalletsWithSystem` migrations. We need versions to be
initialized before the `COnRuntimeUpgrade` migrations run, because
`COnRuntimeUpgrade` migrations may use the on-chain version for critical
logic, e.g. `VersionedRuntimeUpgrade` uses it to decide whether or not
to execute.

We cannot reorder `COnRuntimeUpgrade` and `AllPalletsWithSystem` so that
`AllPalletsWithSystem` runs first, because the `AllPalletsWithSystem`
pallets have logic in their `post_upgrade` hooks verifying that the
on-chain version and the current pallet version match. A common use case
of `COnRuntimeUpgrade` migrations is to perform a migration that results
in the versions matching, so if they were reordered these `post_upgrade`
checks would fail.

#### Why init the on-chain version for pallets without a current storage version?

We must init the on-chain version for pallets even if they don't have a
defined storage version, so that if there is a future version bump, the
on-chain version is not automatically set to the new version without a
proper migration.

e.g. bad scenario:

1. A pallet with no 'current version' is added to the runtime
2. Later, the pallet is upgraded with the 'current version' getting set
to 1, and a migration is added to the `Executive` migrations to migrate
the storage from 0 to 1
    a. Runtime upgrade occurs
    b. The `before_all` hook initializes the on-chain version to 1
    c. The migration's `on_runtime_upgrade` executes and sees the
on-chain version is already 1, therefore thinks storage is already
migrated, and does not execute the storage migration

Now the on-chain version is 1 but storage is still at version 0.

By always initializing the on-chain version when the pallet is added to
the runtime we avoid that scenario.
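
And a hedged sketch of what the macro-generated implementation could look like (`pallet_has_no_storage` is a hypothetical helper standing in for the macro's real "is this pallet's storage empty?" check):

```rust
use frame_support::traits::GetStorageVersion;
use frame_support::weights::Weight;

impl<T: Config> BeforeAllRuntimeMigrations for Pallet<T> {
    fn before_all_runtime_migrations() -> Weight {
        // Only touch the version if the pallet has no storage set at all,
        // i.e. it was just added to the runtime.
        if pallet_has_no_storage::<Pallet<T>>() {
            // Initialize the on-chain version to the current pallet version.
            Pallet::<T>::current_storage_version().put::<Pallet<T>>();
        }
        T::DbWeight::get().reads_writes(1, 1)
    }
}
```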

---------

Co-authored-by: Kian Paimani <5588131+kianenigma@users.noreply.github.com>
Co-authored-by: Bastian Köcher <git@kchr.de>
…ml (#2191)

There was a merge race between
#1256 and
#1178, so this newly
added test wasn't updated with the new path for the configuration; fix
that.

Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
Part of #2186

The only usage of pallet-asset-rate is guarded by the `runtime-benchmarks`
feature. I don't want ORML to be forced to include this pallet in its
dependencies for no good reason.
smohan-dw pushed a commit that referenced this pull request Feb 17, 2024
1. Benchmark results are collected in a single struct.
2. The output of the results is prettified.
3. The result struct is used to save the output as YAML and store it as an
artifact in a CI job.
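A hypothetical shape for that result struct, inferred from the YAML output shown below (field names mirror the YAML keys; this is a sketch, not the actual code):

```rust
use serde::Serialize;

/// One row of the usage tables below, e.g. "Received from peers".
#[derive(Serialize)]
struct ResourceUsage {
    resource: String,
    total: f64,
    per_block: f64,
}

/// All results of a single benchmark run, serializable to the YAML
/// produced by the `--ci` flag.
#[derive(Serialize)]
struct BenchmarkOutput {
    benchmark_name: String,
    network: Vec<ResourceUsage>,
    cpu: Vec<ResourceUsage>,
}
```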

```
$ cargo run -p polkadot-subsystem-bench --release -- test-sequence --path polkadot/node/subsystem-bench/examples/availability_read.yaml | tee output.txt
$ cat output.txt

polkadot/node/subsystem-bench/examples/availability_read.yaml #1

Network usage, KiB                     total   per block
Received from peers               510796.000  170265.333
Sent to peers                        221.000      73.667

CPU usage, s                           total   per block
availability-recovery                 38.671      12.890
Test environment                       0.255       0.085


polkadot/node/subsystem-bench/examples/availability_read.yaml #2

Network usage, KiB                     total   per block
Received from peers               413633.000  137877.667
Sent to peers                        353.000     117.667

CPU usage, s                           total   per block
availability-recovery                 52.630      17.543
Test environment                       0.271       0.090


polkadot/node/subsystem-bench/examples/availability_read.yaml #3

Network usage, KiB                     total   per block
Received from peers               424379.000  141459.667
Sent to peers                        703.000     234.333

CPU usage, s                           total   per block
availability-recovery                 51.128      17.043
Test environment                       0.502       0.167

```

```
$ cargo run -p polkadot-subsystem-bench --release -- --ci test-sequence --path polkadot/node/subsystem-bench/examples/availability_read.yaml | tee output.txt
$ cat output.txt
- benchmark_name: 'polkadot/node/subsystem-bench/examples/availability_read.yaml #1'
  network:
  - resource: Received from peers
    total: 509011.0
    per_block: 169670.33333333334
  - resource: Sent to peers
    total: 220.0
    per_block: 73.33333333333333
  cpu:
  - resource: availability-recovery
    total: 31.845848445
    per_block: 10.615282815
  - resource: Test environment
    total: 0.23582828799999941
    per_block: 0.07860942933333313

- benchmark_name: 'polkadot/node/subsystem-bench/examples/availability_read.yaml #2'
  network:
  - resource: Received from peers
    total: 411738.0
    per_block: 137246.0
  - resource: Sent to peers
    total: 351.0
    per_block: 117.0
  cpu:
  - resource: availability-recovery
    total: 18.93596025099999
    per_block: 6.31198675033333
  - resource: Test environment
    total: 0.2541994199999979
    per_block: 0.0847331399999993

- benchmark_name: 'polkadot/node/subsystem-bench/examples/availability_read.yaml #3'
  network:
  - resource: Received from peers
    total: 424548.0
    per_block: 141516.0
  - resource: Sent to peers
    total: 703.0
    per_block: 234.33333333333334
  cpu:
  - resource: availability-recovery
    total: 16.54178526900001
    per_block: 5.513928423000003
  - resource: Test environment
    total: 0.43960946299999537
    per_block: 0.14653648766666513
```

---------

Co-authored-by: Andrei Sandu <54316454+sandreim@users.noreply.github.com>