
Retry approval on availability failure if the check is still needed #6807

Open · wants to merge 8 commits into master from alexggh/approval_voting_retry

Conversation

@alexggh (Contributor) commented Dec 9, 2024

Recovering the PoV can fail in situations where the node has just restarted and the DHT topology hasn't been fully discovered yet, so the current node can't connect to most of its peers. This is bad because gossiping the assignment only requires being connected to a few peers, so the assignment still gets distributed, but since we can't recover the PoV we can't approve the candidate, and other nodes will see this as a no-show.

This becomes especially bad when a lot of nodes restart at the same time: you end up with a lot of no-shows in the network that are never covered. In that case it makes sense for nodes to retry approving the candidate at a later point in time, and to retry several times if the block containing the candidate still hasn't been approved.
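
Roughly, the idea looks like the following minimal sketch (hypothetical helper and names, not the actual implementation; it assumes tokio for the sleep): retry the approval work a bounded number of times with a fixed back-off, and stop early once the candidate no longer needs to be approved.

    use std::{future::Future, time::Duration};

    async fn retry_with_backoff<F, Fut, N, NFut>(
        mut attempts_remaining: u32,
        backoff: Duration,
        mut attempt: F,
        mut still_needed: N,
    ) -> bool
    where
        F: FnMut() -> Fut,
        Fut: Future<Output = bool>,   // true = PoV recovered and candidate approved
        N: FnMut() -> NFut,
        NFut: Future<Output = bool>,  // true = the block containing the candidate is still unapproved
    {
        loop {
            if attempt().await {
                return true;
            }
            if attempts_remaining == 0 {
                return false;
            }
            attempts_remaining -= 1;
            // Wait before retrying; after a restart, more peers should be
            // discoverable by the time the back-off elapses.
            tokio::time::sleep(backoff).await;
            if !still_needed().await {
                // Another assignment covered the no-show, no point retrying.
                return false;
            }
        }
    }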

TODO

  • Add a subsystem test.

Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
@alexggh alexggh requested review from sandreim, ordian, eskimor and AndreiEres and removed request for sandreim December 9, 2024 16:27
@Polkadot-Forum

This pull request has been mentioned on Polkadot Forum. There might be relevant details there:

https://forum.polkadot.network/t/2025-11-25-kusama-parachains-spammening-aftermath/11108/1

@burdges commented Dec 11, 2024

Aside: JAM puts validators' IPs etc. on-chain. We could figure out if on-chain is better? Ideally we'd want on-chain for equivocation defenses anyways.

@AndreiEres (Contributor) left a comment

Can this somehow interfere with normal operation?

polkadot/node/core/approval-voting/src/lib.rs (review thread resolved)
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
@alexggh alexggh force-pushed the alexggh/approval_voting_retry branch from 5835565 to 1346e76 on December 20, 2024 15:22
@alindima (Contributor) left a comment

LGTM

@ordian (Member) commented Dec 23, 2024

Alternatively, we could change the
[linked code]
to try to connect if we are disconnected. This doesn't have the more granular retry functionality, but it is a much simpler change.

@alindima (Contributor)

Alternatively, we could change the
[linked code]
to try to connect if we are disconnected. This doesn't have the more granular retry functionality, but it is a much simpler change.

We could do this, but I doubt this will make a difference. If the strategy of fetching from backers fails, we fall back to chunk fetching which uses IfDisconnected::TryConnect.

Alex is saying that:

the current node can't connect to most of its peers.

If it were just an issue of not trying to connect, then the chunk recovery should have worked.

@@ -493,6 +501,8 @@ impl ApprovalVotingSubsystem {
 metrics,
 Arc::new(SystemClock {}),
 spawner,
+MAX_APPROVAL_RETRIES,
+APPROVAL_CHECKING_TIMEOUT / 2,
Member:

Looking at this constant, I can see it was introduced in paritytech/polkadot#3306, but I don't recall what its purpose was, or that of approval_outcome (apart from Approved), which we don't even seem to log.

What is the rationale for setting the retry_backoff to 1 minute?

@alexggh (Contributor Author):

It can't be higher than APPROVAL_CHECKING_TIMEOUT because of this:

match work.timeout(APPROVAL_CHECKING_TIMEOUT).await {

What is the rationale for setting the retry_backoff to 1 minute?

One minute seems to satisfy the following requirements for me: it is rare enough not to create too much load and to give the block time to be finalized via no-show cover-up, and it is small enough to actually help with finality in case the no-show from this particular node is the one holding back approval.
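
To make the constraint concrete, a small illustration (using tokio::time::timeout as a stand-in for the timeout combinator used in the code; the constant values are the ones discussed in this thread, everything else is hypothetical): the approval work is raced against APPROVAL_CHECKING_TIMEOUT, so any back-off slept inside it must leave room for the retry itself.

    use std::time::Duration;

    // Values as discussed here; names are illustrative.
    const APPROVAL_CHECKING_TIMEOUT: Duration = Duration::from_secs(120);
    const RETRY_BACKOFF: Duration = Duration::from_secs(60); // APPROVAL_CHECKING_TIMEOUT / 2

    async fn approval_work_with_retry() {
        // First attempt fails with an availability error ...
        tokio::time::sleep(RETRY_BACKOFF).await;
        // ... the second attempt now has at most
        // APPROVAL_CHECKING_TIMEOUT - RETRY_BACKOFF left before the outer timeout fires.
    }

    async fn run() {
        // The whole work future is raced against the timeout, so a back-off of
        // APPROVAL_CHECKING_TIMEOUT or more would always be cancelled mid-sleep.
        let _ = tokio::time::timeout(APPROVAL_CHECKING_TIMEOUT, approval_work_with_retry()).await;
    }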

@ordian (Member) commented Jan 6, 2025

Aside: JAM puts validators' IPs etc. on-chain. We could figure out if on-chain is better? Ideally we'd want on-chain for equivocation defenses anyways.

Agree, we should explore this. This would also solve the problem with restarts and empty DHT cache.

@ordian (Member) left a comment

Approving it, but it feels like a workaround for an unreliable/slow authority discovery, which we should fix and then remove this workaround.

@alexggh (Contributor Author) commented Jan 6, 2025

Approving it, but it feels like a workaround for an unreliable/slow authority discovery, which we should fix and then remove this workaround.

I did discuss with @lexnv speeding up authority discovery by saving the cache to disk so it is available on restart. However, even with speedy authority discovery, I still think this retry would be a good thing to have for robustness, because it is a fallback for when networking calls fail for various other reasons; network calls are expected to fail every now and then.

@sandreim (Contributor) left a comment

LGTM! @ordian proposed an alternative solution which should live in AD, but I agree that this fix adds some robustness on top of AD.

Off the top of my head, a better place to implement PoV retry would be the availability recovery subsystem. This would be superior in terms of latency (it's push vs pull/poll), as this subsystem could keep track of all the PoVs that failed to be retrieved and retry fetching chunks immediately as soon as peers connect (or apply even more complex strategies, like speculative availability).

The only change needed in approval voting would be to notify av recovery that it's no longer interested in some PoV.

WDYT ?

polkadot/node/core/approval-voting/src/lib.rs (outdated; review thread resolved)
core_index,
session_index,
attempts_remaining: retry.attempts_remaining - 1,
backoff: retry.backoff,
Contributor:

As time passes the chances of connecting to enough peers increase. Wouldn't it make sense to decrease the back-off as the retry count increases? This would help approve the candidate faster.

@alexggh (Contributor Author):

Reconnecting to peers is on the order of minutes, so we wouldn't gain much by reducing the back-off. Also, with back-offs you usually want to increase the delay as the number of attempts grows, because you don't want to end up in a situation where many failed attempts start stampeding, which makes things worse. However, we can't increase it here because of this: #6807 (comment), so I think 1 min is an acceptable compromise.
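
For reference, the usual capped pattern alluded to here would look roughly like this (hypothetical helper, not part of the PR): grow the back-off but clamp it below a cap, which in this case would have to stay under APPROVAL_CHECKING_TIMEOUT, so close to the initial 1-minute value that a constant back-off is simpler.

    use std::time::Duration;

    // Double the back-off on each attempt, clamped so it never exceeds the cap.
    fn next_backoff(current: Duration, cap: Duration) -> Duration {
        current.saturating_mul(2).min(cap)
    }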

@alindima (Contributor) commented Jan 7, 2025

Off the top of my head, a better place to implement PoV retry would be the availability recovery subsystem. This would be superior in terms of latency (it's push vs pull/poll), as this subsystem could keep track of all the PoVs that failed to be retrieved and retry fetching chunks immediately as soon as peers connect (or apply even more complex strategies, like speculative availability).

The only change needed in approval voting would be to notify av recovery that it's no longer interested in some PoV.

I don't think this is particularly better. Latency of transmitting messages between subsystems should be negligible.
av-recovery is being used by multiple actors: dispute-coordinator, approval-voting and collator pov-recovery. I doubt that they all need the same amount of retries and in the same scenarios. In addition, I'd like to keep this separation of concerns that we have now. av-recovery is tasked with recovering the available data. If it's not possible, higher level code decides what needs to be done

@sandreim (Contributor) commented Jan 7, 2025

I don't think this is particularly better. Latency of transmitting messages between subsystems should be negligible. av-recovery is being used by multiple actors: dispute-coordinator, approval-voting and collator pov-recovery.

Yes, but why is this a bad thing? If PoV recovery fails, dispute-coordinator currently gives up, and it would be more robust to use a retry mechanism.

I doubt that they all need the same amount of retries and in the same scenarios. In addition, I'd like to keep this separation of concerns that we have now. av-recovery is tasked with recovering the available data.

My proposal doesn't change the concerns of av-recovery, but makes it more robust. It should be easy to pass the retry params in the RecoverAvailableData message.
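
A rough sketch of what that could look like (purely a hypothetical shape, not the actual AvailabilityRecoveryMessage definition in the codebase): callers would opt into retries per request.

    use std::time::Duration;

    // Hypothetical shape only; the real RecoverAvailableData message has different fields.
    pub struct RetryParams {
        pub max_attempts: u32,
        pub backoff: Duration,
    }

    pub enum AvailabilityRecoveryMessage {
        RecoverAvailableData {
            // ... existing fields (candidate receipt, session index, response sender, ...)
            retry: Option<RetryParams>, // None = today's behaviour: give up on failure
        },
        // ... other variants
    }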

Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
@alexggh (Contributor Author) commented Jan 7, 2025

Off the top of my head, a better place to implement PoV retry would be the availability recovery subsystem. This would be superior in terms of latency (it's push vs pull/poll), as this subsystem could keep track of all the PoVs that failed to be retrieved and retry fetching chunks immediately as soon as peers connect (or apply even more complex strategies, like speculative availability).

The only change needed in approval voting would be to notify av recovery that it's no longer interested in some PoV.

I don't think this is particularly better. Latency of transmitting messages between subsystems should be negligible. av-recovery is being used by multiple actors: dispute-coordinator, approval-voting and collator pov-recovery. I doubt that they all need the same amount of retries and in the same scenarios. In addition, I'd like to keep this separation of concerns that we have now. av-recovery is tasked with recovering the available data. If it's not possible, higher level code decides what needs to be done

Alright, I looked a bit into this, and moving it into availability-recovery wouldn't be a straightforward task. The reason is that the implementers of RecoveryStrategy are stateful, consumed, and can't be cloned, so you can't simply re-run the strategies from here:

while let Some(current_strategy) = self.strategies.pop_front() {

So you would end up having to propagate the failure here:

output = state.ongoing_recoveries.select_next_some() => {

and re-run the logic for building the recovery strategies from there, which means you also have to carry along the parameters needed for rebuilding the strategies array.

I don't think that would be better, easier, or less risky to understand and implement than the currently proposed approach, but it would give us the benefit that disputes could also opt in to use it.

Given that the current PR improves the situation, I would be inclined to keep the current approach rather than invest in getting this knob into availability-recovery; let me know what you think!

@paritytech-workflow-stopper

All GitHub workflows were cancelled due to the failure of one of the required jobs.
Failed workflow url: https://github.com/paritytech/polkadot-sdk/actions/runs/12654933804
Failed job name: cargo-clippy

@sandreim (Contributor) commented Jan 7, 2025

Alright, I looked a bit into this, and moving it into availability-recovery wouldn't be a straightforward task. The reason is that the implementers of RecoveryStrategy are stateful, consumed, and can't be cloned, so you can't simply re-run the strategies from here:

while let Some(current_strategy) = self.strategies.pop_front() {

So you would end up having to propagate the failure here:

output = state.ongoing_recoveries.select_next_some() => {

and re-run the logic for building the recovery strategies from there, which means you also have to carry along the parameters needed for rebuilding the strategies array.

I don't think that would be better, easier, or less risky to understand and implement than the currently proposed approach, but it would give us the benefit that disputes could also opt in to use it.

Given that the current PR improves the situation, I would be inclined to keep the current approach rather than invest in getting this knob into availability-recovery; let me know what you think!

The only major downside of the current approach is that for each retry we re-fetch the same chunks as in the previous attempt, so in the worst case we'd be fetching the PoV roughly 16 times, which increases the bandwidth cost.

@alindima (Contributor) commented Jan 8, 2025

Off the top of my head, a better place to implement PoV retry would be the availability recovery subsystem. This would be superior in terms of latency (it's push vs pull/poll), as this subsystem could keep track of all the PoVs that failed to be retrieved and retry fetching chunks immediately as soon as peers connect (or apply even more complex strategies, like speculative availability).

The only change needed in approval voting would be to notify av recovery that it's no longer interested in some PoV.

I don't think this is particularly better. Latency of transmitting messages between subsystems should be negligible. av-recovery is being used by multiple actors: dispute-coordinator, approval-voting and collator pov-recovery. I doubt that they all need the same amount of retries and in the same scenarios. In addition, I'd like to keep this separation of concerns that we have now. av-recovery is tasked with recovering the available data. If it's not possible, higher level code decides what needs to be done

Alright, I looked a bit into this, and moving it into availability-recovery wouldn't be a straightforward task. The reason is that the implementers of RecoveryStrategy are stateful, consumed, and can't be cloned, so you can't simply re-run the strategies from here:

while let Some(current_strategy) = self.strategies.pop_front() {

So you would end up having to propagate the failure here:

output = state.ongoing_recoveries.select_next_some() => {

and re-run the logic for building the recovery strategies from there, which means you also have to carry along the parameters needed for rebuilding the strategies array.

I don't think that would be better, easier, or less risky to understand and implement than the currently proposed approach, but it would give us the benefit that disputes could also opt in to use it.

Given that the current PR improves the situation, I would be inclined to keep the current approach rather than invest in getting this knob into availability-recovery; let me know what you think!

We could modify the RecoveryTask to retry the last strategy if it fails with an Unavailable error, up to a configurable number of retries X. Or just chain those X extra strategies onto the strategies: VecDeque<Box<dyn RecoveryStrategy<Sender>>> that's part of the task. A rough sketch of that option follows below.
The only param that needs to be recreated for the FetchChunks strategy is the number of validators.
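
Sketch of the chaining idea, with simplified stand-ins for the crate types so it reads self-contained (names follow the discussion; treat it as an illustration, not the actual availability-recovery code):

    use std::collections::VecDeque;

    // Simplified stand-ins, just to make the sketch compile on its own; the real
    // RecoveryStrategy trait and FetchChunks strategy live in availability-recovery.
    trait RecoveryStrategy<Sender> {}
    struct FetchChunksParams { n_validators: usize }
    struct FetchChunks { params: FetchChunksParams }
    impl FetchChunks { fn new(params: FetchChunksParams) -> Self { Self { params } } }
    impl<Sender> RecoveryStrategy<Sender> for FetchChunks {}

    // Append N extra chunk-fetching attempts so the recovery task itself retries
    // before reporting Unavailable; the only parameter FetchChunks needs to be
    // rebuilt is the validator count, as noted above.
    fn with_extra_chunk_attempts<Sender>(
        mut strategies: VecDeque<Box<dyn RecoveryStrategy<Sender>>>,
        n_validators: usize,
        extra_attempts: usize,
    ) -> VecDeque<Box<dyn RecoveryStrategy<Sender>>> {
        for _ in 0..extra_attempts {
            strategies.push_back(Box::new(FetchChunks::new(FetchChunksParams { n_validators })));
        }
        strategies
    }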

I'm not sure, however, if this would end up being easier to implement/reason about. Btw, we're doing a similar thing in collators for pov-recovery:

if self.candidates_in_retry.insert(block_hash) {

We only retry once there, and with no extra sleeps. It could indeed help us deduplicate this code and has the benefit that Andrei mentioned (that already-fetched chunks wouldn't be re-requested), but I'm personally also fine with logging an issue about this and handling it as a refactor if we get the time and it indeed turns out that the code is nice enough.

@alexggh (Contributor Author) commented Jan 8, 2025

Alright, I looked a bit into this, and moving it into availability-recovery wouldn't be a straightforward task. The reason is that the implementers of RecoveryStrategy are stateful, consumed, and can't be cloned, so you can't simply re-run the strategies from here:

Another reason why we can't isolate this logic just in availability-recovery is this:

match work.timeout(APPROVAL_CHECKING_TIMEOUT).await {

which would make the future calling into availability-recovery time out after only 2 minutes, so further refactoring in approval-voting would also be needed to use that approach.

It could indeed help us deduplicate this code and has the benefit that Andrei mentioned (that already-fetched chunks wouldn't be re-requested), but I'm personally also fine with logging an issue about this and handling it as a refactor if we get the time and it indeed turns out that the code is nice enough.

Given that I would want to have this retry in approval-voting sooner rather than later, I'm also in favour of logging an issue for this generic mechanism and merging the PR, considering the downsides are not that critical in my opinion.

Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
@sandreim (Contributor) commented Jan 8, 2025

It could indeed help us deduplicate this code and has the benefit that Andrei mentioned (that already-fetched chunks wouldn't be re-requested), but I'm personally also fine with logging an issue about this and handling it as a refactor if we get the time and it indeed turns out that the code is nice enough.

Given that I would want to have this retry in approval-voting sooner rather than later, I'm also in favour of logging an issue for this generic mechanism and merging the PR, considering the downsides are not that critical in my opinion.

Sounds good, let's go 🚀

7 participants