Retry approval on availability failure if the check is still needed #6807
base: master
Conversation
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
This pull request has been mentioned on Polkadot Forum. There might be relevant details there: https://forum.polkadot.network/t/2025-11-25-kusama-parachains-spammening-aftermath/11108/1
Aside: JAM puts validators' IPs etc. on-chain. We could figure out whether on-chain is better? Ideally we'd want on-chain for equivocation defences anyway.
Can this somehow interfere in normal mode?
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
Force-pushed from 5835565 to 1346e76.
LGTM
Alternatively, we could change polkadot-sdk/polkadot/node/network/availability-recovery/src/task/strategy/full.rs (line 89 at ca78179) to try to connect if we are disconnected. This doesn't have more granular retry functionality, but it is a much simpler change.
We could do this, but I doubt it will make a difference. If the strategy of fetching from backers fails, we fall back to chunk fetching, which uses … Alex is saying that:
> If it were just an issue of not trying to connect, then the chunk recovery should have worked.
@@ -493,6 +501,8 @@ impl ApprovalVotingSubsystem {
             metrics,
             Arc::new(SystemClock {}),
             spawner,
+            MAX_APPROVAL_RETRIES,
+            APPROVAL_CHECKING_TIMEOUT / 2,
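The two new constructor arguments are a retry budget and a backoff. A minimal sketch of how such constants could be declared, assuming a 2-minute `APPROVAL_CHECKING_TIMEOUT`; both concrete values here are illustrative, not taken from the actual crate:

```rust
use std::time::Duration;

// Assumed value for illustration; the real constant lives in the approval-voting crate.
const APPROVAL_CHECKING_TIMEOUT: Duration = Duration::from_secs(120);

// Hypothetical retry budget passed into `ApprovalVotingSubsystem`; the actual value
// is whatever this PR defines for `MAX_APPROVAL_RETRIES`.
const MAX_APPROVAL_RETRIES: u32 = 16;

// The backoff handed to the subsystem above: half the checking timeout, i.e. ~1 minute.
const RETRY_BACKOFF: Duration = Duration::from_secs(APPROVAL_CHECKING_TIMEOUT.as_secs() / 2);

fn main() {
    println!("retries={}, backoff={:?}", MAX_APPROVAL_RETRIES, RETRY_BACKOFF);
}
```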
Looking at this constant, I can see it was introduced in paritytech/polkadot#3306, but I don't recall what the purpose was of it and of `approval_outcome` (apart from `Approved`), which we don't seem to even log.
What is the rationale for setting the retry_backoff to 1 minute?
It can't be higher than `APPROVAL_CHECKING_TIMEOUT` because of this: `match work.timeout(APPROVAL_CHECKING_TIMEOUT).await {`
> What is the rationale for setting the retry_backoff to 1 minute?
1 minute seems to satisfy the following requirements for me: it is rare enough not to create too much load and to give the block time to be finalized thanks to no-show cover-up, and it is small enough to actually help with finality in case the no-show from this particular node is the one holding up the approval.
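To illustrate the constraint: the retry backoff runs inside the same future that is bounded by `APPROVAL_CHECKING_TIMEOUT`, so a backoff at or above that timeout would never get a chance to fire. A simplified, self-contained analogue follows; it uses `tokio` rather than the subsystem's actual timeout helper, and all values and function names are assumptions:

```rust
use std::time::Duration;
use tokio::time::{sleep, timeout};

const APPROVAL_CHECKING_TIMEOUT: Duration = Duration::from_secs(120); // assumed value
const RETRY_BACKOFF: Duration = Duration::from_secs(60); // must stay below the timeout

// Stand-in for the availability-recovery request; always fails in this sketch.
async fn try_recover_pov() -> Result<(), ()> {
    Err(())
}

// Stand-in for the approval work: try, then wait `RETRY_BACKOFF` between attempts.
async fn approval_work(max_retries: u32) -> Result<(), ()> {
    for attempt in 0..=max_retries {
        if try_recover_pov().await.is_ok() {
            return Ok(());
        }
        if attempt < max_retries {
            // If this sleep were >= APPROVAL_CHECKING_TIMEOUT, the outer timeout
            // below would always cancel the work before a retry could run.
            sleep(RETRY_BACKOFF).await;
        }
    }
    Err(())
}

#[tokio::main]
async fn main() {
    // Mirrors the shape of `match work.timeout(APPROVAL_CHECKING_TIMEOUT).await { .. }`.
    let outcome = timeout(APPROVAL_CHECKING_TIMEOUT, approval_work(1)).await;
    println!("{outcome:?}");
}
```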
Agreed, we should explore this. It would also solve the problem with restarts and an empty DHT cache.
Approving it, but it feels like a workaround for an unreliable/slow authority discovery, which we should fix and then remove this workaround.
I did discuss with @lexnv speeding up authority-discovery by saving the cache on disk so it is available on restart. However, even with speedy authority discovery, I still think this would be a good thing to have for robustness: it is a fallback when networking calls fail for various other reasons, and network calls are expected to fail every now and then.
LGTM! @ordian proposed an alternative solution which should live in AD, but I agree that this fix adds some robustness on top of AD.
Off the top of my head, a better place to implement PoV retry would be the availability-recovery subsystem. This would be superior in terms of latency (it's push vs pull/poll), as that subsystem could keep track of all the PoVs that failed to be retrieved and retry fetching chunks immediately as soon as peers connect (or use even more complex strategies, like speculative availability).
The only change needed in approval voting would be to notify av recovery that it's no longer interested in some PoV.
WDYT?
            core_index,
            session_index,
            attempts_remaining: retry.attempts_remaining - 1,
            backoff: retry.backoff,
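The hunk above decrements `attempts_remaining` while carrying the backoff over unchanged. A minimal sketch of that bookkeeping; the struct name and the plain `u32` index types are assumptions, not the PR's actual definitions:

```rust
use std::time::Duration;

// Hypothetical per-candidate retry state implied by the fields in the hunk above.
#[derive(Clone, Debug, PartialEq)]
struct RetryInfo {
    core_index: u32,
    session_index: u32,
    attempts_remaining: u32,
    backoff: Duration,
}

impl RetryInfo {
    // State for the next attempt, or `None` once the retry budget is exhausted.
    fn next_attempt(&self) -> Option<Self> {
        (self.attempts_remaining > 0).then(|| Self {
            attempts_remaining: self.attempts_remaining - 1,
            ..self.clone()
        })
    }
}

fn main() {
    let retry = RetryInfo {
        core_index: 0,
        session_index: 1,
        attempts_remaining: 2,
        backoff: Duration::from_secs(60),
    };
    assert_eq!(retry.next_attempt().unwrap().attempts_remaining, 1);
    assert_eq!(
        retry.next_attempt().unwrap().next_attempt().unwrap().next_attempt(),
        None
    );
}
```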
As time passes, the chances of connecting to enough peers increase; wouldn't it make sense to decrease the back-off as the retry count increases? This would help approve candidates faster.
Reconnecting to peers is in the order of minutes, so we wouldn't gain much by reducing the backoff. Also, with backoffs you usually want to increase them as the number of attempts increases, because you don't want to end up in a situation where many failed attempts start stampeding, which makes things worse. However, we can't increase it here because of this: #6807 (comment), so I think 1 min is an acceptable compromise.
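For reference, the usual pattern alluded to here is an exponential backoff capped below `APPROVAL_CHECKING_TIMEOUT`. A sketch of that alternative, which is not what this PR does; all values are illustrative:

```rust
use std::time::Duration;

// Capped exponential backoff, for comparison with the PR's constant 1-minute backoff.
fn backoff_for_attempt(base: Duration, attempt: u32, cap: Duration) -> Duration {
    base.saturating_mul(2u32.saturating_pow(attempt)).min(cap)
}

fn main() {
    let base = Duration::from_secs(15);
    let cap = Duration::from_secs(110); // must remain below APPROVAL_CHECKING_TIMEOUT
    for attempt in 0..5 {
        println!("attempt {attempt}: wait {:?}", backoff_for_attempt(base, attempt, cap));
    }
}
```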
I don't think this is particularly better. Latency of transmitting messages between subsystems should be negligible.
Yes, but why is this a bad thing? If PoV recovery fails, dispute-coordinator currently gives up, and it would be more robust to use a retry mechanism.
My proposal doesn't change the concerns of av-recovery, but makes it more robust. It should be easy to pass the retry params in the …
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
Alright, I looked a bit at this, and moving it into availability-recovery wouldn't be a straightforward task; the reason is that the implementers of …
I don't think that would be better, easier, or less risky to understand and implement than the currently proposed approach, but it would give us the benefit that disputes could also opt in to use it. Given that the current PR improves the situation, I would be inclined to keep the current approach rather than invest in getting this knob into availability-recovery; let me know what you think!
All GitHub workflows were cancelled due to the failure of one of the required jobs.
The only major downside of the current approach is that for each retry we re-fetch the same chunks as in the previous attempt, so in the worst case we'd be roughly fetching the PoV 16 times, which increases the bandwidth cost.
We could modify the … I'm not sure, however, if this would end up being easier to implement/reason about. Btw, we're doing a similar thing in collators for pov-recovery: …
Another reason why we can't isolate this logic just in availability-recovery is this: …
Given that I would want to have this retry in approval-voting sooner rather than later, I'm also in favour of logging an issue for this generic mechanism and merging the PR, considering the downsides are not that critical in my opinion.
Signed-off-by: Alexandru Gheorghe <alexandru.gheorghe@parity.io>
Sounds good, let's go 🚀
Recovering the PoV can fail in situations where the node has just restarted and the DHT topology hasn't been fully discovered yet, so the current node can't connect to most of its peers. This is bad because gossiping the assignment only requires being connected to a few peers, so the assignment goes out, but since we can't recover the PoV we can't approve the candidate, and other nodes will see this as a no-show.
This becomes a problem in the scenario where a lot of nodes restart at the same time: you end up with a lot of no-shows in the network that are never covered. In that case it makes sense for nodes to actually retry approving the candidate at a later point in time, and to retry several times if the block containing the candidate still hasn't been approved.
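A compact sketch of the decision this PR adds, under the assumption that the retry is gated on the candidate's block still needing approval and on a remaining attempt budget; the names and the 1-minute value come from the discussion above, while the function itself is illustrative, not the actual approval-voting code:

```rust
use std::time::Duration;

// Illustrative decision logic for "retry approval on availability failure if the
// check is still needed"; these are not the actual approval-voting types.
fn should_retry_approval(
    recovery_failed: bool,
    candidate_still_needs_approval: bool,
    attempts_remaining: u32,
) -> Option<Duration> {
    if recovery_failed && candidate_still_needs_approval && attempts_remaining > 0 {
        // Re-queue the approval work after the backoff (~1 minute per the discussion above).
        Some(Duration::from_secs(60))
    } else {
        // Either recovery succeeded, the block was already approved/finalized, or
        // the retry budget is exhausted; nothing to schedule.
        None
    }
}

fn main() {
    assert_eq!(should_retry_approval(true, true, 2), Some(Duration::from_secs(60)));
    assert_eq!(should_retry_approval(true, false, 2), None);
    assert_eq!(should_retry_approval(true, true, 0), None);
}
```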
TODO