Handle gappy state with a single snapshot, attempt 3 #363
Conversation
I don't fully remember the details, but may as well pluck this from the previous PR.
sync3/handler/handler.go (Outdated)
h.Dispatcher.OnInvalidateRoom(p.RoomID, joins, invites)

// 3. Destroy involved users' caches.
for _, userID := range involvedUsers {
`involvedUsers` is potentially very large (think Matrix HQ). It would be better to cut this down to the connected users first, then continue. Otherwise we will spin a lot of cycles/locks on users who aren't even on the upstream HS...
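A minimal sketch of what "cut this down to the connected users first" could look like, purely illustrative rather than the proxy's actual code: the `connected` set stands in for whatever the connection map can report, and `filterToConnected` is a hypothetical helper.

```go
package main

import "fmt"

// filterToConnected trims the (potentially huge) involvedUsers list down to users
// who actually hold a connection on this proxy, so per-user cache teardown only
// spends cycles/locks on users we are serving. The connected set is an assumed
// input; the real lookup would come from the connection map.
func filterToConnected(involvedUsers []string, connected map[string]struct{}) []string {
	out := make([]string, 0, len(involvedUsers))
	for _, userID := range involvedUsers {
		if _, ok := connected[userID]; ok {
			out = append(out, userID)
		}
	}
	return out
}

func main() {
	connected := map[string]struct{}{"@alice:example.org": {}}
	involved := []string{"@alice:example.org", "@bob:example.org", "@carol:example.org"}
	// Only @alice survives the filter, so only her caches would be destroyed.
	fmt.Println(filterToConnected(involved, connected))
}
```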
> It would be better to cut this down to the connected users first, then continue.
Does this run the risk of introducing a race?
And 9aa8f55 🤦
acquire mutex once, rather than N times
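A sketch of what that commit message describes, with hypothetical names rather than the proxy's real types: the batch takes the lock once up front instead of locking and unlocking inside the loop, once per user.

```go
package main

import "sync"

// UserCaches is an illustrative container guarding per-user caches with a single mutex.
type UserCaches struct {
	mu     sync.Mutex
	caches map[string]struct{}
}

// DestroyCaches acquires the mutex once for the whole batch, rather than N times
// (once per user) as a per-user lock/unlock inside the loop would.
func (u *UserCaches) DestroyCaches(userIDs []string) {
	u.mu.Lock()
	defer u.mu.Unlock()
	for _, userID := range userIDs {
		delete(u.caches, userID)
	}
}

func main() {
	u := &UserCaches{caches: map[string]struct{}{"@alice:example.org": {}}}
	u.DestroyCaches([]string{"@alice:example.org", "@bob:example.org"})
}
```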
Please do not merge this yet. I want to land #367 first as a safe deploy before trying a riskier bigger change.
A third iteration of "deal with the gappy poll problem".
I pulled all the independent bits out of #329 into other PRs. What was left fell into two categories: a) the original logic for trying to push out live updates alongside the gappy poll changes; and b) the logic for nuking connections. This PR consists of b) but not a).
Ancestry: #310 -> #329 -> this.
Closes #294.
Closes #232.