🐛 Add race option to detect raced codes #10899
base: main
Conversation
/assign
/area testing
@fabriziopandini: The specified target(s) for
The following commands are available to trigger optional jobs:
Use
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/test pull-cluster-api-e2e-main
Overall lgtm, running E2E to validate the changes in the in-memory provider.
/lgtm
LGTM label has been added. Git tree hash: 5931a1618319937c284bd75a36f1709a484a6c7e
@@ -50,10 +50,10 @@ func (c *cache) startSyncer(ctx context.Context) error {
 		c.syncQueue.ShutDown()
 	}()

-	syncLoopStarted := false
+	syncLoopStarted := make(chan struct{})
I think I would drop this entirely.
This now just checks that we got to l.56. I'm not sure I understand why we are waiting for that; at this point the only guarantee is that the log line was written, which doesn't make sense.
Sounds good to me. I fixed it.
@sivchari 2 smaller findings. Sorry for the misunderstanding here: #10899 (comment)
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/retest
When I remove the syncLoopStarted, other race errors appear.
Hm, not sure why that is, but these errors look entirely unrelated (they are even in a different Go module).
Force-pushed from a93aa1d to ddd99a1
@sivchari Let's please split this PR into two. In the first PR, let's add the flag to all unit tests for which it just works without further modification. And then let's try to address everything else in a second PR. I would really like to have the flag in the
Opened a PR: #11207
@sivchari #11207 will enable the race detector for the main tests. Once #11207 is merged you can rebase, and we can see if we want to add the race detector everywhere. I'm mostly worried about slowing down the test ProwJob. But it's fine if the other test targets slow down the job only a bit.
Force-pushed from ddd99a1 to a832abf
@sbueringer
@sivchari Can you re-add the
@sbueringer
I'm not sure what you mean.
No worries. -race is only set on some test targets. A previous version of this PR was setting it on all. I would like to set it again on all test targets that don't currently have it.
You mean you want to remove the !race tag in all test targets, right?
No, I want to add
A previous version of your PR already had it correctly on all targets. My PR only added it to the most important test targets.
Okay, I got it. Sorry for taking your time.
Thx! No problem at all and no rush! 😀
Signed-off-by: sivchari <shibuuuu5@gmail.com>
Force-pushed from caa47dc to 4d91c48
I re-added the -race flag to each target.
/test pull-cluster-api-test-main
@@ -50,10 +50,8 @@ func (c *cache) startSyncer(ctx context.Context) error {
 		c.syncQueue.ShutDown()
 	}()
@sivchari I took another look. Let's keep this but make it concurrency safe:

l.53: var syncLoopStarted atomic.Bool
l.56: syncLoopStarted.Store(true)
l.85: if !syncLoopStarted.Load() {
@SubhasmitaSw
Thank you for commenting on it. It might be right, but I think it's not necessary. In l.83, we check whether all workers have started via atomic.Load(&workers) < int64(c.syncConcurrency), and I believe that is enough to prevent the data race. Thanks.
The check in l.83++ does not include the sync loop (l.53-l.63).
What this PR does / why we need it:
I added the -race option to the go test command. This option can detect data races.
Which issue(s) this PR fixes (optional, in
fixes #<issue number>(, fixes #<issue_number>, ...)
format, will close the issue(s) when PR gets merged): Fixes #