Cop idea: disallow calling `have_no_XXX` immediately after `visit` (Capybara) #7
Resolve: #7 This PR adds a new `Capybara/NegationMatcherAfterVisit` cop.
I wonder what this means exactly:
As you say, "it would sometimes pass" means that it would fail most of the time. But what is the purpose of a spec that fails most of the time? Adding a positive expectation before a negative one doesn't mean that the negative would always be correct. Not to mention, the same behaviour applies to Wondering if using |
The issue this cop aims to solve is that when a negative check is performed on elements that are loaded dynamically, the check can succeed even when the loading is slow. In other words, when loading is slow, the check returns success immediately, and the motivation is to prevent such failures from going unnoticed in tests.
```ruby
# Given the use of a driver where the page is loaded when `visit` returns,
# and that Capybara.predicates_wait is `true`,
# consider a page where the `a` tag is removed through AJAX after 1s:
visit(some_path)
!page.has_xpath?('a')    # is false
page.has_no_xpath?('a')  # is true
```
Why is `has_no_xxx` unsafe then?
@pirj check my first example in my original post. The second line could execute before the first one has loaded anything. An empty page won't have a link, and that check would pass. That obviously is not correct because …
So we are talking about the following risk:
I agree with the suggestion. But a system built this way feels wacky. As a devil's advocate, we can imagine some weird JS running on a timer that adds and removes elements from the page (making some async server requests along the way), and in that case there is no "complete" state of the page against which we can make assertions.
The reason I wanted to implement this cop was to detect offenses in cases like the following. In an application, the buttons to be displayed change depending on the user's permissions.
```ruby
context 'when current user has admin role' do
  let(:user) { create(:user, role: :admin) }

  it 'shows management button' do
    visit foo_page_url
    expect(page).to have_button 'Management'
  end
end

context 'when current user does not have admin role' do
  let(:user) { create(:user, role: :common) }

  it 'does not show management button' do
    visit foo_page_url
    expect(page).not_to have_button 'Management' # If the display is slow here, it could always succeed 💥
  end
end
```
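A minimal sketch of the fix implied by this discussion, assuming the page also renders some button for every role (the 'Dashboard' button here is hypothetical): putting a positive expectation first makes the later negative expectation run against a page that has actually rendered.
```ruby
context 'when current user does not have admin role' do
  let(:user) { create(:user, role: :common) }

  it 'does not show management button' do
    visit foo_page_url
    # Hypothetical positive anchor: waits until the page has rendered something
    # every role sees, so the negative check below runs against a loaded page.
    expect(page).to have_button 'Dashboard'
    expect(page).not_to have_button 'Management'
  end
end
```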
Yep, exactly my point - the second case has a bad race condition that pretty much leads to your test not testing what you meant - it practically tests nothing (also, I think …)
Note the following code:
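(The snippet itself is not preserved in this extract; a representative reconstruction of the pattern being described, with an assumed path and link name, would be:)
```ruby
visit some_path
expect(page).to have_no_link('Some link') # may run against a blank, not-yet-loaded page and pass
```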
Will pass... sometimes. It's a race-condition as the page may or may not have loaded when execution reaches the second line. It would be nice to have a cop that checks for these cases.
Not sure if it should also handle:
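(That snippet is also missing here; presumably it is the same negative check scoped inside a `within` block, something like the following, with the selector assumed:)
```ruby
visit some_path
within('#sidebar') do
  # `within` waits for '#sidebar' to appear, so the page has at least partially loaded
  expect(page).to have_no_link('Some link')
end
```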
In some sense the page has already (at least partially) loaded because of the `within` check, but on the other hand perhaps always having a positive check before the negative is better for sanity?