
[test-triton.sh] Add an option to unskip all pytest.skip() calls #987

Closed
vlad-penkin opened this issue Apr 26, 2024 · 12 comments · Fixed by #1345
@vlad-penkin
Contributor

Implementation approach

  1. Create a temporary copy of the python/test folder
  2. Replace all pytest.skip() calls with pass, using a script based on the Python ast module that edits the temporary copy
  3. Run the unit tests on the temporary copy
  4. An unskipped run may be slow or hang, so a special watchdog needs to be implemented; the recommended approach is pytest-timeout
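The AST rewrite in step 2 could be sketched as follows. This is a minimal illustration using only the standard library, not the actual script from this issue; the `SkipRemover` class name and the sample source string are hypothetical:

```python
import ast


class SkipRemover(ast.NodeTransformer):
    """Rewrite every bare `pytest.skip(...)` statement into `pass`."""

    def visit_Expr(self, node):
        call = node.value
        if (isinstance(call, ast.Call)
                and isinstance(call.func, ast.Attribute)
                and call.func.attr == "skip"
                and isinstance(call.func.value, ast.Name)
                and call.func.value.id == "pytest"):
            # Replace the whole expression statement with a `pass` statement
            return ast.copy_location(ast.Pass(), node)
        return node


source = (
    "def test_x():\n"
    "    pytest.skip('reason')\n"
    "    assert 1 == 1\n"
)
tree = SkipRemover().visit(ast.parse(source))
ast.fix_missing_locations(tree)
rewritten = ast.unparse(tree)  # ast.unparse requires Python 3.9+
```

A real script would apply this transformer to every file in the temporary copy and write the unparsed source back to disk.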
@AshburnLee
Contributor

AshburnLee commented Jun 11, 2024

Update:

  • Create a temporary copy of the folder
  • Replace all pytest.skip() calls with pass, using a script based on the Python ast module that edits the temporary copy
  • Run the unit tests on the temporary copy
  • An unskipped run may be slow or hang, so a special watchdog needs to be implemented; the recommended approach is pytest-timeout
  • Respond to reviews

@AshburnLee
Contributor

AshburnLee commented Jun 13, 2024

Update:

  • Create a temporary copy of the folder
  • Replace all pytest.skip() calls with pass, using a script based on the Python ast module that edits the temporary copy
  • Run the unit tests on the temporary copy
  • An unskipped run may be slow or hang, so a special watchdog needs to be implemented; the recommended approach is pytest-timeout
  • Respond to reviews

@AshburnLee
Contributor

#1196 does not target this issue.

@vlad-penkin
Contributor Author

@AshburnLee let's implement unskipping via conftest.py. That approach will not require any source code manipulation.

To unskip the tests, we can add the following code in conftest.py:

import pytest


def pytest_configure(config):

    # No-op replacement with the same signature as pytest.skip
    def unskip(reason=None, allow_module_level=False):
        pass

    # Save the original and monkeypatch pytest.skip for the session
    skip_f = pytest.skip
    pytest.skip = unskip
    config._skip_f = skip_f


def pytest_unconfigure(config):
    # Restore the original pytest.skip when the session ends
    pytest.skip = config._skip_f

To run pytest with a custom conftest.py we can use the --confcutdir=<path_to_folder_with_conftest.py> pytest parameter.

@AshburnLee
Contributor

AshburnLee commented Jun 14, 2024

This seems like a better solution, let me try it. Thanks @vlad-penkin

@AshburnLee
Contributor

AshburnLee commented Jun 14, 2024

@vlad-penkin I tried your way; it partially works.
--confcutdir is not used the way you described. --confcutdir lets you specify a directory at which pytest stops searching upward for configuration files, while conftest.py acts more like a hook.
So I think conftest.py and --confcutdir are two different things that are not supposed to be used together the way you described.

conftest.py always takes effect for test scripts in its own directory and all subdirectories.
So here is my improved implementation: #1345

Please review, thanks

@AshburnLee
Contributor

@pbchekin I also wired this up in CI; please have a look at .github/workflows/build-test.yml
Thanks!

@AshburnLee
Contributor

AshburnLee commented Jun 19, 2024

Update:

Progress: the task started on 6/11 and review started on 6/14 (4 workdays so far).
Risk: pytest-timeout does not work fully as expected; I spent 1.5 workdays experimenting with it.
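For reference, a minimal pytest-timeout configuration might look like the following (the 300-second value and the use of pytest.ini are illustrative assumptions, not the settings used in this repo):

```ini
# pytest.ini -- requires the pytest-timeout plugin to be installed
[pytest]
# fail any single test that runs longer than 300 seconds
timeout = 300
# use a timer thread rather than SIGALRM, so hangs inside C code are also caught
timeout_method = thread
```

The same settings can also be passed on the command line (--timeout, --timeout-method) or per test via the @pytest.mark.timeout marker.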

@AshburnLee
Contributor

AshburnLee commented Jun 22, 2024

Update:

Risk: unskip mode and skip mode (default) report different numbers of total test cases (passed and xfail in the language cases)

unskip vs default (interpreter):

unskip:  passed: 6222, failed: 31, skipped: 0, xfailed: 703, total: 6956, fixme: 1, pass rate (w/o xfailed): 99.5%
default: passed: 6221, failed: 0, skipped: 32, xfailed: 703, total: 6956, fixme: 1, pass rate (w/o xfailed): 99.49%

  • correct
  • 31 failed & 1 passed <= 32 skipped [the 1 passed: test_precise_math[1-tl.math.sqrt_rn(x)-tl.math.sqrt(x.to(tl.float64)).to(tl.float32)]]
  • total == total: 6956 == 6956
  • passed >= passed: 6222 >= 6221

unskip vs default (operators):

unskip:  passed: 961, failed: 0, skipped: 3, xfailed: 0, total: 964, fixme: 0, pass rate (w/o xfailed): 99.69% # done: 964-3=961
default: passed: 958, failed: 0, skipped: 3, xfailed: 0, total: 961, fixme: 0, pass rate (w/o xfailed): 99.69% # 961

  • FIX: remove read from select file
  • total == total: 964-3 == 961
  • passed >= passed: 961 >= 958

unskip vs default (subprocess):

unskip:  passed: 22, failed: 12, skipped: 12, xfailed: 0, total: 46, fixme: 0, pass rate (w/o xfailed): 47.83%
default: passed: 22, failed: 0, skipped: 12, xfailed: 0, total: 34, fixme: 0, pass rate (w/o xfailed): 64.71%

  • FIX: remove read from select file
  • total == total: 46-12 == 34
  • passed >= passed: 22 >= 22

unskip vs default (language):

unskip:  passed: 9705, failed: 68, skipped: 181, xfailed: 423, total: 10377

  • 180 of the skipped cases come from the skiplist (FIX: remove read from select file); the remaining skip comes from pytest.mark.skipif, so it should be skipped: 1

default: passed: 10332, failed: 0, skipped: 349, xfailed: 521, total: 11202

  • skipped 349 == 169 (actual number of skipped in the log) + 180 (skiplist); NOTE: 1 (pytest.mark.skipif) [extra 1: test_reproducer]

Todo:

Investigate what is happening in the language suite.

@AshburnLee
Contributor

AshburnLee commented Jun 26, 2024

  • Conclusion:
    In unskip mode, some unskipped cases cause core dumps, which terminate the pytest session. The session therefore stops before reaching 100%, and some cases are never executed. That is the root cause.

  • Fix
    A fix commit is in the PR; it has been validated locally, so this issue is now fixed.

  • Addition
    There are 304 core-dump cases that can be uploaded if needed.
    LTS & Rolling have the same core-dump cases.

@AshburnLee
Contributor

AshburnLee commented Jun 27, 2024

@pbchekin
Contributor

We don't need the list of failed tests in the (private) wiki. Just run the workflow and check the "Upload test reports" checkbox.
