
Testing Tolerance.format_number #1380

Merged: 8 commits into main on Jul 18, 2024

Conversation

yck011522 (Contributor)

This is an investigation into the behavior of TOL.format_number on GitHub CI.

The behavior of the function appears to be unstable across environments.

What type of change is this?

  • Bug fix in a backwards-compatible manner.
  • New feature in a backwards-compatible manner.
  • Breaking change: bug fix or new feature that involves incompatible API changes.
  • Other (e.g. doc update, configuration, etc)

Checklist

Put an x in the boxes that apply. You can also fill these out after creating the PR. If you're unsure about any of them, don't hesitate to ask. We're here to help! This is simply a reminder of what we are going to look for before merging your code.

  • I added a line to the CHANGELOG.md file in the Unreleased section under the most fitting heading (e.g. Added, Changed, Removed).
  • I ran all tests on my computer and it's all green (i.e. invoke test).
  • I ran lint on my computer and there are no errors (i.e. invoke lint).
  • I added new functions/classes and made them available on a second-level import, e.g. compas.datastructures.Mesh.
  • I have added tests that prove my fix is effective or that my feature works.
  • I have added the necessary documentation (if appropriate).

codecov bot commented Jul 10, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 60.19%. Comparing base (b23bf7b) to head (31da190).
Report is 22 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1380      +/-   ##
==========================================
+ Coverage   60.11%   60.19%   +0.08%     
==========================================
  Files         207      207              
  Lines       22234    22248      +14     
==========================================
+ Hits        13366    13393      +27     
+ Misses       8868     8855      -13     


yck011522 (Contributor, Author) commented Jul 10, 2024

@tomvanmele
I would like to ask about the default value of TOL.precision.
I assumed it is 3, taken from Tolerance.PRECISION. However, in the test file I created, which failed, the precision is not 3 but 12 in the build (macos-latest, 3.11) context, and I believe in some other contexts too. In my local environment (Windows, Python 3.11), it is 3, as I would expect.

build (macos-latest, 3.11)

from compas.tolerance import TOL
from compas.tolerance import Tolerance


def test_tolerance_default_tolerance():
    assert TOL.precision == Tolerance.PRECISION
    assert TOL.precision == 3


def test_tolerance_format_number_with_default_precision():
    assert TOL.format_number(0) == "0.000"
    assert TOL.format_number(0.5) == "0.500"
    assert TOL.format_number(float(0)) == "0.000"

Error:

=================================== FAILURES ===================================
_______________________ test_tolerance_default_tolerance _______________________
tests/compas/test_tolerance.py:7: in test_tolerance_default_tolerance
    assert TOL.precision == Tolerance.PRECISION
E   AssertionError: assert 12 == 3
E    +  where 12 = Tolerance(unit='M', absolute=1e-09, relative=1e-06, angular=1e-06, approximation=0.001, precision=12, lineardeflection=0.001, angulardeflection=0.1).precision
E    +  and   3 = Tolerance.PRECISION
_____________ test_tolerance_format_number_with_default_precision ______________
tests/compas/test_tolerance.py:18: in test_tolerance_format_number_with_default_precision
    assert TOL.format_number(0) == "0.000"
E   AssertionError: assert '0.000000000000' == '0.000'
E     
E     - 0.000
E     + 0.000000000000
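The failure is consistent with format_number padding to precision decimal places. A minimal sketch of the observed behavior (inferred from the assertions and the failure output above, not from the implementation):

from compas.tolerance import TOL

TOL.precision = 3
print(TOL.format_number(0.5))   # "0.500"

TOL.precision = 12              # the value apparently set elsewhere
print(TOL.format_number(0.5))   # "0.500000000000"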

tomvanmele (Member)

This is happening because one of the earlier tests (test_gltf.py) sets TOL.precision = 12. Since the tolerance object is a singleton, changes to it are always package-wide, so it affects your tests as well. This will be solved if we reset the value to its default at the end of the gltf test script...
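A minimal sketch of that coupling (assuming the compas.tolerance import path used by the tests above):

# TOL is a module-level singleton: every import returns the same
# Tolerance instance, so attribute changes are visible package-wide.
from compas.tolerance import TOL as tol_a
from compas.tolerance import TOL as tol_b

tol_a.precision = 12     # e.g. set inside test_gltf.py
print(tol_a is tol_b)    # True -- one shared object
print(tol_b.precision)   # 12, not the default 3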

tomvanmele (Member)

Try adding this to the end of test_gltf.py to see if it solves the problem.

TOL.precision = TOL.PRECISION

tomvanmele (Member)

And also in test_stl.py.
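An alternative that avoids repeating the manual reset in every affected file would be an autouse pytest fixture in conftest.py (a sketch, not part of this PR; the fixture name is illustrative):

import pytest

from compas.tolerance import TOL
from compas.tolerance import Tolerance


@pytest.fixture(autouse=True)
def restore_default_precision():
    # Let the test run, then put the shared singleton back to its
    # default so no test leaks a modified precision into the next.
    yield
    TOL.precision = Tolerance.PRECISION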

yck011522 marked this pull request as ready for review on July 11, 2024, 06:40
yck011522 (Contributor, Author)

@tomvanmele

Yes, it works once I change the two files as you suggested. Could you check whether you want to merge this PR, including the test_tolerance.py file I created? Best



# Reset the precision to its default value
TOL.precision = 3
Member

I think this should be = TOL.PRECISION.

tomvanmele (Member) left a comment

LGTM

yck011522 merged commit 82af17e into main on Jul 18, 2024
18 checks passed
yck011522 deleted the yck011522/test_tol_format_number branch on July 18, 2024, 06:08