
i.eodag: add testsuite #1163

Merged: 18 commits merged into OSGeo:grass8 on Aug 22, 2024
Conversation

HamedElgizery (Contributor):

No description provided.

@ninsbl (Member) left a comment:

Very good to see the testsuite added! My only concern is that some tests are a bit too strict and easily fail with subtle changes on the provider side, even if those changes do not impact the use or users of the module... And then the tests would have to be adjusted every time such a change occurs.
I would thus loosen some criteria. What do @veroandreo and @lucadelu think?

On this hunk in src/imagery/i.eodag/testsuite/test_eodag.py:

    quiet=True,
    stdout_=PIPE,
)
self.assertEqual(i_eodag.outputs["stdout"].value.strip(), output)
Member:

If we have too strict test criteria (here assertEqual for the entire output, or an md5 test for the output file), e.g. reprocessing of the products with a new processing baseline would cause the test to fail. I would probably rather test whether the requested product type is returned, whether the sensing time is within start and end, and things like that...
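
A minimal sketch of such a looser check, continuing the i_eodag naming from the hunk above and assuming the -l listing puts the scene id and sensing datetime in the first two whitespace-separated fields of each stdout line (the column layout is an assumption for illustration, not taken from this PR):

from datetime import datetime

start = datetime(2022, 5, 1)
end = datetime(2022, 6, 1)
for line in i_eodag.outputs["stdout"].value.strip().splitlines():
    # Assumed layout per line: "<scene_id> <sensing_datetime> ..."
    scene_id, sensing = line.split()[:2]
    self.assertTrue(scene_id.startswith("S2"))
    # Keep only "YYYY-MM-DDTHH:MM:SS" so fromisoformat parses it
    # on older Python versions as well
    self.assertTrue(start <= datetime.fromisoformat(sensing[:19]) <= end)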

Contributor:

Yes, just in case something changes on their side (highly likely, based on 10+ years of the Sentinel data being out), I'd check whether the time is within start and end, and whether the product starts with S2 or so... The question is what happens with these tests if the site is down? They will all fail, and it is not a problem of the tool but of the server side...

@HamedElgizery (Contributor, Author) commented on Aug 16, 2024:

> The question is what happens with these tests if the site is down? They will all fail, and it is not a problem of the tool but of the server side...

If we unify all the tests to use "peps" as the provider, maybe we can run a pretty basic test of the module in the setUpClass method and then save whether this test fails in a flag. If it fails, we can always return success in all of the tests, because the server is down or a connection could not be established for some reason; if it passes, we run the tests normally.
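
A minimal sketch of that idea, reusing search options that appear elsewhere in this thread for the probe (that a limit=1 listing is a sufficiently cheap probe is an assumption):

from grass.exceptions import CalledModuleError
from grass.gunittest.case import TestCase
from grass.gunittest.gmodules import SimpleModule


class TestEodag(TestCase):
    server_reachable = True

    @classmethod
    def setUpClass(cls):
        # Cheap probe against peps: if even a basic one-result search
        # fails, assume the provider is unreachable rather than the
        # module being broken.
        probe = SimpleModule(
            "i.eodag",
            flags="l",
            provider="peps",
            producttype="S2_MSI_L1C",
            start="2022-05-01",
            end="2022-06-01",
            limit=1,
        )
        try:
            probe.run()
            cls.server_reachable = probe.returncode == 0
        except CalledModuleError:
            cls.server_reachable = False

    def test_basic_search(self):
        if not self.server_reachable:
            self.skipTest("peps unreachable; server-side problem, not i.eodag")
        # ... the actual assertions go here ...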

Comment on lines 295 to 297
# i.eodag -l provider=peps producttype=S2_MSI_L1C start=2022-05-01 end=2022-06-01 footprints=s2_footprints
# This is not working...?
# v.import seems to complain
Member:

What is the background for these lines?

HamedElgizery (Contributor, Author):

If you try to run this test case individually, it gives an error, or at least that is what happens for me.
But if you run the same command in the terminal, it runs fine, so I am not sure what the issue could be here.

i.eodag -l provider=peps producttype=S2_MSI_L1C start=2022-05-01 end=2022-06-01 map=durham footprints=s2_footprints

Member:

Is it because, in your terminal, you already have something downloaded or extra env vars set?

HamedElgizery (Contributor, Author):

Hi @echoix,
I have tried switching devices and using a virtual environment to test whether this is the issue, but still no luck; maybe there is some other factor that I couldn't eliminate when testing this?

Anyhow, the error seems to be that it couldn't create a temporary location.
Error from v.in.ogr:
ERROR: Unable to create new location <tmp_v_import_location_debian_hp_6941>

@veroandreo (Contributor) left a comment:

I agree with @ninsbl that tests might be too strict, leading them to fail because of changes in the providers and not because the tool is not working. So I wonder if, instead of comparing the stdout output, we could check:

  • for the limit option, whether the number of scenes found is equal to or less than limit, but higher than 0
  • for queryables, whether they exist in the JSON output
  • for cloud cover, whether the values are less than or equal to cloudcover

I'm not a specialist on this, does that make any sense to you guys? @ninsbl @lucadelu ?

Docstrings need to be explicit about what is being tested.

There's a mismatch between the provider in the comments and the provider used in the commands. Since creodias is a special case requiring TOTP, I'd say we stick to peps instead, no?
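
A sketch of those three checks; i_eodag and queryables_module stand for already-run SimpleModule instances, and cloudCover follows eodag's standardized property naming (all of these names are assumptions for illustration, not verified against this PR):

import json

limit = 50       # value used in the commands in this thread
cloudcover = 30  # illustrative threshold

# 1) limit: the number of scenes found is in (0, limit]
scenes = i_eodag.outputs["stdout"].value.strip().splitlines()
self.assertGreater(len(scenes), 0)
self.assertLessEqual(len(scenes), limit)

# 2) queryables: the expected keys exist in the JSON output
queryables = json.loads(queryables_module.outputs["stdout"].value)
self.assertIn("cloudCover", queryables)

# 3) cloud cover: every value in the saved GeoJSON is <= cloudcover
with open("search_s2.geojson") as fh:
    for feature in json.load(fh)["features"]:
        self.assertLessEqual(feature["properties"]["cloudCover"], cloudcover)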

@HamedElgizery marked this pull request as ready for review on August 18, 2024 at 23:08.
@lucadelu (Contributor):

> I agree with @ninsbl that tests might be too strict, leading them to fail because of changes in the providers and not because the tool is not working. So I wonder if, instead of comparing the stdout output, we could check:
>
> • for the limit option, whether the number of scenes found is equal to or less than limit, but higher than 0
> • for queryables, whether they exist in the JSON output
> • for cloud cover, whether the values are less than or equal to cloudcover
>
> I'm not a specialist on this, does that make any sense to you guys? @ninsbl @lucadelu ?
>
> Docstrings need to be explicit about what is being tested.
>
> There's a mismatch between the provider in the comments and the provider used in the commands. Since creodias is a special case requiring TOTP, I'd say we stick to peps instead, no?

It makes sense; however, we would then have to trust the provider, and if it doesn't work our tests will fail. This was reported by @marisn in this comment: #1033 (comment)

@ninsbl (Member) left a comment:

The approach seems very good.
I do have some suggestions for things you can consider. They are not tested, so please handle with care...

@HamedElgizery (Contributor, Author):

> The approach seems very good. I do have some suggestions for things you can consider. They are not tested, so please handle with care...

Thanks @ninsbl, I have applied the suggestions from the code review.
I had to use the skipTest method instead of the decorators to be able to access the available_providers dictionary.
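
For context, a sketch of why the decorator does not work here: unittest.skipIf arguments are evaluated when the class body is executed, before setUpClass has populated the dictionary, whereas skipTest runs inside the test and sees the runtime state (the dictionary contents below are illustrative):

import unittest


class TestEodag(unittest.TestCase):
    available_providers = {}

    @classmethod
    def setUpClass(cls):
        cls.available_providers = {"peps": True}  # filled at runtime

    # Would NOT work as intended: the condition is evaluated at class
    # definition time, when available_providers is still empty.
    # @unittest.skipIf("peps" not in available_providers, "peps down")

    def test_search(self):
        if "peps" not in self.available_providers:
            self.skipTest("peps provider not available")
        # ... assertions ...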

@lucadelu (Contributor):

I ran the tests and got just one failure, because the results directory doesn't exist. I suggest writing the file into the current directory and removing it later.

Traceback (most recent call last):
  File "/home/lucadelu/github/grass-addons/src/imagery/i.eodag/testsuite/test_eodag.py", line 304, in test_save_result_geojson
    self.assertModule(
  File "/usr/lib/grass82/etc/python/grass/gunittest/case.py", line 1422, in assertModule
    self.fail(self._formatMessage(msg, stdmsg))
AssertionError: Running <i.eodag> module ended with non-zero return code (1)
Called (Python): i.eodag(producttype='S2_MSI_L1C', map='durham', limit=50, provider='peps', sort=['cloudcover', 'ingestiondate', 'footprint'], order='asc', start='2022-05-01', end='2022-06-01', save='results/search_s2.geojson', timeout=300, wait=2, flags='l', overwrite=True, quiet=True)
Called (Bash): i.eodag producttype=S2_MSI_L1C map=durham limit=50 provider=peps sort=cloudcover,ingestiondate,footprint order=asc start=2022-05-01 end=2022-06-01 timeout=300 wait=2 save=results/search_s2.geojson -l --o --q
See the following errors:
WARNING: Experimental Version...
WARNING: This module is still under development, and its behaviour is not
         guaranteed to be reliable
Traceback (most recent call last):
  File "/home/lucadelu/.grass8/addons/scripts/i.eodag", line 1369, in <module>
    sys.exit(main())
             ^^^^^^
  File "/home/lucadelu/.grass8/addons/scripts/i.eodag", line 1296, in main
    save_search_result(search_result, options["save"])
  File "/home/lucadelu/.grass8/addons/scripts/i.eodag", line 976, in save_search_result
    dag.serialize(search_result, filename=file_name)
  File "/usr/local/lib/python3.11/dist-packages/eodag/api/core.py", line 1934, in serialize
    with open(filename, "w") as fh:
         ^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: 'results/search_s2.geojson'
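
A sketch of that suggestion, slightly varied to use a temporary directory so the save target always exists and is cleaned up automatically (the option values match the command in the traceback above; the test body is otherwise illustrative):

import os
import tempfile


def test_save_result_geojson(self):
    with tempfile.TemporaryDirectory() as tmpdir:
        out = os.path.join(tmpdir, "search_s2.geojson")
        self.assertModule(
            "i.eodag",
            flags="l",
            provider="peps",
            producttype="S2_MSI_L1C",
            start="2022-05-01",
            end="2022-06-01",
            save=out,
        )
        self.assertTrue(os.path.isfile(out))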

@ninsbl (Member) left a comment:

Changes look good!

Notice that, in addition to the error @lucadelu gets locally, tests are failing early in CI because eodag is not installed for the GitHub action.

The latter can be fixed by adding eodag here:

.github/workflows/extra_requirements.txt

https://github.com/OSGeo/grass-addons/actions/runs/10481142386/job/29030104572?pr=1163#step:17:164

That should be the last hurdle for this PR!
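
For illustration, that fix is a one-line addition to the pip requirements used by the workflow:

# .github/workflows/extra_requirements.txt
eodag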

@ninsbl (Member) commented on Aug 21, 2024:

Tests are failing: https://github.com/OSGeo/grass-addons/actions/runs/10487387701/job/29050543474#step:17:155
We may have to install cs2cs, but there are other reasons for test failures too...

@ninsbl (Member) commented on Aug 21, 2024:

Please try adding proj-bin in .github/workflows/apt.txt. That should address the m.proj issue in the failing test...
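
Again a one-line addition; proj-bin is the Debian/Ubuntu package that ships PROJ's command-line tools such as cs2cs (whether apt.txt allows comment lines is untested, so only the bare package name is shown):

proj-bin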

@HamedElgizery (Contributor, Author):

> Please try adding proj-bin in .github/workflows/apt.txt. That should address the m.proj issue in the failing test...

Thanks, that passed the checks.

@ninsbl (Member) commented on Aug 22, 2024:

Hi @HamedElgizery, unfortunately, with the current release branch (and Python 3.10), some eodag tests fail due to isoformat parsing (with time zone and microseconds)... See: https://github.com/OSGeo/grass-addons/actions/runs/10498051963/job/29082199054?pr=1163#step:17:155

@HamedElgizery (Contributor, Author) commented on Aug 22, 2024:

> Hi @HamedElgizery, unfortunately, with the current release branch (and Python 3.10), some eodag tests fail due to isoformat parsing (with time zone and microseconds)... See: https://github.com/OSGeo/grass-addons/actions/runs/10498051963/job/29082199054?pr=1163#step:17:155

I think it passes now (I found no traces of a failing i.eodag test). The problem was the datetime library in Python 3.10: I had to slightly change the format of the timestamp to be compatible with it, by removing the 'Z' character from the end of the timestamp.
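
For context: datetime.fromisoformat only accepts a trailing 'Z' from Python 3.11 onward; on 3.10 it raises ValueError. A minimal illustration of the workaround described above (the timestamp value is made up):

from datetime import datetime

ts = "2022-05-03T10:46:21.024000Z"
# Python 3.11+: datetime.fromisoformat(ts) works as-is.
# Python 3.10: the trailing 'Z' raises ValueError, so strip it
# (or replace it with an explicit "+00:00" offset).
parsed = datetime.fromisoformat(ts.rstrip("Z"))
print(parsed.isoformat())  # 2022-05-03T10:46:21.024000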

@ninsbl merged commit 2f36e10 into OSGeo:grass8 on Aug 22, 2024. 7 checks passed.
@HamedElgizery deleted the eodag_testsuite branch on August 22, 2024 at 14:11.