Create utf.r2py #67

Open
aaaaalbert opened this issue Dec 10, 2015 · 4 comments

@aaaaalbert
Contributor

Increasingly often, we want to run unit tests on devices/platforms where there is no shell, no Python interpreter we can use, no direct way to ssh in, or a combination of the above. Testing on Android is a typical example, where setting up a proper shell plus Python is a huge pain and requires you to enable the device's debugging mode in the first place.

For situations like this, there should be a RepyV2 "port" of utf.py that allows us to run RepyV2 unit tests with minimal user interaction. I imagine this working like so:

albert@%all !> uploaddir all_tests
albert@%all !> start utf.r2py -a
albert@%all !> show log
Log from .....:
test1.r2py       [PASS]
test2.r2py       [PASS]
.....

The program should interpret the various #pragma directives just like its Python counterpart, be able to run file-level and module-level tests, respect the subprocess convention, and so on.
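
As a rough illustration (a sketch only, not existing utf.r2py code), collecting pragmas and checking a test's output could look roughly like this in RepyV2. The function names parse_pragmas and output_matches are made up, and only the #pragma out / #pragma error directives known from utf.py are considered:

```
# Minimal sketch, not actual utf.r2py code. Assumes the runner has
# already captured the test's output into a string.

def parse_pragmas(testfilename):
  # RepyV2 file objects are read via readat(sizelimit, offset);
  # a sizelimit of None reads the whole file.
  fileobj = openfile(testfilename, False)
  source = fileobj.readat(None, 0)
  fileobj.close()

  pragmas = {"out": [], "error": []}
  for line in source.split("\n"):
    if line.startswith("#pragma out"):
      pragmas["out"].append(line[len("#pragma out"):].strip())
    elif line.startswith("#pragma error"):
      pragmas["error"].append(line[len("#pragma error"):].strip())
  return pragmas

def output_matches(pragmas, produced_output):
  # Usual convention: with no pragmas, any output at all is a failure.
  if not pragmas["out"] and not pragmas["error"]:
    return produced_output == ""
  # Every expected snippet must show up in the captured output.
  # (Checking #pragma error would need stderr, which a plain vessel
  # log does not separate out.)
  for expected in pragmas["out"]:
    if expected not in produced_output:
      return False
  return True
```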

@aaaaalbert
Contributor Author

Alternatively, how about a seash module?

@aaaaalbert
Contributor Author

I'll summarize discussions with @lukpueh about this issue.

  1. This is supposed to be a tool for checking the correctness of code running inside a remote sandbox. Thus, the candidate set consists mostly of the repy_v2 and seattlelib_v2 tests.
  2. The tool should not attempt to test the nodemanager controlling the test sandbox, nor should it test the interactions performed by seash or other components. These are orthogonal goals (see also UTF: Run nodemanager tests against remote nodemanager? nodemanager#120).
  3. Obviously, the tool won't run pure-Python unit tests, nor will it run setup/subprocess/shutdown scripts as they (mostly) depend on Python functionality.
  4. #pragmas and command-line options should be respected as usual. (Reuse functionality from utf.py where possible!)

What's mentioned above (having a utf.r2py that runs tests on its own vessel, or perhaps a seash module for this functionality) has practical ramifications: the usual protocol is that if a test case prints anything, the test is considered to have failed. utf.py can use plain output redirection to check that; if you run in a sandbox, you can either redirect the log using a security layer above the test case, or inspect the vessel log after the test has run. The former technique is probably what a utf.r2py would do; the latter is apt for a seash module.
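
To make the "redirect the log above the test case" option more concrete, here is a simplified sketch. Instead of a full encasementlib security layer, it simply swaps out log in the context handed to the test's virtual namespace; the names capturing_log and run_and_capture are illustrative, not taken from any existing Seattle code:

```
# Simplified stand-in for a "redirect log above the test case" layer.

captured = [""]   # mutable cell so the replacement log can append to it

def capturing_log(*args):
  for arg in args:
    captured[0] += str(arg)

def run_and_capture(testfilename):
  fileobj = openfile(testfilename, False)
  code = fileobj.readat(None, 0)
  fileobj.close()

  # Give the test a copy of our context with log() replaced, so
  # everything it prints ends up in `captured` instead of the vessel log.
  child_context = _context.copy()
  child_context["log"] = capturing_log

  virt = createvirtualnamespace(code, testfilename)
  virt.evaluate(child_context)

  return captured[0]
```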

I think that overall, the seash module approach has the benefit of being simpler and less error-prone, as there is less code running on the vessel and the number of things that could go wrong is reduced. Potential non-transparency issues in dylink or encasementlib are avoided. On the negative side, checking for output might get more complicated due to --execinfo and such.

@lukpueh
Contributor

lukpueh commented Aug 18, 2016

+1 for the seash module; thinking DRY, this sounds more sensible than porting utf.py to Repy.

Also, I think we could and should take this a step further and decouple the seash module from seash, so that said unit tests can be run programmatically. This could be achieved by updating and using the experimental experimentlib.r2py, where @aaaaalbert has already implemented a lot of the required functionality.
The workflow could be something like this:

  1. Locally run: python utf.py -f <test.r2py> --keys <keys> --vessel <vessel>
  2. utf.py executes python repy.py experimentlib.r2py, e.g. in a subprocess (see the sketch after this list)
  3. experimentlib.r2py runs the seash module
  4. the module uploads the test file to the vessel, executes it, downloads the log, and feeds it to utf.py's checking routine, verify_results
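
A rough sketch of step 2, under the assumption that utf.py grows the new options mentioned above; the function name run_remote_test, the argument passing convention, and the restrictions.default file name are placeholders:

```
import subprocess

def run_remote_test(testfile, keyname, vesselname):
    # Launch the local RepyV2 interpreter with experimentlib.r2py,
    # handing over which test to run and where (sketch only; the
    # argument passing convention is not decided yet).
    cmd = ["python", "repy.py", "restrictions.default",
           "experimentlib.r2py", testfile, keyname, vesselname]
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    out, err = proc.communicate()
    # The log that experimentlib.r2py downloaded from the vessel would
    # then go to utf.py's existing checking routine (verify_results).
    return out, err
```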

And here is what we have to do:

  • Implement upload--run--download-log--reset-log seash module
  • Modify experimentlib.r2py to use seash modules
  • Modify utf.py to accept additional arguments for remote execution and to execute experimentlib.r2py
  • Strip away output generated by verbosity options, e.g. --execinfo
  • Somehow figure out which part of the output was stdout and which was stderr (this might be difficult)
  • Make the output available to utf.py's verify_results function
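
For the verbosity-stripping item, a hypothetical helper might look like the following; the marker strings are assumptions and would have to be adapted to whatever repy.py's --execinfo actually prints:

```
def strip_verbosity(vessel_log, markers=("========", "Running program:")):
    # Drop lines that look like interpreter chatter (the markers are
    # placeholders, not the actual --execinfo format) before handing
    # the remainder to verify_results.
    kept = []
    for line in vessel_log.splitlines():
        if any(line.startswith(marker) for marker in markers):
            continue
        kept.append(line)
    return "\n".join(kept)
```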

@aaaaalbert
Contributor Author

aaaaalbert commented Aug 19, 2016

Thumbs up for thinking this through from the user's perspective too!

I should add that "seash modules" are really implementations of additional seash commands, not modules in a Python sense. Thus, you can't really use a seash module in other code; I did, however, write a few seash modules to pull in functionality from other code! (And indeed, seash's overall engineering is a DRY problem in itself, see SeattleTestbed/seash#103.)

The good news is that we don't require any seash module anyway: experimentlib.r2py already has most of the required functions, so we could start on your proposed modifications to utf.py almost any time.

aaaaalbert added a commit to aaaaalbert/repy-doodles that referenced this issue Feb 6, 2017
This is a doodle in reference to SeattleTestbed/utf#67, "Create utf.r2py".
It provides `testrunner.r2py`, a RepyV2 script that runs its first
callarg inside a virtual namespace and performs `#pragma` checks
similar to what `utf.py` does for unit test cases.

The `runtests` seash module should take care of uploading a directory
full of tests, and then run each test case on the target vessel(s)
using `testrunner`. Ideally, this results in a vessel log full of
"(test case name) PASSED" messages. In less ideal cases, errors are
shown.