This neat little program gives an easy way to control the running of experiments. You can add arbitrary tasks, which are kept in a database. When a task is run, its output is copied to the database for easy review. This way you can quickly add multiple commands, run them, and check the output later, without worrying that data will be lost. To use it, Python 2, SQLite 3, and the Python SQLite bindings are needed.
To use pec, invoke it from your favourite shell with ./pec.py [options]. The options and their effects are:
--cli=cmd | Passes the given command to the interactive command interpreter and exits; see the next section for more information on the possible commands;
--database=file | The database file to use; the default is ./db.sqlite;
--help | Shows this help message;
--runner | Starts the daemon which will run the experiments.
Each long option --opt also has a short version -o. To use the short version with an argument, replace the = by a space.
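For example, the following two invocations are equivalent (the list command itself is described in the next section):
./pec.py --cli="list"
./pec.py -c "list"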
If no option is given, an interactive shell is loaded, in which the following commands can be executed. All of them can also be run from the command line with the --cli=... argument.
add task | Adds the given task to the database;
execute id | Executes the task(s) with the given id(s);
help [cmd] | Lists the available commands, or gives detailed help about the given command;
list | Lists all tasks in the database;
listdone | Lists the tasks in the database that have been completed;
listtodo | Lists the tasks in the database that have not been executed;
remove id | Removes the task(s) with the given id(s) from the database;
reset id | Resets the task(s) with the given id(s), clearing their running information.
The interactive shell supports basic tab-completion for the commands.
Wherever a command expects a task id, multiple ids can be given by separating them with commas, and ranges can be specified with dashes; e.g. 3,4-6,8 identifies tasks three, four, five, six, and eight.
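For instance, to reset exactly those tasks in a single invocation:
./pec.py --cli="reset 3,4-6,8"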
Let's say we want to ping a number of hosts. We add the tasks by invoking the add command with the task invocation. Here, we'll ping some interesting host 5 times:
./pec.py --cli="add ping -c5 interesting_host_or_IP_address"
When we are done adding tasks, we can run them by starting the daemon, which will run the tasks in order of their ids:
./pec.py --runner
For each task, the daemon spawns a separate process to execute it, i.e. it calls ./pec.py -c "execute id". This ensures that if the daemon is stopped, the output of the running tasks is still collected.
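Assuming the --database option also applies to the daemon (it is documented above as a general option), separate sets of experiments can be kept in separate files; the filename here is only illustrative:
./pec.py --database=experiments.sqlite --runner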
Note that only the standard output of a task is stored in the database; error messages are written to the terminal. Should you want to catch those messages as well, append something like 2>&1 to the command of your task, to redirect stderr to stdout.
For example, the time program, which runs a given command and then prints the time it took, outputs its information on stderr. So to also keep track of the running time of the ping task added above, we'd call:
./pec.py -c "add time (ping -c5 interesting_host_or_IP_address) 2>&1"
By default, the daemon will run two tasks in parallel.
This behaviour is controlled by the thread_count field of the pec_meta table in the database, and currently the only way to change it is by directly manipulating that value. To change it to, for example, 8, try:
sqlite3 db.sqlite "UPDATE pec_meta SET value=8 WHERE name='thread_count';"
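To inspect the current value, query the same table (assuming the schema implied by the UPDATE above, with name and value columns):
sqlite3 db.sqlite "SELECT value FROM pec_meta WHERE name='thread_count';"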