Running the pipelines
Once a new analysis (primary or secondary) is created from the web pages, an entry is added to the job_list table.
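The schema of job_list is not detailed on this page; purely as a hypothetical sketch, reading the pending entries could look like the following (the column names, the "pending" status value, and the use of sqlite3 instead of the actual database backend are assumptions made only to keep the example self-contained):

```python
import sqlite3  # assumption: the real installation likely uses a server database, not SQLite

def pending_jobs(db_path):
    """Return all analyses queued in job_list that have not been run yet (hypothetical schema)."""
    with sqlite3.connect(db_path) as conn:
        conn.row_factory = sqlite3.Row
        rows = conn.execute(
            "SELECT id, analysis_type FROM job_list WHERE status = ?", ("pending",)
        ).fetchall()
    return [dict(r) for r in rows]
```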
A script (scripts/HTSflowSubmitter.R), launched by the user, extracts all pending analyses from this table and runs them either in parallel on a cluster or sequentially on the same computer, according to the configuration in the pipeline/BatchJobs.R script. The submitter script takes the path to the HTSflow config file as its argument.
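The actual scheduling is delegated to the BatchJobs configuration; the minimal Python sketch below only illustrates the parallel-versus-sequential choice described above (the function names, the job dictionaries, and the worker count are assumptions, not part of HTSflow):

```python
from multiprocessing import Pool

def run_analysis(job):
    """Placeholder: execute one analysis described by a job_list entry."""
    print(f"running analysis {job['id']}")

def submit_all(jobs, parallel=True, workers=4):
    # Run every pending analysis, either in parallel (cluster / multi-core host)
    # or one after the other on the same machine.
    if parallel:
        with Pool(workers) as pool:
            pool.map(run_analysis, jobs)
    else:
        for job in jobs:
            run_analysis(job)

if __name__ == "__main__":
    submit_all([{"id": 1}, {"id": 2}], parallel=False)
```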
Technical note: the list of jobs to run is extracted from the database. Each time the script starts, it creates a lock file (.lock) in the user directory and removes it before the script ends; if the lock file already exists, the script skips the run entirely.
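A minimal sketch of this lock-file guard, assuming the lock lives directly in the user's home directory and that run_pending_analyses stands in for the real work (both are assumptions for illustration only):

```python
import os

LOCK_FILE = os.path.expanduser("~/.lock")  # assumed name and location of the lock file

def run_pending_analyses():
    """Placeholder for the real work: read job_list and launch each analysis."""
    print("processing pending analyses...")

def main():
    # If a previous run is still active (or crashed without cleanup), skip everything.
    if os.path.exists(LOCK_FILE):
        print("lock file present, skipping this run")
        return
    open(LOCK_FILE, "w").close()      # acquire the lock
    try:
        run_pending_analyses()
    finally:
        os.remove(LOCK_FILE)          # release the lock before the script ends

if __name__ == "__main__":
    main()
```

Note that with this pattern a stale lock file (left over after a crash) must be removed by hand before the submitter will run again.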