diff --git a/.github/workflows/build-windows-executable-app.yaml b/.github/workflows/build-windows-executable-app.yaml index f49480c..79c0d99 100644 --- a/.github/workflows/build-windows-executable-app.yaml +++ b/.github/workflows/build-windows-executable-app.yaml @@ -63,12 +63,16 @@ jobs: key: ${{ runner.os }}-contrib3 - name: Load contrib build - if: steps.cache-contrib-win.outputs.cache-hit != 'true' + env: + GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} run: | - cd OpenMS/contrib - curl -o contribbld.tar.gz https://abibuilder.cs.uni-tuebingen.de/archive/openms/contrib/windows/x64/msvc-14.2/contrib_build.tar.gz - tar -xzf contribbld.tar.gz - rm contribbld.tar.gz + cd OpenMS/contrib + # Download the contrib build archive from the latest GitHub release + gh release download -R OpenMS/contrib --pattern 'contrib_build-Windows.tar.gz' + # Extract the archive + 7z x -so contrib_build-Windows.tar.gz | 7z x -si -ttar + rm contrib_build-Windows.tar.gz + ls - name: Setup ccache cache uses: actions/cache@v3 diff --git a/README.md b/README.md index c71a222..06f324d 100644 --- a/README.md +++ b/README.md @@ -1,73 +1,27 @@ -# OpenMS streamlit template [![Open Template!](https://static.streamlit.io/badges/streamlit_badge_black_white.svg)](https://abi-services.cs.uni-tuebingen.de/streamlit-template/) +# OpenMS streamlit template -This is a template app for OpenMS workflows in a web application. +[![Open Template!](https://static.streamlit.io/badges/streamlit_badge_black_white.svg)](https://abi-services.cs.uni-tuebingen.de/streamlit-template/) -## Requires - -Python <= 3.10 +This repository contains a template app for OpenMS workflows in a web application using the **streamlit** framework. It serves as a foundation for apps ranging from simple workflows with **pyOpenMS** to complex workflows utilizing **OpenMS TOPP tools** with parallel execution. It includes solutions for handling user data and parameters in workspaces as well as deployment with docker-compose.
## Features - Workspaces for user data with unique shareable IDs +- Persistent parameters and input files within a workspace +- Local and online mode - Captcha control -- Deployment with Docker -- Packaged executables for Windows (built via GitHub action) -- Automatic removal of unused workspaces -- Parameters persist within a workspace - -## Key concepts - -**Workspaces** - -Directories where all data generated and uploaded can be stored as well as a workspace specific parameter file. - -**Run the app locally and online** - -Launching the app with the `local` argument let's the user create/remove workspaces. In the online the user gets a workspace with a specific ID. - -**Parameters** - -Parameters (defaults in `assets/default-params.json`) store changing parameters for each workspace. Parameters are loaded via the page_setup function at the start of each page. To track a widget variable via parameters simply give them a key and add a matching entry in the default parameters file. Initialize a widget value from the params dictionary. - -```python -params = page_setup() - -st.number_input(label="x dimension", min_value=1, max_value=20, -value=params["example-y-dimension"], step=1, key="example-y-dimension") - -save_params() -``` +- Packaged executables for Windows +- Framework for workflows with OpenMS TOPP tools +- Deployment [with docker-compose](https://github.com/OpenMS/streamlit-deployment) +## Documentation -## Code structure +Documentation for **users** and **developers** is included as pages in [this template app](https://abi-services.cs.uni-tuebingen.de/streamlit-template/), indicated by the πŸ“– icon. -- **Pages** must be placed in the `pages` directory. -- It is recommended to use a separate file for defining functions per page in the `src` directory. -- The `src/common.py` file contains a set of useful functions for common use (e.g. rendering a table with download button). +## References -## App layout +- Pfeuffer, J., Bielow, C., Wein, S. et al.
OpenMS 3 enables reproducible analysis of large-scale mass spectrometry data. Nat Methods 21, 365–367 (2024). [https://doi.org/10.1038/s41592-024-02197-7](https://doi.org/10.1038/s41592-024-02197-7) -- Main page contains explanatory text on how to use the app and a workspace selector. -- Sidebar contains the OpenMS logo, settings panel and a workspace indicator. The main page contains a workspace selector as well. -- See pages in the template app for example use cases. The content of this app serves as a documentation. +- RΓΆst HL, Schmitt U, Aebersold R, MalmstrΓΆm L. pyOpenMS: a Python-based interface to the OpenMS mass-spectrometry algorithm library. Proteomics. 2014 Jan;14(1):74-7. [https://doi.org/10.1002/pmic.201300246](https://doi.org/10.1002/pmic.201300246). PMID: [24420968](https://pubmed.ncbi.nlm.nih.gov/24420968/). -## Modify the template to build your own app -- in `src/common.py` update the name of your app and the repository name -- in `clean-up-workspaces.py` update the name of the workspaces directory to `/workspaces-` - - e.g. 
for the streamlit template it's "/workspaces-streamlit-template" -- chose one of the Dockerfiles depending on your use case: - - `Dockerfile` build OpenMS including TOPP tools - - `Dockerfile_simple` uses pyOpenMS only -- update the Dockerfile: - - with the `GITHUB_USER` owning the streamlit app repository - - with the `GITHUB_REPO` name of the streamlit app repository - - if your main streamlit file is not called `app.py` modfify the following line - - `RUN echo "mamba run --no-capture-output -n streamlit-env streamlit run app.py" >> /app/entrypoint.sh` -- update Python package dependency files: - - `requirements.txt` if using `Dockerfile_simple` - - `environment.yml` if using `Dockerfile` -- update `README.md` -- for the Windows executable package: - - update `datas` in `run_app_temp.spec` with the Python packages required for your app - - update main streamlit file name to run in `run_app.py` \ No newline at end of file diff --git a/app.py b/app.py index 90e9769..270fe3e 100644 --- a/app.py +++ b/app.py @@ -33,10 +33,24 @@ def main(): """ Display main page content. """ - st.title("Template App") - st.markdown("## A template for an OpenMS streamlit app.") + st.title("OpenMS Streamlit Template App") + st.info(""" +This repository contains a template app for OpenMS workflows in a web application using the **streamlit** framework. It serves as a foundation for apps ranging from simple workflows with **pyOpenMS** to complex workflows utilizing **OpenMS TOPP tools** with parallel execution. It includes solutions for handling user data and parameters in workspaces as well as deployment with docker-compose. 
+""") + st.subheader("Features") + st.markdown(""" +- Workspaces for user data with unique shareable IDs +- Persistent parameters and input files within a workspace +- Captcha control +- Packaged executables for Windows +- framework for workflows with OpenMS TOPP tools +- Deployment [with docker-compose](https://github.com/OpenMS/streamlit-deployment) +""") + st.subheader("Quick Start") if Path("OpenMS-App.zip").exists(): - st.markdown("## Installation") + st.markdow(""" +Download the latest version for Windows here by clicking the button below. +""") with open("OpenMS-App.zip", "rb") as file: st.download_button( label="Download for Windows", @@ -45,6 +59,14 @@ def main(): mime="archive/zip", type="primary", ) + st.markdown(""" +Extract the zip file and run the executable (.exe) file to launch the app. Since every dependency is compressed and packacked the app will take a while to launch (up to one minute). +""") + st.markdown(""" +Check out the documentation for **users** and **developers** is included as pages indicated by the πŸ“– icon + +Try the example pages **πŸ“ mzML file upload**, **πŸ‘€ visualization** and **example workflows**. +""") save_params(params) diff --git "a/pages/0_\360\237\223\226_Installation.py" "b/pages/0_\360\237\223\226_Installation.py" new file mode 100644 index 0000000..82070e0 --- /dev/null +++ "b/pages/0_\360\237\223\226_Installation.py" @@ -0,0 +1,61 @@ +import streamlit as st +from pathlib import Path +from src.common import page_setup + +page_setup() + +if Path("OpenMS-App.zip").exists(): + st.markdown(""" +Download the latest version for **Windows** here clicking the button below. +""") + with open("OpenMS-App.zip", "rb") as file: + st.download_button( + label="Download for Windows", + data=file, + file_name="OpenMS-App.zip", + mime="archive/zip", + type="primary", + ) + +st.markdown(""" +# Installation + +## Windows + +The app is available as pre-packaged Windows executable, including all dependencies. 
+ +The Windows executable is built by a GitHub action and can be downloaded [here](https://github.com/OpenMS/streamlit-template/actions/workflows/build-windows-executable-app.yaml). +Select the latest successful run and download the zip file from the artifacts section, while signed in to GitHub. + +## Python + +Clone the [streamlit-template repository](https://github.com/OpenMS/streamlit-template). It includes files to install dependencies via pip or conda. + +### Via pip in an existing Python environment + +To install all required dependencies via pip in an already existing Python environment, run the following command in the terminal: + +`pip install -r requirements.txt` + +### Create a new environment via conda/mamba + +Create and activate the conda environment: + +`conda env create -f environment.yml` + +`conda activate streamlit-env` + +### Run the app + +Run the app via the streamlit command in the terminal with or without *local* mode (default is *online* mode). Learn more about *local* and *online* mode in the documentation page πŸ“– **OpenMS Template App**. + +`streamlit run app.py [local]` + +## Docker + +This repository contains two Dockerfiles. + +1. `Dockerfile`: This Dockerfile builds all dependencies for the app including Python packages and the OpenMS TOPP tools. Recommended for more complex workflows where you want to use the OpenMS TOPP tools, for instance with the **TOPP Workflow Framework**. +2. `Dockerfile_simple`: This Dockerfile builds only the Python packages. Recommended for simple apps using pyOpenMS only.
+ +""") \ No newline at end of file diff --git "a/pages/0_\360\237\223\201_File_Upload.py" "b/pages/10_\360\237\223\201_File_Upload.py" similarity index 100% rename from "pages/0_\360\237\223\201_File_Upload.py" rename to "pages/10_\360\237\223\201_File_Upload.py" diff --git "a/pages/1_\360\237\221\200_View_Raw_Data.py" "b/pages/11_\360\237\221\200_View_Raw_Data.py" similarity index 100% rename from "pages/1_\360\237\221\200_View_Raw_Data.py" rename to "pages/11_\360\237\221\200_View_Raw_Data.py" diff --git a/pages/2_Simple_Workflow.py b/pages/12_Simple_Workflow.py similarity index 100% rename from pages/2_Simple_Workflow.py rename to pages/12_Simple_Workflow.py diff --git a/pages/4_Run_subprocess.py b/pages/13_Run_subprocess.py similarity index 100% rename from pages/4_Run_subprocess.py rename to pages/13_Run_subprocess.py diff --git a/pages/5_TOPP-Workflow.py b/pages/14_TOPP-Workflow.py similarity index 100% rename from pages/5_TOPP-Workflow.py rename to pages/14_TOPP-Workflow.py diff --git a/pages/3_Workflow_with_mzML_files.py b/pages/15_Workflow_with_mzML_files.py similarity index 96% rename from pages/3_Workflow_with_mzML_files.py rename to pages/15_Workflow_with_mzML_files.py index 5b7a240..7bbaf52 100755 --- a/pages/3_Workflow_with_mzML_files.py +++ b/pages/15_Workflow_with_mzML_files.py @@ -5,7 +5,7 @@ from pathlib import Path from src.common import page_setup, save_params, show_fig, show_table -from src import complexworkflow +from src import mzmlfileworkflow from src.captcha_ import captcha_control @@ -48,7 +48,7 @@ if run_workflow_button: params = save_params(params) if params["example-workflow-selected-mzML-files"]: - complexworkflow.run_workflow(params, result_dir) + mzmlfileworkflow.run_workflow(params, result_dir) else: st.warning("Select some mzML files.") diff --git "a/pages/1_\360\237\223\226_User_Guide.py" "b/pages/1_\360\237\223\226_User_Guide.py" new file mode 100644 index 0000000..dc3bd6a --- /dev/null +++ 
"b/pages/1_\360\237\223\226_User_Guide.py" @@ -0,0 +1,52 @@ +import streamlit as st +from src.common import page_setup + +page_setup() + +st.markdown(""" +# User Guide + +Welcome to the OpenMS Streamlit Web Application! This guide will help you understand how to use our tools effectively. + +## Advantages of OpenMS Web Apps + +OpenMS web applications provide a user-friendly interface for accessing the powerful features of OpenMS. Here are a few advantages: +- **Accessibility**: Access powerful OpenMS algorithms and TOPP tools from any device with a web browser. +- **Ease of Use**: Simplified user interface makes it easy for both beginners and experts to perform complex analyses. +- **No Installation Required**: Use the tools without the need to install OpenMS locally, saving time and system resources. + +## Workspaces + +In the OpenMS web application, workspaces are designed to keep your analysis organized: +- **Workspace Specific Parameters and Files**: Each workspace stores parameters and files (uploaded input files and results from workflows). +- **Persistence**: Your workspaces and parameters are saved, so you can return to your analysis anytime and pick up where you left off. + +## Online and Local Mode Differences + +There are a few key differences between operating in online and local modes: +- **File Uploads**: + - *Online Mode*: You can upload only one file at a time. This helps manage server load and optimizes performance. + - *Local Mode*: Multiple file uploads are supported, giving you flexibility when working with large datasets. +- **Workspace Access**: + - In online mode, workspaces are stored temporarily and will be cleared after seven days of inactivity. + - In local mode, workspaces are saved on your local machine, allowing for persistent storage. 
+ +## Downloading Results + +You can download the results of your analyses, including figures and tables, directly from the application: +- **Figures**: Click the camera icon button that appears when hovering over the top right corner of the figure. Set the desired image format in the settings panel in the sidebar. +- **Tables**: Use the download button, which appears when hovering over the top right corner of the table, to save tables in *csv* format. + +## Getting Started + +To get started: +1. Select or create a new workspace. +2. Upload your data file. +3. Set the necessary parameters for your analysis. +4. Run the analysis. +5. View and download your results. + +For more detailed information on each step, refer to the specific sections of this guide. +""") + + diff --git "a/pages/2_\360\237\223\226_Build_App.py" "b/pages/2_\360\237\223\226_Build_App.py" new file mode 100644 index 0000000..bf00539 --- /dev/null +++ "b/pages/2_\360\237\223\226_Build_App.py" @@ -0,0 +1,83 @@ +import streamlit as st + +from src.common import page_setup + +page_setup() + +st.markdown(""" +# Build your own app based on this template + +## App layout + +- The *main page* (`app.py`) contains explanatory text on how to use the app and a workspace selector. +- *Pages* can be navigated via *Sidebar*. Sidebar also contains the OpenMS logo, settings panel and a workspace indicator. The *main page* contains a workspace selector as well. +- See *pages* in the template app for example use cases. The content of this app serves as documentation. + +## Key concepts + +- **Workspaces** +: Directories where all generated and uploaded data is stored, as well as a workspace-specific parameter file. +- **Run the app locally and online** +: Launching the app with the `local` argument lets the user create/remove workspaces. In online mode the user gets a workspace with a specific ID. +- **Parameters** +: Parameters (defaults in `assets/default-params.json`) store changing parameters for each workspace.
Parameters are loaded via the page_setup function at the start of each page. To track a widget variable via parameters, simply give it a key and add a matching entry in the default parameters file. Initialize a widget value from the params dictionary. + +```python +params = page_setup() + +st.number_input(label="y dimension", min_value=1, max_value=20, +value=params["example-y-dimension"], step=1, key="example-y-dimension") + +save_params() +``` + +## Code structure + +- **Pages** must be placed in the `pages` directory. +- It is recommended to use a separate file for defining functions per page in the `src` directory. +- The `src/common.py` file contains a set of useful functions for common use (e.g. rendering a table with a download button). + +## Modify the template to build your own app + +1. In `src/common.py`, update the name of your app and the repository name + ```python + APP_NAME = "OpenMS Streamlit App" + REPOSITORY_NAME = "streamlit-template" + ``` +2. In `clean-up-workspaces.py`, update the name of the workspaces directory to `/workspaces-` + ```python + workspaces_directory = Path("/workspaces-streamlit-template") + ``` +3. Update `README.md` accordingly + + +**Dockerfile-related** +1. Choose one of the Dockerfiles depending on your use case: + - `Dockerfile` builds OpenMS including TOPP tools + - `Dockerfile_simple` uses pyOpenMS only +2. Update the Dockerfile: + - with the `GITHUB_USER` owning the Streamlit app repository + - with the `GITHUB_REPO` name of the Streamlit app repository + - if your main page Python file is not called `app.py`, modify the following line + ```dockerfile + RUN echo "mamba run --no-capture-output -n streamlit-env streamlit run app.py" >> /app/entrypoint.sh + ``` +3. Update Python package dependency files: + - `requirements.txt` if using `Dockerfile_simple` + - `environment.yml` if using `Dockerfile` + +**For the Windows executable package** +1.
Update `datas` in `run_app_temp.spec` with the Python packages required for your app +2. Update the main Streamlit file name to run in `run_app.py` + +## How to build a workflow + +### Simple workflow using pyOpenMS + +Take a look at the example pages `Simple Workflow` or `Workflow with mzML files` for examples (on the *sidebar*). Put Streamlit logic inside the pages and call the functions with workflow logic from the `src` directory (for our examples `src/simple_workflow.py` and `src/mzmlfileworkflow.py`). + +### Complex workflow using TOPP tools + +This template app features a module in `src/workflow` that allows for complex and long workflows to be built very efficiently. Check out the `TOPP Workflow Framework` page for more information (on the *sidebar*). +""") + diff --git "a/pages/6_\360\237\223\226_TOPP-Workflow_Docs.py" "b/pages/3_\360\237\223\226_TOPP_Workflow_Framework.py" similarity index 63% rename from "pages/6_\360\237\223\226_TOPP-Workflow_Docs.py" rename to "pages/3_\360\237\223\226_TOPP_Workflow_Framework.py" index e3405f2..677f02c 100644 --- "a/pages/6_\360\237\223\226_TOPP-Workflow_Docs.py" +++ "b/pages/3_\360\237\223\226_TOPP_Workflow_Framework.py" @@ -10,7 +10,7 @@ wf = Workflow() -st.title("πŸ“– Workflow Framework Docs") +st.title("πŸ“– TOPP Workflow Framework Documentation") st.markdown( """ @@ -23,24 +23,9 @@ - workflow output updates automatically in short intervalls - user can leave the app and return to the running workflow at any time - quickly build a workflow with multiple steps channelling files between steps -# """ ) -with st.expander("**Example User Interface**", True): - t = st.tabs(["πŸ“ **File Upload**", "βš™οΈ **Configure**", "πŸš€ **Run**", "πŸ“Š **Results**"]) - with t[0]: - wf.show_file_upload_section() - - with t[1]: - wf.show_parameter_section() - - with t[2]: - wf.show_execution_section() - - with t[3]: - wf.show_results_section() - st.markdown( """ ## Quickstart @@ -117,26 +102,26 @@ **3.
Choose `self.ui.input_python` to automatically generate complete input sections for a custom Python tool:** Takes the obligatory **script_file** argument. The default location for the Python script files is in `src/python-tools` (in this case the `.py` file extension is optional in the **script_file** argument), however, any other path can be specified as well. Parameters need to be specified in the Python script in the **DEFAULTS** variable with the mandatory **key** and **value** parameters. - -Here are the options to use as dictionary keys for parameter definitions (see `src/python-tools/example.py` for an example): - -Mandatory keys for each parameter -- **key:** a unique identifier -- **value:** the default value - -Optional keys for each parameter -- **name:** the name of the parameter -- **hide:** don't show the parameter in the parameter section (e.g. for **input/output files**) -- **options:** a list of valid options for the parameter -- **min:** the minimum value for the parameter (int and float) -- **max:** the maximum value for the parameter (int and float) -- **step_size:** the step size for the parameter (int and float) -- **help:** a description of the parameter -- **widget_type:** the type of widget to use for the parameter (default: auto) -- **advanced:** whether or not the parameter is advanced (default: False) - """) +with st.expander("Options to use as dictionary keys for parameter definitions (see `src/python-tools/example.py` for an example)"): + st.markdown(""" +**Mandatory** keys for each parameter +- *key:* a unique identifier +- *value:* the default value + +**Optional** keys for each parameter +- *name:* the name of the parameter +- *hide:* don't show the parameter in the parameter section (e.g. 
for **input/output files**) +- *options:* a list of valid options for the parameter +- *min:* the minimum value for the parameter (int and float) +- *max:* the maximum value for the parameter (int and float) +- *step_size:* the step size for the parameter (int and float) +- *help:* a description of the parameter +- *widget_type:* the type of widget to use for the parameter (default: auto) +- *advanced:* whether or not the parameter is advanced (default: False) +""") + st.code( getsource(Workflow.configure) ) @@ -201,7 +186,7 @@ """) st.code(""" -self.executor.run_command(["command", "arg1", "arg2", ...], write_log=True) +self.executor.run_command(["command", "arg1", "arg2", ...]) """) st.markdown( @@ -266,134 +251,4 @@ st.help(CommandExecutor.run_command) st.help(CommandExecutor.run_multiple_commands) st.help(CommandExecutor.run_topp) - st.help(CommandExecutor.run_python) - -with st.expander("**Example output of the complete example workflow**"): - st.code(""" -STARTING WORKFLOW - -Number of input mzML files: 2 - -Running 2 commands in parallel... - -Running command: -FeatureFinderMetabo -in ../workspaces-streamlit-template/default/topp-workflow/input-files/mzML-files/Treatment.mzML -out ../workspaces-streamlit-template/default/topp-workflow/results/feature-detection/Treatment.featureXML -algorithm:common:chrom_peak_snr 4.0 -algorithm:common:noise_threshold_int 1000.0 -Waiting for command to finish... - -Running command: -FeatureFinderMetabo -in ../workspaces-streamlit-template/default/topp-workflow/input-files/mzML-files/Control.mzML -out ../workspaces-streamlit-template/default/topp-workflow/results/feature-detection/Control.featureXML -algorithm:common:chrom_peak_snr 4.0 -algorithm:common:noise_threshold_int 1000.0 -Waiting for command to finish... 
- -Process finished: -FeatureFinderMetabo -in ../workspaces-streamlit-template/default/topp-workflow/input-files/mzML-files/Treatment.mzML -out ../workspaces-streamlit-template/default/topp-workflow/results/feature-detection/Treatment.featureXML -algorithm:common:chrom_peak_snr 4.0 -algorithm:common:noise_threshold_int 1000.0 -Total time to run command: 0.55 seconds - -Progress of 'loading mzML': - Progress of 'loading spectra list': - - 89.06 % - -- done [took 0.17 s (CPU), 0.17 s (Wall)] -- - Progress of 'loading chromatogram list': - - -- done [took 0.00 s (CPU), 0.00 s (Wall)] -- - --- done [took 0.18 s (CPU), 0.18 s (Wall) @ 40.66 MiB/s] -- -Progress of 'mass trace detection': - --- done [took 0.01 s (CPU), 0.01 s (Wall)] -- -Progress of 'elution peak detection': - --- done [took 0.07 s (CPU), 0.07 s (Wall)] -- -Progress of 'assembling mass traces to features': -Loading metabolite isotope model with 5% RMS error - --- done [took 0.04 s (CPU), 0.04 s (Wall)] -- --- FF-Metabo stats -- -Input traces: 1382 -Output features: 1095 (total trace count: 1382) -FeatureFinderMetabo took 0.47 s (wall), 0.90 s (CPU), 0.43 s (system), 0.47 s (user); Peak Memory Usage: 88 MB. 
- - -Process finished: -FeatureFinderMetabo -in ../workspaces-streamlit-template/default/topp-workflow/input-files/mzML-files/Control.mzML -out ../workspaces-streamlit-template/default/topp-workflow/results/feature-detection/Control.featureXML -algorithm:common:chrom_peak_snr 4.0 -algorithm:common:noise_threshold_int 1000.0 -Total time to run command: 0.60 seconds - -Progress of 'loading mzML': - Progress of 'loading spectra list': - - 77.09 % - -- done [took 0.16 s (CPU), 0.16 s (Wall)] -- - Progress of 'loading chromatogram list': - - -- done [took 0.00 s (CPU), 0.00 s (Wall)] -- - --- done [took 0.17 s (CPU), 0.17 s (Wall) @ 43.38 MiB/s] -- -Progress of 'mass trace detection': - --- done [took 0.02 s (CPU), 0.02 s (Wall)] -- -Progress of 'elution peak detection': - --- done [took 0.07 s (CPU), 0.07 s (Wall)] -- -Progress of 'assembling mass traces to features': -Loading metabolite isotope model with 5% RMS error - --- done [took 0.05 s (CPU), 0.05 s (Wall)] -- --- FF-Metabo stats -- -Input traces: 1521 -Output features: 1203 (total trace count: 1521) -FeatureFinderMetabo took 0.51 s (wall), 0.90 s (CPU), 0.45 s (system), 0.45 s (user); Peak Memory Usage: 88 MB. - - -Total time to run 2 commands: 0.60 seconds - -Running command: -python src/python-tools/example.py ../workspaces-streamlit-template/default/topp-workflow/example.json -Waiting for command to finish... - -Process finished: -python src/python-tools/example.py ../workspaces-streamlit-template/default/topp-workflow/example.json -Total time to run command: 0.04 seconds - -Writing stdout which will get logged... 
-Parameters for this example Python tool: -{ - "in": [ - "../workspaces-streamlit-template/default/topp-workflow/input-files/mzML-files/Control.mzML", - "../workspaces-streamlit-template/default/topp-workflow/input-files/mzML-files/Treatment.mzML" - ], - "out": [], - "number-slider": 6, - "selectbox-example": "c", - "adavanced-input": 5, - "checkbox": true -} - - -Running command: -SiriusExport -in ../workspaces-streamlit-template/default/topp-workflow/input-files/mzML-files/Control.mzML ../workspaces-streamlit-template/default/topp-workflow/input-files/mzML-files/Treatment.mzML -in_featureinfo ../workspaces-streamlit-template/default/topp-workflow/results/feature-detection/Control.featureXML ../workspaces-streamlit-template/default/topp-workflow/results/feature-detection/Treatment.featureXML -out ../workspaces-streamlit-template/default/topp-workflow/results/sirius-export/sirius.ms -Waiting for command to finish... - -Process finished: -SiriusExport -in ../workspaces-streamlit-template/default/topp-workflow/input-files/mzML-files/Control.mzML ../workspaces-streamlit-template/default/topp-workflow/input-files/mzML-files/Treatment.mzML -in_featureinfo ../workspaces-streamlit-template/default/topp-workflow/results/feature-detection/Control.featureXML ../workspaces-streamlit-template/default/topp-workflow/results/feature-detection/Treatment.featureXML -out ../workspaces-streamlit-template/default/topp-workflow/results/sirius-export/sirius.ms -Total time to run command: 0.65 seconds - -Number of features to be processed: 0 -Number of additional MS2 spectra to be processed: 0 -No MS1 spectrum for this precursor. Occurred 0 times. -0 spectra were skipped due to precursor charge below -1 and above +1. -Mono charge assumed and set to charge 1 with respect to current polarity 0 times. -0 features were skipped due to feature charge below -1 and above +1. -No MS1 spectrum for this precursor. Occurred 0 times. 
-0 spectra were skipped due to precursor charge below -1 and above +1. -Mono charge assumed and set to charge 1 with respect to current polarity 0 times. -0 features were skipped due to feature charge below -1 and above +1. - occurred 2 times -SiriusExport took 0.61 s (wall), 1.71 s (CPU), 1.06 s (system), 0.65 s (user); Peak Memory Usage: 88 MB. - occurred 2 times - - -WORKFLOW FINISHED - """, language="neon") - - - + st.help(CommandExecutor.run_python) \ No newline at end of file diff --git "a/pages/4_\360\237\223\226_Deployment.py" "b/pages/4_\360\237\223\226_Deployment.py" new file mode 100644 index 0000000..1a857be --- /dev/null +++ "b/pages/4_\360\237\223\226_Deployment.py" @@ -0,0 +1,15 @@ +import streamlit as st +import requests + +from src.common import page_setup + +page_setup() + +url = "https://raw.githubusercontent.com/OpenMS/streamlit-deployment/main/README.md" + +response = requests.get(url) + +if response.status_code == 200: + st.markdown(response.text) # or process the content as needed +else: + st.warning("Failed to get README from streamlit-deployment repository.") \ No newline at end of file diff --git a/src/Workflow.py b/src/Workflow.py index 0cc3ccd..a403eb5 100644 --- a/src/Workflow.py +++ b/src/Workflow.py @@ -27,7 +27,7 @@ def configure(self) -> None: ) with t[0]: # Parameters for FeatureFinderMetabo TOPP tool. - self.ui.input_TOPP("FeatureFinderMetabo") + self.ui.input_TOPP("FeatureFinderMetabo", custom_defaults={"algorithm:common:noise_threshold_int": 1000.0}) with t[1]: # A single checkbox widget for workflow logic. self.ui.input_widget("run-adduct-detection", False, "Adduct Detection") @@ -42,8 +42,12 @@ def configure(self) -> None: self.ui.input_python("example") def execution(self) -> None: - # Get mzML input files from self.params. - # Can be done without file manager, however, it ensures everything is correct. 
+ # Any parameter checks, here simply checking if mzML files are selected + if not self.params["mzML-files"]: + self.logger.log("ERROR: No mzML files selected.") + return + + # Get mzML files with FileManager in_mzML = self.file_manager.get_files(self.params["mzML-files"]) # Log any messages. @@ -63,7 +67,7 @@ def execution(self) -> None: # Run MetaboliteAdductDecharger for adduct detection, with disabled logs. # Without a new file list for output, the input files will be overwritten in this case. self.executor.run_topp( - "MetaboliteAdductDecharger", {"in": out_ffm, "out_fm": out_ffm}, write_log=False + "MetaboliteAdductDecharger", {"in": out_ffm, "out_fm": out_ffm} ) # Example for a custom Python tool, which is located in src/python-tools. diff --git a/src/complexworkflow.py b/src/mzmlfileworkflow.py similarity index 100% rename from src/complexworkflow.py rename to src/mzmlfileworkflow.py diff --git a/src/python-tools/example.py b/src/python-tools/example.py index 21431dc..50a7b47 100644 --- a/src/python-tools/example.py +++ b/src/python-tools/example.py @@ -35,18 +35,20 @@ }, { "key": "selectbox-example", + "name": "select something", "value": "a", "options": ["a", "b", "c"], }, { "key": "adavanced-input", + "name": "advanced parameter", "value": 5, "step_size": 5, "help": "An advanced example parameter.", "advanced": True, }, { - "key": "checkbox", "value": True + "key": "checkbox", "value": True, "name": "boolean" } ] diff --git a/src/workflow/CommandExecutor.py b/src/workflow/CommandExecutor.py index 8e33cd7..6cc4930 100644 --- a/src/workflow/CommandExecutor.py +++ b/src/workflow/CommandExecutor.py @@ -26,7 +26,7 @@ def __init__(self, workflow_dir: Path, logger: Logger, parameter_manager: Parame self.parameter_manager = parameter_manager def run_multiple_commands( - self, commands: list[str], write_log: bool = True + self, commands: list[str] ) -> None: """ Executes multiple shell commands concurrently in separate threads. 
@@ -38,10 +38,9 @@ def run_multiple_commands( Args: commands (list[str]): A list where each element is a list representing a command and its arguments. - write_log (bool): If True, logs the execution details and outcomes of the commands. """ # Log the start of command execution - self.logger.log(f"Running {len(commands)} commands in parallel...") + self.logger.log(f"Running {len(commands)} commands in parallel...", 1) start_time = time.time() # Initialize a list to keep track of threads @@ -49,7 +48,7 @@ def run_multiple_commands( # Start a new thread for each command for cmd in commands: - thread = threading.Thread(target=self.run_command, args=(cmd, write_log)) + thread = threading.Thread(target=self.run_command, args=(cmd,)) thread.start() threads.append(thread) @@ -59,27 +58,23 @@ def run_multiple_commands( # Calculate and log the total execution time end_time = time.time() - self.logger.log( - f"Total time to run {len(commands)} commands: {end_time - start_time:.2f} seconds" - ) + self.logger.log(f"Total time to run {len(commands)} commands: {end_time - start_time:.2f} seconds", 1) - def run_command(self, command: list[str], write_log: bool = True) -> None: + def run_command(self, command: list[str]) -> None: """ Executes a specified shell command and logs its execution details. Args: command (list[str]): The shell command to execute, provided as a list of strings. - write_log (bool): If True, logs the command's output and errors. Raises: Exception: If the command execution results in any errors. 
""" # Ensure all command parts are strings command = [str(c) for c in command] - + # Log the execution start - self.logger.log(f"Running command:\n"+' '.join(command)+"\nWaiting for command to finish...") - + self.logger.log(f"Running command:\n"+' '.join(command)+"\nWaiting for command to finish...", 1) start_time = time.time() # Execute the command @@ -96,24 +91,22 @@ def run_command(self, command: list[str], write_log: bool = True) -> None: # Cleanup PID file pid_file_path.unlink() - + end_time = time.time() execution_time = end_time - start_time - # Format the logging prefix - self.logger.log(f"Process finished:\n"+' '.join(command)+f"\nTotal time to run command: {execution_time:.2f} seconds") + self.logger.log(f"Process finished:\n"+' '.join(command)+f"\nTotal time to run command: {execution_time:.2f} seconds", 1) # Log stdout if present - if stdout and write_log: - self.logger.log(stdout.decode()) + if stdout: + self.logger.log(stdout.decode(), 2) # Log stderr and raise an exception if errors occurred if stderr or process.returncode != 0: error_message = stderr.decode().strip() - self.logger.log(f"ERRORS OCCURRED:\n{error_message}") - raise Exception(f"Errors occurred while running command: {' '.join(command)}\n{error_message}") + self.logger.log(f"ERRORS OCCURRED:\n{error_message}", 2) - def run_topp(self, tool: str, input_output: dict, write_log: bool = True) -> None: + def run_topp(self, tool: str, input_output: dict, custom_params: dict = {}) -> None: """ Constructs and executes commands for the specified tool OpenMS TOPP tool based on the given input and output configurations. Ensures that all input/output file lists @@ -130,8 +123,8 @@ def run_topp(self, tool: str, input_output: dict, write_log: bool = True) -> Non Args: tool (str): The executable name or path of the tool. input_output (dict): A dictionary specifying the input/output parameter names (as key) and their corresponding file paths (as value). 
- write_log (bool): If True, enables logging of command execution details. - + custom_params (dict): A dictionary of custom parameters to pass to the tool. + Raises: ValueError: If the lengths of input/output file lists are inconsistent, except for single string inputs. @@ -173,14 +166,31 @@ def run_topp(self, tool: str, input_output: dict, write_log: bool = True) -> Non # Add non-default TOPP tool parameters if tool in params.keys(): for k, v in params[tool].items(): - command += [f"-{k}", str(v)] + command += [f"-{k}"] + if isinstance(v, str) and "\n" in v: + command += v.split("\n") + else: + command += [str(v)] + # Add custom parameters + for k, v in custom_params.items(): + command += [f"-{k}"] + if v: + if isinstance(v, list): + command += [str(x) for x in v] + else: + command += [str(v)] commands.append(command) + # check if an ini file has been written; if yes, use it (contains custom defaults) + ini_path = Path(self.parameter_manager.ini_dir, tool + ".ini") + if ini_path.exists(): + command += ["-ini", str(ini_path)] + # Run command(s) if len(commands) == 1: - self.run_command(commands[0], write_log) + self.run_command(commands[0]) elif len(commands) > 1: - self.run_multiple_commands(commands, write_log) + self.run_multiple_commands(commands) else: raise Exception("No commands to execute.") @@ -200,7 +210,7 @@ def stop(self) -> None: shutil.rmtree(self.pid_dir, ignore_errors=True) self.logger.log("Workflow stopped.") - def run_python(self, script_file: str, input_output: dict = {}, write_log: bool = True) -> None: + def run_python(self, script_file: str, input_output: dict = {}) -> None: """ Executes a specified Python script with dynamic input and output parameters, optionally logging the execution process. The method identifies and loads @@ -217,8 +227,6 @@ def run_python(self, script_file: str, input_output: dict = {}, write_log: bool If the path is omitted, the method looks for the script in 'src/python-tools/'.
The '.py' extension is appended if not present. input_output (dict, optional): A dictionary specifying the input/output parameter names (as key) and their corresponding file paths (as value). Defaults to {}. - write_log (bool, optional): If True, the execution process is logged. This - includes any output generated by the script as well as any errors. Defaults to True. """ # Check if script file exists (can be specified without path and extension) # default location: src/python-tools/script_file @@ -240,7 +248,7 @@ def run_python(self, script_file: str, input_output: dict = {}, write_log: bool if defaults is None: self.logger.log(f"WARNING: No DEFAULTS found in {path.name}") # run command without params - self.run_command(["python", str(path)], write_log) + self.run_command(["python", str(path)]) elif isinstance(defaults, list): defaults = {entry["key"]: entry["value"] for entry in defaults} # load paramters from JSON file @@ -255,6 +263,6 @@ def run_python(self, script_file: str, input_output: dict = {}, write_log: bool with open(tmp_params_file, "w", encoding="utf-8") as f: json.dump(defaults, f, indent=4) # run command - self.run_command(["python", str(path), str(tmp_params_file)], write_log) + self.run_command(["python", str(path), str(tmp_params_file)]) # remove tmp params file tmp_params_file.unlink() \ No newline at end of file diff --git a/src/workflow/FileManager.py b/src/workflow/FileManager.py index 923ff6b..e49ef3b 100644 --- a/src/workflow/FileManager.py +++ b/src/workflow/FileManager.py @@ -175,6 +175,5 @@ def _create_results_sub_dir(self, name: str = "") -> str: while Path(self.workflow_dir, "results", name).exists(): name = self._generate_random_code(4) path = Path(self.workflow_dir, "results", name) - shutil.rmtree(path, ignore_errors=True) - path.mkdir() + path.mkdir(exist_ok=True) return str(path) diff --git a/src/workflow/Logger.py b/src/workflow/Logger.py index 8529938..4d44426 100644 --- a/src/workflow/Logger.py +++ b/src/workflow/Logger.py @@ 
-2,7 +2,7 @@ class Logger: """ - A simple logging class for writing messages to a log file. This class is designed + A simple logging class for writing messages to a log file. This class is designed to append messages to a log file in the current workflow directory, facilitating easy tracking of events, errors, or other significant occurrences in processes called during workflow execution. @@ -12,9 +12,8 @@ class Logger: """ def __init__(self, workflow_dir: Path) -> None: self.workflow_dir = workflow_dir - self.log_file = Path(self.workflow_dir, "log.txt") - def log(self, message: str) -> None: + def log(self, message: str, level: int = 0) -> None: """ Appends a given message to the log file, followed by two newline characters for readability. This method ensures that each logged message is separated @@ -22,7 +21,22 @@ def log(self, message: str) -> None: Args: message (str): The message to be logged to the file. + level (int, optional): The level of importance of the message. Defaults to 0. """ + log_dir = Path(self.workflow_dir, "logs") + if not log_dir.exists(): + log_dir.mkdir() # Write the message to the log file.
- with open(self.log_file, "a", encoding="utf-8") as f: - f.write(f"{message}\n\n") + if level == 0: + with open(Path(log_dir, "minimal.log"), "a", encoding="utf-8") as f: + f.write(f"{message}\n\n") + if level <= 1: + with open(Path(log_dir, "commands-and-run-times.log"), "a", encoding="utf-8") as f: + f.write(f"{message}\n\n") + if level <= 2: + with open(Path(log_dir, "all.log"), "a", encoding="utf-8") as f: + f.write(f"{message}\n\n") + # log_types = ["minimal", "commands and run times", "tool outputs", "all"] + # for i, log_type in enumerate(log_types): + # with open(Path(log_dir, f"{log_type.replace(' ', '-')}.log"), "a", encoding="utf-8") as f: + # f.write(f"{message}\n\n") \ No newline at end of file diff --git a/src/workflow/ParameterManager.py b/src/workflow/ParameterManager.py index b8106f2..5e993af 100644 --- a/src/workflow/ParameterManager.py +++ b/src/workflow/ParameterManager.py @@ -60,13 +60,6 @@ def save_parameters(self) -> None: ini_key = key.replace(self.topp_param_prefix, "").encode() # get ini (default) value by ini_key ini_value = param.getValue(ini_key) - # need to convert bool values to string values - if isinstance(value, bool): - value = "true" if value else "false" - # convert strings with newlines to list - if isinstance(value, str): - if "\n" in value: - value = [v.encode() for v in value.split("\n")] # check if value is different from default if ini_value != value: # store non-default value diff --git a/src/workflow/StreamlitUI.py b/src/workflow/StreamlitUI.py index d1ae8af..2124f8f 100644 --- a/src/workflow/StreamlitUI.py +++ b/src/workflow/StreamlitUI.py @@ -10,6 +10,7 @@ import time from io import BytesIO import zipfile +from datetime import datetime class StreamlitUI: """ @@ -358,8 +359,11 @@ def format_files(input: Any) -> List[str]: def input_TOPP( self, topp_tool_name: str, - num_cols: int = 3, + num_cols: int = 4, exclude_parameters: List[str] = [], + include_parameters: List[str] = [], + display_full_parameter_names: bool = 
False, + custom_defaults: dict = {}, ) -> None: """ Generates input widgets for TOPP tool parameters dynamically based on the tool's @@ -369,36 +373,48 @@ def input_TOPP( Args: topp_tool_name (str): The name of the TOPP tool for which to generate inputs. num_cols (int, optional): Number of columns to use for the layout. Defaults to 3. - exclude_parameters (List[str], optional): List of parameter names to exclude from the widget. + exclude_parameters (List[str], optional): List of parameter names to exclude from the widget. Defaults to an empty list. + include_parameters (List[str], optional): List of parameter names to include in the widget. Defaults to an empty list. + display_full_parameter_names (bool, optional): Whether to display the full parameter names. Defaults to False. + custom_defaults (dict, optional): Dictionary of custom defaults to use. Defaults to an empty dict. """ # write defaults ini files ini_file_path = Path(self.parameter_manager.ini_dir, f"{topp_tool_name}.ini") if not ini_file_path.exists(): subprocess.call([topp_tool_name, "-write_ini", str(ini_file_path)]) + # update custom defaults if necessary + if custom_defaults: + param = poms.Param() + poms.ParamXMLFile().load(str(ini_file_path), param) + for key, value in custom_defaults.items(): + encoded_key = f"{topp_tool_name}:1:{key}".encode() + if encoded_key in param.keys(): + param.setValue(encoded_key, value) + poms.ParamXMLFile().store(str(ini_file_path), param) + # read into Param object param = poms.Param() poms.ParamXMLFile().load(str(ini_file_path), param) - - excluded_keys = [ - "log", - "debug", - "threads", - "no_progress", - "force", - "version", - "test", - ] + exclude_parameters - - param_dicts = [] - for key in param.keys(): - # Determine if the parameter should be included based on the conditions - if ( - b"input file" in param.getTags(key) - or b"output file" in param.getTags(key) - ) or (key.decode().split(":")[-1] in excluded_keys): - continue + if include_parameters: + 
valid_keys = [key for key in param.keys() if any([k.encode() in key for k in include_parameters])] + else: + excluded_keys = [ + "log", + "debug", + "threads", + "no_progress", + "force", + "version", + "test", + ] + exclude_parameters + valid_keys = [key for key in param.keys() if not (b"input file" in param.getTags(key) + or b"output file" in param.getTags(key) + or any([k.encode() in key for k in excluded_keys]))] + + params_decoded = [] + for key in valid_keys: entry = param.getEntry(key) - param_dict = { + tmp = { "name": entry.name.decode(), "key": key, "value": entry.value, @@ -406,66 +422,75 @@ def input_TOPP( "description": entry.description.decode(), "advanced": (b"advanced" in param.getTags(key)), } - param_dicts.append(param_dict) - - # Update parameter values from the JSON parameters file - json_params = self.params - if topp_tool_name in json_params: - for p in param_dicts: - name = p["key"].decode().split(":1:")[1] - if name in json_params[topp_tool_name]: - p["value"] = json_params[topp_tool_name][name] + params_decoded.append(tmp) + + # for each parameter in params_decoded + # if the parameter is already stored in self.params, take that value + # else fall back to a custom default value if one exists + for p in params_decoded: + name = p["key"].decode().split(":1:")[1] + if topp_tool_name in self.params: + if name in self.params[topp_tool_name]: + p["value"] = self.params[topp_tool_name][name] + elif name in custom_defaults: + p["value"] = custom_defaults[name] + elif name in custom_defaults: + p["value"] = custom_defaults[name] - # input widgets in n number of columns + # show input widgets cols = st.columns(num_cols) i = 0 - - # show input widgets - for p in param_dicts: - + + for p in params_decoded: # skip advanced parameters if not selected if not st.session_state["advanced"] and p["advanced"]: continue key = f"{self.parameter_manager.topp_param_prefix}{p['key'].decode()}" - + if 
display_full_parameter_names: + name = key.split(":1:")[1].replace("algorithm:", "").replace(":", " : ") + else: + name = p["name"] try: + # sometimes strings with newline, handle as list + if isinstance(p["value"], str) and "\n" in p["value"]: + p["value"] = p["value"].split("\n") # bools - if p["value"] == "true" or p["value"] == "false": + if isinstance(p["value"], bool): cols[i].markdown("##") cols[i].checkbox( - p["name"], - value=(p["value"] == "true"), - help=p["description"], - key=key, - ) - - # string options - elif isinstance(p["value"], str) and p["valid_strings"]: - cols[i].selectbox( - p["name"], - options=p["valid_strings"], - index=p["valid_strings"].index(p["value"]), + name, + value=(p["value"] == "true") if type(p["value"]) == str else p["value"], help=p["description"], key=key, ) # strings elif isinstance(p["value"], str): - cols[i].text_input( - p["name"], value=p["value"], help=p["description"], key=key - ) + # string options + if p["valid_strings"]: + cols[i].selectbox( + name, + options=p["valid_strings"], + index=p["valid_strings"].index(p["value"]), + help=p["description"], + key=key, + ) + else: + cols[i].text_input( + name, value=p["value"], help=p["description"], key=key + ) # ints elif isinstance(p["value"], int): cols[i].number_input( - p["name"], value=int(p["value"]), help=p["description"], key=key + name, value=int(p["value"]), help=p["description"], key=key ) # floats elif isinstance(p["value"], float): cols[i].number_input( - p["name"], + name, value=float(p["value"]), step=1.0, help=p["description"], @@ -478,7 +503,7 @@ def input_TOPP( v.decode() if isinstance(v, bytes) else v for v in p["value"] ] cols[i].text_area( - p["name"], + name, value="\n".join(p["value"]), help=p["description"], key=key, @@ -646,16 +671,16 @@ def parameter_section(self, custom_paramter_function) -> None: ) with form: - cols = st.columns(2) + cols = st.columns(4) - cols[0].form_submit_button( + cols[2].form_submit_button( label="Save parameters",
on_click=self.parameter_manager.save_parameters, type="primary", use_container_width=True, ) - if cols[1].form_submit_button( + if cols[3].form_submit_button( label="Load default parameters", use_container_width=True ): self.parameter_manager.reset_to_default_parameters() @@ -665,26 +690,54 @@ def parameter_section(self, custom_paramter_function) -> None: self.parameter_manager.save_parameters() def execution_section(self, start_workflow_function) -> None: + # Display a summary of non-default TOPP parameters and all others (custom and python scripts) + summary_text = "" + for key, value in self.params.items(): + if not isinstance(value, dict): + summary_text += f""" + +{key}: **{value}** +""" + elif value: + summary_text += f""" +**{key}**: + +""" + for k, v in value.items(): + summary_text += f""" +{k}: **{v}** + +""" + with st.expander("**Parameter Summary**"): + st.markdown(summary_text) + + c1, c2 = st.columns(2) + # Select log level; this can be changed at run time or later without re-running the workflow + log_level = c1.selectbox("log details", ["minimal", "commands and run times", "all"], key="log_level") + c2.markdown("##") if self.executor.pid_dir.exists(): - if st.button("Stop Workflow", type="primary", use_container_width=True): + if c2.button("Stop Workflow", type="primary", use_container_width=True): self.executor.stop() st.rerun() elif st.button("Start Workflow", type="primary", use_container_width=True): start_workflow_function() - time.sleep(2) st.rerun() - - if self.logger.log_file.exists(): + log_path = Path(self.workflow_dir, "logs", log_level.replace(" ", "-") + ".log") + if log_path.exists(): if self.executor.pid_dir.exists(): with st.spinner("**Workflow running...**"): - with open(self.logger.log_file, "r", encoding="utf-8") as f: + with open(log_path, "r", encoding="utf-8") as f: st.code(f.read(), language="neon", line_numbers=True) time.sleep(2) st.rerun() else: - st.markdown("**Workflow log file**") - with open(self.logger.log_file, "r", 
encoding="utf-8") as f: - st.code(f.read(), language="neon", line_numbers=True) + st.markdown(f"**Workflow log file: {datetime.fromtimestamp(log_path.stat().st_ctime).strftime('%Y-%m-%d %H:%M')} CET**") + with open(log_path, "r", encoding="utf-8") as f: + content = f.read() + # Check if workflow finished successfully + if "WORKFLOW FINISHED" not in content: + st.error("**Errors occurred, check log file.**") + st.code(content, language="neon", line_numbers=True) def results_section(self, custom_results_function) -> None: custom_results_function() diff --git a/src/workflow/WorkflowManager.py b/src/workflow/WorkflowManager.py index 3f70097..13af2fa 100644 --- a/src/workflow/WorkflowManager.py +++ b/src/workflow/WorkflowManager.py @@ -25,7 +25,7 @@ def start_workflow(self) -> None: The workflow itself needs to be a process, otherwise streamlit will wait for everything to finish before updating the UI again. """ # Delete the logs directory if it already exists - self.logger.log_file.unlink(missing_ok=True) + shutil.rmtree(Path(self.workflow_dir, "logs"), ignore_errors=True) # Start workflow process workflow_process = multiprocessing.Process(target=self.workflow_process) workflow_process.start() diff --git a/test.py b/test.py index 58e36ca..8a2a3ad 100644 --- a/test.py +++ b/test.py @@ -3,7 +3,7 @@ from urllib.request import urlretrieve from src.simpleworkflow import generate_random_table -from src.complexworkflow import mzML_file_get_num_spectra +from src.mzmlfileworkflow import mzML_file_get_num_spectra from pathlib import Path
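The leveled `Logger.log` introduced in the `src/workflow/Logger.py` hunk above routes each message into up to three files under `logs/`: a level-0 message lands in all three files, level 1 skips `minimal.log`, and level 2 ends up only in `all.log`. A simplified, self-contained sketch of that routing (standalone class for illustration, not the full implementation):

```python
from pathlib import Path
import tempfile

class Logger:
    """Simplified sketch of the leveled logger added in src/workflow/Logger.py."""

    def __init__(self, workflow_dir: Path) -> None:
        self.workflow_dir = workflow_dir

    def log(self, message: str, level: int = 0) -> None:
        log_dir = Path(self.workflow_dir, "logs")
        log_dir.mkdir(exist_ok=True)
        # Lower levels are more important: level 0 appears in every file,
        # level 2 only in the most verbose one.
        if level == 0:
            with open(log_dir / "minimal.log", "a", encoding="utf-8") as f:
                f.write(f"{message}\n\n")
        if level <= 1:
            with open(log_dir / "commands-and-run-times.log", "a", encoding="utf-8") as f:
                f.write(f"{message}\n\n")
        if level <= 2:
            with open(log_dir / "all.log", "a", encoding="utf-8") as f:
                f.write(f"{message}\n\n")

with tempfile.TemporaryDirectory() as d:
    logger = Logger(Path(d))
    logger.log("Workflow finished.")        # level 0: written to all three files
    logger.log("Running command...", 1)     # level 1: skips minimal.log
    logger.log("tool stdout here", 2)       # level 2: only in all.log
    files = sorted(p.name for p in Path(d, "logs").iterdir())
    minimal = Path(d, "logs", "minimal.log").read_text(encoding="utf-8")
    print(files)
```

The file names match the `log_level.replace(" ", "-") + ".log"` lookup used in `execution_section`, which is why the selectbox options map directly onto the files on disk.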
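The `custom_params` argument added to `run_topp` flattens a dict into CLI arguments: each key becomes a `-key` flag, list values expand into multiple arguments, and a falsy value yields a bare flag with no value. A rough illustration of just that flattening loop (hypothetical helper and parameter names; the real method also merges non-default tool parameters and a per-tool `.ini` file):

```python
def flatten_custom_params(custom_params: dict) -> list[str]:
    """Mirrors the custom_params loop added to CommandExecutor.run_topp."""
    command: list[str] = []
    for k, v in custom_params.items():
        command += [f"-{k}"]
        if v:  # falsy values (e.g. "") produce a bare flag without a value
            if isinstance(v, list):
                command += [str(x) for x in v]
            else:
                command += [str(v)]
    return command

# Hypothetical TOPP-style parameters, for illustration only.
args = flatten_custom_params(
    {"algorithm:noise_threshold_int": 1000.0, "in": ["a.mzML", "b.mzML"], "force": ""}
)
print(args)
```

This is why boolean switches can be passed as empty strings while numeric and list parameters are stringified in place.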