diff --git a/.buildinfo b/.buildinfo
index 61dfd87..f889448 100644
--- a/.buildinfo
+++ b/.buildinfo
@@ -1,4 +1,4 @@
 # Sphinx build info version 1
 # This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
-config: 878c152044dfb5049531fd6e897ecbff
+config: 1b1501f97a74dce1b0ae5f6f0c31f428
 tags: 645f666f9bcd5a90fca523b33c5a78b7
diff --git a/.doctrees/data_analysis/HPC-module-SLEAP.doctree b/.doctrees/data_analysis/HPC-module-SLEAP.doctree
deleted file mode 100644
index 28a16e5..0000000
Binary files a/.doctrees/data_analysis/HPC-module-SLEAP.doctree and /dev/null differ
diff --git a/.doctrees/data_analysis/index.doctree b/.doctrees/data_analysis/index.doctree
index 5fe5fb6..9a6380c 100644
Binary files a/.doctrees/data_analysis/index.doctree and b/.doctrees/data_analysis/index.doctree differ
diff --git a/.doctrees/environment.pickle b/.doctrees/environment.pickle
index ca548e0..e466c8f 100644
Binary files a/.doctrees/environment.pickle and b/.doctrees/environment.pickle differ
diff --git a/.doctrees/programming/Mount-ceph-ubuntu-temp.doctree b/.doctrees/programming/Mount-ceph-ubuntu-temp.doctree
new file mode 100644
index 0000000..d2c8249
Binary files /dev/null and b/.doctrees/programming/Mount-ceph-ubuntu-temp.doctree differ
diff --git a/.doctrees/programming/Mount-ceph-ubuntu.doctree b/.doctrees/programming/Mount-ceph-ubuntu.doctree
index dd556f5..c6bf7a6 100644
Binary files a/.doctrees/programming/Mount-ceph-ubuntu.doctree and b/.doctrees/programming/Mount-ceph-ubuntu.doctree differ
diff --git a/.doctrees/programming/SLURM-arguments.doctree b/.doctrees/programming/SLURM-arguments.doctree
deleted file mode 100644
index 6ff0a3b..0000000
Binary files a/.doctrees/programming/SLURM-arguments.doctree and /dev/null differ
diff --git a/.doctrees/programming/SSH-SWC-cluster.doctree b/.doctrees/programming/SSH-SWC-cluster.doctree
index 39831b4..1aec041 100644
Binary files a/.doctrees/programming/SSH-SWC-cluster.doctree and b/.doctrees/programming/SSH-SWC-cluster.doctree differ
diff --git a/.doctrees/programming/index.doctree b/.doctrees/programming/index.doctree
index d318292..724141d 100644
Binary files a/.doctrees/programming/index.doctree and b/.doctrees/programming/index.doctree differ
diff --git a/_sources/data_analysis/HPC-module-SLEAP.md.txt b/_sources/data_analysis/HPC-module-SLEAP.md.txt
deleted file mode 100644
index 94378a8..0000000
--- a/_sources/data_analysis/HPC-module-SLEAP.md.txt
+++ /dev/null
@@ -1,598 +0,0 @@
-# Use the SLEAP module on the HPC cluster
-
-```{include} ../_static/swc-wiki-warning.md
-```
-
-```{include} ../_static/code-blocks-note.md
-```
-
-## Abbreviations
-| Acronym | Meaning                                      |
-| ------- | -------------------------------------------- |
-| SLEAP   | Social LEAP Estimates Animal Poses           |
-| SWC     | Sainsbury Wellcome Centre                    |
-| HPC     | High Performance Computing                   |
-| SLURM   | Simple Linux Utility for Resource Management |
-| GUI     | Graphical User Interface                     |
-
-## Prerequisites
-
-### Access to the HPC cluster
-Verify that you can access the HPC gateway node (typing your `<SWC-PASSWORD>` both times when prompted):
-```{code-block} console
-$ ssh <SWC-USERNAME>@ssh.swc.ucl.ac.uk
-$ ssh hpc-gw1
-```
-To learn more about accessing the HPC via SSH, see the [relevant how-to guide](ssh-cluster-target).
-
-### Access to the SLEAP module
-Once you are on the HPC gateway node, SLEAP should be listed among the available modules when you run `module avail`:
-
-```{code-block} console
-$ module avail
-...
-SLEAP/2023-03-13
-SLEAP/2023-08-01
-...
-```
-- `SLEAP/2023-03-13` corresponds to `sleap v.1.2.9`
-- `SLEAP/2023-08-01` corresponds to `sleap v.1.3.1`
-
-We recommend always using the latest version, which is the one loaded by default
-when you run `module load SLEAP`. If you want to load a specific version,
-you can do so by typing the full module name,
-including the date, e.g. `module load SLEAP/2023-03-13`.
-
-If a module has been successfully loaded, it will be listed when you run `module list`,
-along with other modules it may depend on:
-
-```{code-block} console
-$ module list
-Currently Loaded Modulefiles:
- 1) cuda/11.8   2) SLEAP/2023-08-01
-```
-
-If you have trouble loading the SLEAP module,
-see this guide's [Troubleshooting section](#problems-with-the-sleap-module).
-
-
-### Install SLEAP on your local PC/laptop
-While you can delegate the GPU-intensive work to the HPC cluster,
-you will need to use the SLEAP GUI for some steps, such as labelling frames.
-Thus, you also need to install SLEAP on your local PC/laptop.
-
-We recommend following the official [SLEAP installation guide](https://sleap.ai/installation.html). If you already have `conda` installed, you may skip the `mamba` installation steps and opt for installing the `libmamba-solver` for `conda`:
-
-```{code-block} console
-$ conda install -n base conda-libmamba-solver
-$ conda config --set solver libmamba
-```
-This will get you the much faster dependency resolution that `mamba` provides, without having to install `mamba` itself.
-From `conda` version 23.10 onwards (released in November 2023), `libmamba-solver` [is the default anyway](https://conda.org/blog/2023-11-06-conda-23-10-0-release/).
-
-After that, you can follow the [rest of the SLEAP installation guide](https://sleap.ai/installation.html#conda-package), substituting `conda` for `mamba` in the relevant commands.
-
-::::{tab-set}
-
-:::{tab-item} Windows and Linux
-```{code-block} console
-$ conda create -y -n sleap -c conda-forge -c nvidia -c sleap -c anaconda sleap=1.3.1
-```
-:::
-
-:::{tab-item} MacOS X and Apple Silicon
-```{code-block} console
-$ conda create -y -n sleap -c conda-forge -c anaconda -c sleap sleap=1.3.1
-```
-:::
-
-::::
-
-You may exchange `sleap=1.3.1` for other versions. To be on the safe side, ensure that your local installation version matches (or is at least close to) the one installed in the cluster module.
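A quick way to confirm which version your local environment ended up with is to query it from Python, reusing the `sleap.versions()` call shown in this guide's Troubleshooting section:

```{code-block} pycon
>>> import sleap
>>> sleap.versions()
```

The reported SLEAP version should match (or be close to) the cluster module you intend to use.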
-### Mount the SWC filesystem on your local PC/laptop
-The rest of this guide assumes that you have mounted the SWC filesystem on your local PC/laptop.
-If you have not done so, please follow the relevant instructions on the
-[SWC internal wiki](https://wiki.ucl.ac.uk/display/SSC/SWC+Storage+Platform+Overview).
-
-We will also assume that the data you are working with are stored in a `ceph` or `winstor`
-directory to which you have access. In the rest of this guide, we will use the path
-`/ceph/scratch/neuroinformatics-dropoff/SLEAP_HPC_test_data`, which contains a SLEAP project
-for test purposes. You should replace this with the path to your own data.
-
-:::{dropdown} Data storage location matters
-:color: warning
-:icon: alert-fill
-
-The cluster has fast access to data stored on the `ceph` and `winstor` filesystems.
-If your data is stored elsewhere, make sure to transfer it to `ceph` or `winstor`
-before running the job. You can use tools such as [`rsync`](https://linux.die.net/man/1/rsync)
-to copy data from your local machine to `ceph` via an SSH connection. For example:
-
-```{code-block} console
-$ rsync -avz <LOCAL-DIR> <SWC-USERNAME>@ssh.swc.ucl.ac.uk:/ceph/scratch/neuroinformatics-dropoff/SLEAP_HPC_test_data
-```
-:::
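The same `rsync` pattern works in the opposite direction, e.g. for pulling results back from `ceph` once a job has finished (both paths here are illustrative):

```{code-block} console
$ rsync -avz <SWC-USERNAME>@ssh.swc.ucl.ac.uk:/ceph/scratch/neuroinformatics-dropoff/SLEAP_HPC_test_data <LOCAL-DIR>
```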
-
-## Model training
-This will consist of two parts - [preparing a training job](#prepare-the-training-job)
-(on your local SLEAP installation) and [running a training job](#run-the-training-job)
-(on the HPC cluster's SLEAP module). Some evaluation metrics for the trained models
-can be [viewed via the SLEAP GUI](#evaluate-the-trained-models) on your local SLEAP installation.
-
-### Prepare the training job
-Follow the SLEAP instructions for [Creating a Project](https://sleap.ai/tutorials/new-project.html)
-and [Initial Labelling](https://sleap.ai/tutorials/initial-labeling.html).
-Ensure that the project file (e.g. `labels.v001.slp`) is saved in the mounted SWC filesystem
-(as opposed to your local filesystem).
-
-Next, follow the instructions in [Remote Training](https://sleap.ai/guides/remote.html#remote-training),
-i.e. *Predict* -> *Run Training…* -> *Export Training Job Package…*.
-- For selecting the right configuration parameters, see [Configuring Models](https://sleap.ai/guides/choosing-models.html#) and [Troubleshooting Workflows](https://sleap.ai/guides/troubleshooting-workflows.html).
-- Set the *Predict On* parameter to *nothing*. Remote training and inference (prediction) are easiest to run separately on the HPC cluster. Also unselect *Visualize Predictions During Training* in the training settings, if it's enabled by default.
-- If you are working with a camera view from above or below (as opposed to a side view), set the *Rotation Min Angle* and *Rotation Max Angle* to -180 and 180 respectively in the *Augmentation* section.
-- Make sure to save the exported training job package (e.g. `labels.v001.slp.training_job.zip`) in the mounted SWC filesystem, for example in the same directory as the project file.
-- Unzip the training job package. This will create a folder with the same name (minus the `.zip` extension). This folder contains everything needed to run the training job on the HPC cluster.
-
-### Run the training job
-Log in to the HPC cluster as described above.
-```{code-block} console
-$ ssh <SWC-USERNAME>@ssh.swc.ucl.ac.uk
-$ ssh hpc-gw1
-```
-Navigate to the training job folder (replace with your own path) and list its contents:
-```{code-block} console
-:emphasize-lines: 12
-$ cd /ceph/scratch/neuroinformatics-dropoff/SLEAP_HPC_test_data
-$ cd labels.v001.slp.training_job
-$ ls -1
-centered_instance.json
-centroid.json
-inference-script.sh
-jobs.yaml
-labels.v001.pkg.slp
-labels.v001.slp.predictions.slp
-train_slurm.sh
-swc-hpc-pose-estimation
-train-script.sh
-```
-There should be a `train-script.sh` file created by SLEAP, which already contains the
-commands to run the training. You can see the contents of the file by running `cat train-script.sh`:
-```{code-block} bash
-:caption: labels.v001.slp.training_job/train-script.sh
-:name: train-script-sh
-:linenos:
-#!/bin/bash
-sleap-train centroid.json labels.v001.pkg.slp
-sleap-train centered_instance.json labels.v001.pkg.slp
-```
-The precise commands will depend on the model configuration you chose in SLEAP.
-Here we see two separate training calls, one for the 'centroid' and another for
-the 'centered_instance' model. That's because in this example we have chosen
-the ['Top-Down'](https://sleap.ai/tutorials/initial-training.html#training-options)
-configuration, which consists of two neural networks - the first for isolating
-the animal instances (by finding their centroids) and the second for predicting
-all the body parts per instance.
-
-![Top-Down model configuration](https://sleap.ai/_images/topdown_approach.jpg)
-
-:::{dropdown} More on 'Top-Down' vs 'Bottom-Up' models
-:color: info
-:icon: info
-
-Although the 'Top-Down' configuration was designed with multiple animals in mind,
-it can also be used for single-animal videos. It makes sense to use it for videos
-where the animal occupies a relatively small portion of the frame - see
-[Troubleshooting Workflows](https://sleap.ai/guides/troubleshooting-workflows.html) for more info.
-:::
-
-Next, you need to create a SLURM batch script, which will schedule the training job
-on the HPC cluster. Create a new file called `train_slurm.sh`
-(you can do this in the terminal with `nano`/`vim`, or in a text editor of
-your choice on your local PC/laptop). Here we create the script in the same folder
-as the training job, but you can save it anywhere you want, or even keep track of it with `git`.
-
-```{code-block} console
-$ nano train_slurm.sh
-```
-
-An example is provided below, followed by explanations.
-```{code-block} bash
-:caption: train_slurm.sh
-:name: train-slurm-sh
-:linenos:
-#!/bin/bash
-
-#SBATCH -J slp_train # job name
-#SBATCH -p gpu # partition (queue)
-#SBATCH -N 1 # number of nodes
-#SBATCH --mem 16G # memory pool for all cores
-#SBATCH -n 4 # number of cores
-#SBATCH -t 0-06:00 # time (D-HH:MM)
-#SBATCH --gres gpu:1 # request 1 GPU (of any kind)
-#SBATCH -o slurm.%x.%N.%j.out # STDOUT
-#SBATCH -e slurm.%x.%N.%j.err # STDERR
-#SBATCH --mail-type=ALL
-#SBATCH --mail-user=user@domain.com
-
-# Load the SLEAP module
-module load SLEAP
-
-# Define directories for SLEAP project and exported training job
-SLP_DIR=/ceph/scratch/neuroinformatics-dropoff/SLEAP_HPC_test_data
-SLP_JOB_NAME=labels.v001.slp.training_job
-SLP_JOB_DIR=$SLP_DIR/$SLP_JOB_NAME
-
-# Go to the job directory
-cd $SLP_JOB_DIR
-
-# Run the training script generated by SLEAP
-./train-script.sh
-```
-
-In `nano`, you can save the file by pressing `Ctrl+O` and exit by pressing `Ctrl+X`.
-
-:::{dropdown} Explanation of the batch script
-:color: info
-:icon: info
-- The `#SBATCH` lines are SLURM directives. They specify the resources needed
-for the job, such as the number of nodes, CPUs, memory, etc.
-A primer on the most useful SLURM arguments is provided in this [how-to guide](slurm-arguments-target).
-For more information see the [SLURM documentation](https://slurm.schedmd.com/sbatch.html).
-
-- The `#` lines are comments. They are not executed by SLURM, but they are useful
-for explaining the script to your future self and others.
-
-- The `module load SLEAP` line loads the latest SLEAP module and any other modules
-it may depend on.
-
-- The `cd` line changes the working directory to the training job folder.
-This is necessary because the `train-script.sh` file contains relative paths
-to the model configuration and the project file.
-
-- The `./train-script.sh` line runs the training job (executes the contained commands).
-:::
-
-:::{warning}
-Before submitting the job, ensure that you have permission to execute
-both the batch script and the training script generated by SLEAP.
-You can make these files executable by running the following in the terminal:
-
-```{code-block} console
-$ chmod +x train-script.sh
-$ chmod +x train_slurm.sh
-```
-
-If the scripts are not in your working directory, you will need to specify their full paths:
-
-```{code-block} console
-$ chmod +x /path/to/train-script.sh
-$ chmod +x /path/to/train_slurm.sh
-```
-:::
-
-Now you can submit the batch script by running the following command
-(in the same directory as the script):
-```{code-block} console
-$ sbatch train_slurm.sh
-Submitted batch job 3445652
-```
-
-You may monitor the progress of the job in various ways:
-
-::::{tab-set}
-
-:::{tab-item} squeue
-
-View the status of the queued/running jobs with [`squeue`](https://slurm.schedmd.com/squeue.html):
-
-```{code-block} console
-$ squeue --me
-JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
-3445652 gpu slp_train sirmpila R 23:11 1 gpu-sr670-20
-```
-:::
-
-:::{tab-item} sacct
-
-View the status of running/completed jobs with [`sacct`](https://slurm.schedmd.com/sacct.html):
-
-```{code-block} console
-$ sacct
-JobID JobName Partition Account AllocCPUS State ExitCode
------------- ---------- ---------- ---------- ---------- ---------- --------
-3445652 slp_train gpu swc-ac 2 COMPLETED 0:0
-3445652.bat+ batch swc-ac 2 COMPLETED 0:0
-```
-You can also run `sacct` with more helpful arguments.
-For example, you can view jobs from the last 24 hours, displaying the time
-elapsed and the peak memory usage in KB (MaxRSS):
-
-```{code-block} console
-$ sacct \
-  --starttime $(date -d '24 hours ago' +%Y-%m-%dT%H:%M:%S) \
-  --endtime $(date +%Y-%m-%dT%H:%M:%S) \
-  --format=JobID,JobName,Partition,State,Start,Elapsed,MaxRSS
-
-JobID JobName Partition State Start Elapsed MaxRSS
------------- ---------- ---------- ---------- ------------------- ---------- ----------
-4043595 slp_infer gpu FAILED 2023-10-10T18:14:31 00:00:35
-4043595.bat+ batch FAILED 2023-10-10T18:14:31 00:00:35 271104K
-4043603 slp_infer gpu FAILED 2023-10-10T18:27:32 00:01:37
-4043603.bat+ batch FAILED 2023-10-10T18:27:32 00:01:37 423476K
-4043611 slp_infer gpu PENDING Unknown 00:00:00
-```
-:::
-
-:::{tab-item} view the logs
-
-View the contents of standard output and error
-(the node name and job ID will differ in each case):
-```{code-block} console
-$ cat slurm.gpu-sr670-20.3445652.out
-$ cat slurm.gpu-sr670-20.3445652.err
-```
-:::
-
-::::
-
-### Evaluate the trained models
-Upon successful completion of the training job, a `models` folder will have
-been created in the training job directory. It contains one subfolder per
-training run (by default prefixed with the date and time of the run).
-
-```{code-block} console
-$ cd /ceph/scratch/neuroinformatics-dropoff/SLEAP_HPC_test_data
-$ cd labels.v001.slp.training_job
-$ cd models
-$ ls -1
-230509_141357.centered_instance
-230509_141357.centroid
-```
-
-Each subfolder holds the trained model files (e.g. `best_model.h5`),
-their configurations (`training_config.json`) and some evaluation metrics.
-
-```{code-block} console
-$ cd 230509_141357.centered_instance
-$ ls -1
-best_model.h5
-initial_config.json
-labels_gt.train.slp
-labels_gt.val.slp
-labels_pr.train.slp
-labels_pr.val.slp
-metrics.train.npz
-metrics.val.npz
-training_config.json
-training_log.csv
-```
-The SLEAP GUI on your local machine can be used to quickly evaluate the trained models.
-
-- Select *Predict* -> *Evaluation Metrics for Trained Models...*
-- Click on *Add Trained Model(s)* and select the folder containing the model(s) you want to evaluate.
-- You can view the basic metrics in the table shown, or view a more detailed report (including plots) by clicking *View Metrics*.
-
-## Model inference
-By inference, we mean using a trained model to predict the labels on new frames/videos.
-SLEAP provides the `sleap-track` command line utility for running inference
-on a single video or a folder of videos.
-
-Below is an example SLURM batch script that contains a `sleap-track` call.
-```{code-block} bash
-:caption: infer_slurm.sh
-:name: infer-slurm-sh
-:linenos:
-#!/bin/bash
-
-#SBATCH -J slp_infer # job name
-#SBATCH -p gpu # partition
-#SBATCH -N 1 # number of nodes
-#SBATCH --mem 16G # memory pool for all cores
-#SBATCH -n 4 # number of cores
-#SBATCH -t 0-02:00 # time (D-HH:MM)
-#SBATCH --gres gpu:1 # request 1 GPU (of any kind)
-#SBATCH -o slurm.%x.%N.%j.out # write STDOUT
-#SBATCH -e slurm.%x.%N.%j.err # write STDERR
-#SBATCH --mail-type=ALL
-#SBATCH --mail-user=user@domain.com
-
-# Load the SLEAP module
-module load SLEAP
-
-# Define directories for SLEAP project and exported training job
-SLP_DIR=/ceph/scratch/neuroinformatics-dropoff/SLEAP_HPC_test_data
-VIDEO_DIR=$SLP_DIR/videos
-SLP_JOB_NAME=labels.v001.slp.training_job
-SLP_JOB_DIR=$SLP_DIR/$SLP_JOB_NAME
-
-# Go to the job directory
-cd $SLP_JOB_DIR
-# Make a directory to store the predictions
-mkdir -p predictions
-
-# Run the inference command
-sleap-track $VIDEO_DIR/M708149_EPM_20200317_165049331-converted.mp4 \
-    -m $SLP_JOB_DIR/models/231010_164307.centroid/training_config.json \
-    -m $SLP_JOB_DIR/models/231010_164307.centered_instance/training_config.json \
-    --gpu auto \
-    --tracking.tracker simple \
-    --tracking.similarity centroid \
-    --tracking.post_connect_single_breaks 1 \
-    -o predictions/labels.v001.slp.predictions.slp \
-    --verbosity json \
-    --no-empty-frames
-```
-The script is very similar to the training script, with the following differences:
-- The time limit `-t` is set lower, since inference is normally faster than training. This will however depend on the size of the video and the number of models used.
-- The `./train-script.sh` line is replaced by the `sleap-track` command.
-- The `\` character is used to split the long `sleap-track` command into multiple lines for readability. It is not necessary if the command is written on a single line.
-
-::: {dropdown} Explanation of the sleap-track arguments
-:color: info
-:icon: info
-
- Some important command line arguments are explained below.
- You can view a full list of the available arguments by running `sleap-track --help`.
-- The first argument is the path to the video file to be processed.
-- The `-m` option is used to specify the path to the model configuration file(s) to be used for inference. In this example we use the two models that were trained above.
-- The `--gpu` option is used to specify the GPU to be used for inference. The `auto` value will automatically select the GPU with the highest percentage of available memory (of the GPUs that are available on the machine/node).
-- The options starting with `--tracking` specify parameters used for tracking the detected instances (animals) across frames. See SLEAP's guide on [tracking methods](https://sleap.ai/guides/proofreading.html#tracking-method-details) for more info.
-- The `-o` option is used to specify the path to the output file containing the predictions.
-- The above script will predict all the frames in the video. You may select specific frames via the `--frames` option. For example: `--frames 1-50` or `--frames 1,3,5,7,9`.
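Putting the `--frames` option together with the arguments above, a minimal restricted run might look like the following sketch (reusing the paths and models from the script above; the output file name is made up for illustration):

```{code-block} console
$ sleap-track $VIDEO_DIR/M708149_EPM_20200317_165049331-converted.mp4 \
    -m $SLP_JOB_DIR/models/231010_164307.centroid/training_config.json \
    -m $SLP_JOB_DIR/models/231010_164307.centered_instance/training_config.json \
    --frames 1-50 \
    -o predictions/labels.v001.frames1-50.predictions.slp
```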
-:::
-
-You can submit and monitor the inference job in the same way as the training job.
-```{code-block} console
-$ sbatch infer_slurm.sh
-$ squeue --me
-```
-Upon completion, a `labels.v001.slp.predictions.slp` file will have been created in the job directory.
-
-You can use the SLEAP GUI on your local machine to load and view the predictions:
-*File* -> *Open Project...* -> select the `labels.v001.slp.predictions.slp` file.
-
-## The training-inference cycle
-Now that you have some predictions, you can keep improving your models by repeating
-the training-inference cycle. The basic steps are:
-- Manually correct some of the predictions: see [Prediction-assisted labeling](https://sleap.ai/tutorials/assisted-labeling.html)
-- Merge corrected labels into the initial training set: see the [Merging guide](https://sleap.ai/guides/merging.html)
-- Save the merged training set as `labels.v002.slp`
-- Export a new training job `labels.v002.slp.training_job` (you may reuse the training configurations from `v001`)
-- Repeat the training-inference cycle until satisfied
-
-## Troubleshooting
-
-### Problems with the SLEAP module
-
-In this section, we will describe how to test that the SLEAP module is loaded
-correctly for you and that it can use the available GPUs.
-
-Log in to the HPC cluster as described [above](#access-to-the-hpc-cluster).
-
-Start an interactive job on a GPU node. This step is necessary because we need
-to test the module's access to the GPU.
-```{code-block} console
-$ srun -p fast --gres=gpu:1 --pty bash -i
-```
-:::{dropdown} Explain the above command
-:color: info
-:icon: info
-
-* `-p fast` requests a node from the 'fast' partition. This refers to the queue of nodes with a 3-hour time limit. They are meant for short jobs, such as testing.
-* `--gres=gpu:1` requests 1 GPU of any kind.
-* `--pty` is short for 'pseudo-terminal'.
-* `-i` stands for 'interactive'.
-
-Taken together, the above command will start an interactive bash terminal session
-on a node of the 'fast' partition, equipped with 1 GPU.
-:::
-
-First, let's verify that you are indeed on a node equipped with a functional
-GPU, by typing `nvidia-smi`:
-```{code-block} console
-$ nvidia-smi
-Wed Sep 27 10:34:35 2023
-+-----------------------------------------------------------------------------+
-| NVIDIA-SMI 525.125.06 Driver Version: 525.125.06 CUDA Version: 12.0 |
-|-------------------------------+----------------------+----------------------+
-| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
-| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
-| | | MIG M. |
-|===============================+======================+======================|
-| 0 NVIDIA GeForce ... Off | 00000000:41:00.0 Off | N/A |
-| 0% 42C P8 22W / 240W | 1MiB / 8192MiB | 0% Default |
-| | | N/A |
-+-------------------------------+----------------------+----------------------+
-
-+-----------------------------------------------------------------------------+
-| Processes: |
-| GPU GI CI PID Type Process name GPU Memory |
-| ID ID Usage |
-|=============================================================================|
-| No running processes found |
-+-----------------------------------------------------------------------------+
-```
-Your output should look similar to the above. You will be able to see the GPU
-name, temperature, memory usage, etc. If you see an error message instead
-(even though you are on a GPU node), please contact the SWC Scientific Computing team.
-
-Next, load the SLEAP module.
-```{code-block} console
-$ module load SLEAP
-Loading SLEAP/2023-08-01
-  Loading requirement: cuda/11.8
-```
-
-To verify that the module was loaded successfully:
-```{code-block} console
-$ module list
-Currently Loaded Modulefiles:
- 1) SLEAP/2023-08-01
-```
-You can essentially think of the module as a centrally installed conda environment.
-When it is loaded, you should be using a particular Python executable.
-You can verify this by running:
-
-```{code-block} console
-$ which python
-/ceph/apps/ubuntu-20/packages/SLEAP/2023-08-01/bin/python
-```
-
-Finally, we will verify that the `sleap` python package can be imported and can
-'see' the GPU. We will mostly just follow the
-[relevant SLEAP instructions](https://sleap.ai/installation.html#testing-that-things-are-working).
-First, start a Python interpreter:
-```{code-block} console
-$ python
-```
-Next, run the following Python commands:
-
-::: {warning}
-The `import sleap` command may take some time to run (more than a minute).
-This is normal. Subsequent imports should be faster.
-:::
-
-```{code-block} pycon
->>> import sleap
-
->>> sleap.versions()
-SLEAP: 1.3.1
-TensorFlow: 2.8.4
-Numpy: 1.21.6
-Python: 3.7.12
-OS: Linux-5.4.0-109-generic-x86_64-with-debian-bullseye-sid
-
->>> sleap.system_summary()
-GPUs: 1/1 available
-  Device: /physical_device:GPU:0
-         Available: True
-        Initialized: False
-     Memory growth: None
-
->>> import tensorflow as tf
-
->>> print(tf.config.list_physical_devices('GPU'))
-[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
-
->>> tf.constant("Hello world!")
-<tf.Tensor: shape=(), dtype=string, numpy=b'Hello world!'>
-```
-
-If all is as expected, you can exit the Python interpreter, and then exit the GPU node:
-```{code-block} pycon
->>> exit()
-```
-```{code-block} console
-$ exit
-```
-If you encounter trouble using the SLEAP module, contact
-Niko Sirmpilatze of the SWC [Neuroinformatics Unit](https://neuroinformatics.dev/).
-
-To completely exit the HPC cluster, you will need to log out of the SSH session twice:
-```bash
-$ logout
-$ logout
-```
-See [Set up SSH for the SWC HPC cluster](../programming/SSH-SWC-cluster.md)
-for more information.
diff --git a/_sources/data_analysis/index.md.txt b/_sources/data_analysis/index.md.txt
index ce094e5..0e020c6 100644
--- a/_sources/data_analysis/index.md.txt
+++ b/_sources/data_analysis/index.md.txt
@@ -5,5 +5,4 @@ Guides related to the analysis of neuroscientific data, spanning a wide range of
 
 ```{toctree}
 :maxdepth: 1
-HPC-module-SLEAP
 ```
diff --git a/_sources/programming/Mount-ceph-ubuntu-temp.md.txt b/_sources/programming/Mount-ceph-ubuntu-temp.md.txt
new file mode 100644
index 0000000..b6f7fbf
--- /dev/null
+++ b/_sources/programming/Mount-ceph-ubuntu-temp.md.txt
@@ -0,0 +1,54 @@
+(target-mount-ceph-ubuntu-temp)=
+# Mount ceph storage on Ubuntu temporarily
+In this example, we will **temporarily** mount a partition of `ceph` on an Ubuntu machine. The mount is temporary because it only lasts until the next reboot. To mount a partition of `ceph` permanently, see [Mount ceph storage on Ubuntu permanently](target-mount-ceph-ubuntu-perm).
+
+
+## Prerequisites
+- Administrator rights (`sudo`) on the local Ubuntu machine
+- `cifs-utils` installed via `sudo apt-get install cifs-utils`
+
+
+## Steps
+### 1. Create a mount point
+Create a directory to mount the storage to. Sensible places to do this on Ubuntu are `/mnt` or `/media`. In the example below, we will use `/mnt/ceph-neuroinformatics`.
+
+```bash
+sudo mkdir /mnt/ceph-neuroinformatics
+```
+
+### 2. Mount the `ceph` partition
+To mount the desired partition on the directory we just created, run the `mount` command with the appropriate options. In our example, this would be:
+```bash
+sudo mount -t cifs -o user=<SWC-USERNAME>,domain=AD.SWC.UCL.AC.UK //ceph-gw02.hpc.swc.ucl.ac.uk/neuroinformatics /mnt/ceph-neuroinformatics
+```
+:::{note}
+You can check the full set of options for the `mount` command by running `mount --help`.
+:::
+
+Make sure to replace the following for your particular case:
+- `//ceph-gw02.hpc.swc.ucl.ac.uk/neuroinformatics` with the path to your desired partition, e.g. another share under `//ceph-gw02.hpc.swc.ucl.ac.uk/`
+- `/mnt/ceph-neuroinformatics` with the path to the local mount point you created in the previous step
+- `<SWC-USERNAME>` with your SWC username
+
+If the command executes without errors, you will be prompted for your SWC password.
+
+### 3. Check the partition is mounted correctly
+It should show up in the list of mount points when running `df -h`.
+
+:::{note}
+The command `df` prints out information about the file system.
+:::
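For example, a quick way to confirm the mount is to filter the `df -h` output for the mount point created in step 1:

```bash
df -h | grep ceph-neuroinformatics
```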
+
+### 4. To unmount the storage
+Run the following command:
+```bash
+sudo umount /mnt/ceph-neuroinformatics
+```
+Remember that because the mount is temporary, the `ceph` partition will in any case be unmounted upon rebooting the machine.
+
+You can check that the partition is correctly unmounted by running `df -h` and verifying that it no longer shows up in the output.
+
+
+## References
+- [Mount network drive under Linux temporarily/permanently](https://www.rz.uni-kiel.de/en/hints-howtos/connecting-a-network-share/Linux/through-temporary-permanent-mounting/mount-network-drive-under-linux-temporarily-permanently)
+- [How to Mount a SMB Share in Ubuntu](https://support.zadarastorage.com/hc/en-us/articles/213024986-How-to-Mount-a-SMB-Share-in-Ubuntu)
diff --git a/_sources/programming/Mount-ceph-ubuntu.md.txt b/_sources/programming/Mount-ceph-ubuntu.md.txt
index 2a54eb2..a8cb02e 100644
--- a/_sources/programming/Mount-ceph-ubuntu.md.txt
+++ b/_sources/programming/Mount-ceph-ubuntu.md.txt
@@ -1,13 +1,18 @@
-# Mount ceph storage on Ubuntu
+(target-mount-ceph-ubuntu-perm)=
+# Mount ceph storage on Ubuntu permanently
 In this example, we will **permanently** mount the `neuroinformatics` partition of `ceph`. The same procedure can be used to mount other partitions, such as those belonging to particular labs or projects.
 
+To mount a partition of `ceph` only temporarily, see [Mount ceph storage on Ubuntu temporarily](target-mount-ceph-ubuntu-temp).
+
+
 The following instructions have been tested on **Ubuntu 20.04 LTS**.
 
 ## Prerequisites
 - Administrator rights (`sudo`) on the local Ubuntu machine
 - `cifs-utils` installed via `sudo apt-get install cifs-utils`
 
-## Store your SWC credentials
+## Steps
+### 1. Store your SWC credentials
 First, create a file to store your SWC login credentials and save it in your home directory. You can do that on the terminal via `nano ~/.smb_swc`.
 
@@ -32,14 +37,14 @@ This will ensure the file is readable and writable only by the owner (you).
 However, if someone gets access to your machine with your user logged in, they will still be able to read the SWC password.
 :::
 
-## Create a mount point
+### 2. Create a mount point
 Create a directory to mount the storage to. Sensible places to do this on Ubuntu would be in `/mnt` or `/media`. In the example below, we will use `/media/ceph-neuroinformatics`.
 
 ```bash
 sudo mkdir /media/ceph-neuroinformatics
 ```
 
-## Edit the fstab file
+### 3. Edit the fstab file
 This will allow you to mount the storage automatically on boot.
 Before editing the file, make a backup of it, just in case.
 ```bash
 sudo cp /etc/fstab /etc/fstab.bak
@@ -59,12 +64,12 @@ Make sure to replace the following:
 
 Save the file and exit.
 
-## Mount the storage
+### 4. Mount the storage
 You can now mount the storage by running `sudo mount -a`. If you get an error, check the `fstab` file for typos. If you get no error, you should be able to see the mounted storage by running `df -h`. The next time you reboot your machine, the storage should be mounted automatically.
 
-## Unmount the storage
+### 5. Unmount the storage
 To unmount the storage, run `sudo umount /media/ceph-neuroinformatics` (or whatever your mount point is).
 To permanently unmount the storage, remove the lines you added to the `fstab` file and run `sudo mount -a` again.
diff --git a/_sources/programming/SLURM-arguments.md.txt b/_sources/programming/SLURM-arguments.md.txt
deleted file mode 100644
index 6bfb81b..0000000
--- a/_sources/programming/SLURM-arguments.md.txt
+++ /dev/null
@@ -1,147 +0,0 @@
-(slurm-arguments-target)=
-# SLURM arguments primer
-
-```{include} ../_static/swc-wiki-warning.md
-```
-
-## Abbreviations
-| Acronym | Meaning                                      |
-| ------- | -------------------------------------------- |
-| SWC     | Sainsbury Wellcome Centre                    |
-| HPC     | High Performance Computing                   |
-| SLURM   | Simple Linux Utility for Resource Management |
-| MPI     | Message Passing Interface                    |
-
-
-## Overview
-SLURM is a job scheduler and resource manager used on the SWC HPC cluster.
-It is responsible for allocating resources (e.g. CPU cores, GPUs, memory) to jobs submitted by users.
-When submitting a job to SLURM, you can specify various arguments to request the resources you need.
-These are called SLURM directives, and they are passed to SLURM via the `sbatch` or `srun` commands.
-
-These are often specified at the top of a SLURM job script,
-e.g. the lines that start with `#SBATCH` in the following example:
-
-```{code-block} bash
-#!/bin/bash
-
-#SBATCH -J my_job # job name
-#SBATCH -p gpu # partition (queue)
-#SBATCH -N 1 # number of nodes
-#SBATCH --mem 16G # memory pool for all cores
-#SBATCH -n 4 # number of cores
-#SBATCH -t 0-06:00 # time (D-HH:MM)
-#SBATCH --gres gpu:1 # request 1 GPU (of any kind)
-#SBATCH -o slurm.%x.%N.%j.out # STDOUT
-#SBATCH -e slurm.%x.%N.%j.err # STDERR
-#SBATCH --mail-type=ALL
-#SBATCH --mail-user=user@domain.com
-#SBATCH --array=1-12%4 # job array index values
-
-# load modules
-...
-
-# execute commands
-...
-```
-This guide provides only a brief overview of the most important SLURM arguments,
-to demystify the above directives and help you get started with SLURM.
-For a more detailed description see the [SLURM documentation](https://slurm.schedmd.com/sbatch.html).
-
-## Commonly used arguments
-
-### Partition (Queue)
-- *Name:* `--partition`
-- *Alias:* `-p`
-- *Description:* Specifies the partition (or queue) to submit the job to. To see a list of all partitions/queues, the nodes they contain and their respective time limits, type `sinfo` when logged in to the HPC cluster.
-- *Example values:* `gpu`, `cpu`, `fast`, `medium`
-
-### Job Name
-- *Name:* `--job-name`
-- *Alias:* `-J`
-- *Description:* Specifies a name for the job, which will appear in various SLURM commands and logs, making it easier to identify the job (especially when you have multiple jobs queued up).
-- *Example values:* `training_run_24`
-
-### Number of Nodes
-- *Name:* `--nodes`
-- *Alias:* `-N`
-- *Description:* Defines the number of nodes required for the job.
-- *Example values:* `1`
-
-:::{warning}
-This should always be `1`, unless you really know what you're doing,
-e.g. you are parallelising your code across multiple nodes with MPI.
-:::
-
-### Number of Cores
-- *Name:* `--ntasks`
-- *Alias:* `-n`
-- *Description:* Defines the number of cores (or tasks) required for the job.
-- *Example values:* `4`, `8`, `16`
-
-### Memory Pool for All Cores
-- *Name:* `--mem`
-- *Description:* Specifies the total amount of memory (RAM) required for the job across all cores (per node).
-- *Example values:* `8G`, `16G`, `32G`
-
-### Time Limit
-- *Name:* `--time`
-- *Alias:* `-t`
-- *Description:* Sets the maximum time the job is allowed to run. The format is D-HH:MM, where D is days, HH is hours, and MM is minutes.
-- *Example values:* `0-01:00` (1 hour), `0-04:00` (4 hours), `1-00:00` (1 day).
-
-:::{warning}
-If the job exceeds the time limit, it will be terminated by SLURM.
-On the other hand, avoid requesting far more time than your job needs,
-as this may delay its scheduling (depending on resource availability).
-:::
-
-### Generic Resources (GPUs)
-- *Name:* `--gres`
-- *Description:* Requests generic resources, such as GPUs.
-- *Example values:* `gpu:1`, `gpu:rtx2080:1`, `gpu:rtx5000:1`, `gpu:a100_2g.10gb:1`
-
-:::{warning}
-No GPU will be allocated to you unless you specify it via the `--gres` argument (even if you are on the 'gpu' partition).
-To request 1 GPU of any kind, use `--gres gpu:1`. To request a specific GPU type, you have to include its name, e.g. `--gres gpu:rtx2080:1`.
-You can view the available GPU types on the [SWC internal wiki](https://wiki.ucl.ac.uk/display/SSC/CPU+and+GPU+Platform+architecture).
-:::
-
-### Standard Output File
-- *Name:* `--output`
-- *Alias:* `-o`
-- *Description:* Defines the file where the standard output (STDOUT) will be written. In the example script above, it's set to `slurm.%x.%N.%j.out`, where `%x` is the job name, `%N` is the node name and `%j` is the job ID.
-- *Example values:* `slurm.%x.%N.%j.out`, `slurm.MyAwesomeJob.out`
-
-:::{note}
-This file contains the output of the commands executed by the job (i.e. the messages that normally get printed on the terminal).
-:::
-
-### Standard Error File
-- *Name:* `--error`
-- *Alias:* `-e`
-- *Description:* Specifies the file where the standard error (STDERR) will be written. In the example script above, it's set to `slurm.%x.%N.%j.err`, where `%x` is the job name, `%N` is the node name and `%j` is the job ID.
-- *Example values:* `slurm.%N.%j.err`, `slurm.MyAwesomeJob.err`
-
-:::{note}
-This file is very useful for debugging, as it contains all the error messages produced by the commands executed by the job.
-:::
-
-### Email Notifications
-- *Name:* `--mail-type`
-- *Description:* Defines the conditions under which the user will be notified by email.
-- *Example values:* `ALL`, `BEGIN`, `END`, `FAIL`
-
-### Email Address
-- *Name:* `--mail-user`
-- *Description:* Specifies the email address to which notifications will be sent.
-- *Example values:* `user@domain.com`
-
-### Array jobs
-- *Name:* `--array`
-- *Description:* Job array index values (a list of integers in increasing order). The task index can be accessed via the `SLURM_ARRAY_TASK_ID` environment variable.
-- *Example values:* `--array=1-10` (10 jobs), `--array=1-100%5` (100 jobs, but only 5 of them will be allowed to run in parallel at any given time).
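To make the `SLURM_ARRAY_TASK_ID` mechanics concrete, here is a minimal sketch of an array job script; the input file naming is hypothetical:

```{code-block} bash
#!/bin/bash

#SBATCH -J my_array_job        # job name
#SBATCH -p cpu                 # partition (queue)
#SBATCH -n 1                   # number of cores
#SBATCH -t 0-00:10             # time (D-HH:MM)
#SBATCH --array=1-12%4         # 12 tasks, at most 4 running in parallel

# Each task picks its own input based on the array index,
# e.g. input_1.dat ... input_12.dat (hypothetical file names)
echo "Processing input_${SLURM_ARRAY_TASK_ID}.dat"
```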
-
-:::{warning}
-If an array consists of many jobs, using the `%` syntax to limit the maximum number of parallel jobs is recommended to prevent overloading the cluster.
-:::
diff --git a/_sources/programming/SSH-SWC-cluster.md.txt b/_sources/programming/SSH-SWC-cluster.md.txt
index 36e8241..549445c 100644
--- a/_sources/programming/SSH-SWC-cluster.md.txt
+++ b/_sources/programming/SSH-SWC-cluster.md.txt
@@ -1,4 +1,3 @@
-(ssh-cluster-target)=
 # Set up SSH for the SWC HPC cluster
 
 This guide explains how to connect to the SWC's HPC cluster via SSH.
@@ -10,14 +9,14 @@ This guide explains how to connect to the SWC's HPC cluster via SSH.
 ```
 
 ## Abbreviations
-| Acronym | Meaning                                      |
-| ------- | -------------------------------------------- |
-| SWC     | Sainsbury Wellcome Centre                    |
-| HPC     | High Performance Computing                   |
-| SLURM   | Simple Linux Utility for Resource Management |
-| SSH     | Secure (Socket) Shell protocol               |
-| IDE     | Integrated Development Environment           |
-| GUI     | Graphical User Interface                     |
+| Acronym | Meaning |
+| --- | --- |
+| SWC | Sainsbury Wellcome Centre |
+| HPC | High Performance Computing |
+| SLURM | Simple Linux Utility for Resource Management |
+| SSH | Secure (Socket) Shell protocol |
+| IDE | Integrated Development Environment |
+| GUI | Graphical User Interface |
 
 ## Prerequisites
 - You have an SWC account and know your username and password.
diff --git a/_sources/programming/index.md.txt b/_sources/programming/index.md.txt
index 32fb11c..629588e 100644
--- a/_sources/programming/index.md.txt
+++ b/_sources/programming/index.md.txt
@@ -7,10 +7,10 @@ Small tips and tricks that do not warrant a long-form guide can be found in the
 
 ```{toctree}
 :maxdepth: 1
-SLURM-arguments
 SSH-SWC-cluster
 SSH-vscode
 Mount-ceph-ubuntu
+Mount-ceph-ubuntu-temp
 Cookiecutter-cruft
 Troubleshooting
 ```
diff --git a/_static/check-solid.svg b/_static/check-solid.svg
deleted file mode 100644
index 92fad4b..0000000
--- a/_static/check-solid.svg
+++ /dev/null
@@ -1,4 +0,0 @@
-
-
-
-
diff --git a/_static/clipboard.min.js b/_static/clipboard.min.js
deleted file mode 100644
index 54b3c46..0000000
--- a/_static/clipboard.min.js
+++ /dev/null
@@ -1,7 +0,0 @@
-/*!
- * clipboard.js v2.0.8 - * https://clipboardjs.com/ - * - * Licensed MIT © Zeno Rocha - */ -!function(t,e){"object"==typeof exports&&"object"==typeof module?module.exports=e():"function"==typeof define&&define.amd?define([],e):"object"==typeof exports?exports.ClipboardJS=e():t.ClipboardJS=e()}(this,function(){return n={686:function(t,e,n){"use strict";n.d(e,{default:function(){return o}});var e=n(279),i=n.n(e),e=n(370),u=n.n(e),e=n(817),c=n.n(e);function a(t){try{return document.execCommand(t)}catch(t){return}}var f=function(t){t=c()(t);return a("cut"),t};var l=function(t){var e,n,o,r=1 - - - - diff --git a/_static/copybutton.css b/_static/copybutton.css deleted file mode 100644 index f1916ec..0000000 --- a/_static/copybutton.css +++ /dev/null @@ -1,94 +0,0 @@ -/* Copy buttons */ -button.copybtn { - position: absolute; - display: flex; - top: .3em; - right: .3em; - width: 1.7em; - height: 1.7em; - opacity: 0; - transition: opacity 0.3s, border .3s, background-color .3s; - user-select: none; - padding: 0; - border: none; - outline: none; - border-radius: 0.4em; - /* The colors that GitHub uses */ - border: #1b1f2426 1px solid; - background-color: #f6f8fa; - color: #57606a; -} - -button.copybtn.success { - border-color: #22863a; - color: #22863a; -} - -button.copybtn svg { - stroke: currentColor; - width: 1.5em; - height: 1.5em; - padding: 0.1em; -} - -div.highlight { - position: relative; -} - -/* Show the copybutton */ -.highlight:hover button.copybtn, button.copybtn.success { - opacity: 1; -} - -.highlight button.copybtn:hover { - background-color: rgb(235, 235, 235); -} - -.highlight button.copybtn:active { - background-color: rgb(187, 187, 187); -} - -/** - * A minimal CSS-only tooltip copied from: - * https://codepen.io/mildrenben/pen/rVBrpK - * - * To use, write HTML like the following: - * - *
<p class="o-tooltip--left" data-tooltip="Hey">Short</p>
- */ - .o-tooltip--left { - position: relative; - } - - .o-tooltip--left:after { - opacity: 0; - visibility: hidden; - position: absolute; - content: attr(data-tooltip); - padding: .2em; - font-size: .8em; - left: -.2em; - background: grey; - color: white; - white-space: nowrap; - z-index: 2; - border-radius: 2px; - transform: translateX(-102%) translateY(0); - transition: opacity 0.2s cubic-bezier(0.64, 0.09, 0.08, 1), transform 0.2s cubic-bezier(0.64, 0.09, 0.08, 1); -} - -.o-tooltip--left:hover:after { - display: block; - opacity: 1; - visibility: visible; - transform: translateX(-100%) translateY(0); - transition: opacity 0.2s cubic-bezier(0.64, 0.09, 0.08, 1), transform 0.2s cubic-bezier(0.64, 0.09, 0.08, 1); - transition-delay: .5s; -} - -/* By default the copy button shouldn't show up when printing a page */ -@media print { - button.copybtn { - display: none; - } -} diff --git a/_static/copybutton.js b/_static/copybutton.js deleted file mode 100644 index 39cbaa7..0000000 --- a/_static/copybutton.js +++ /dev/null @@ -1,248 +0,0 @@ -// Localization support -const messages = { - 'en': { - 'copy': 'Copy', - 'copy_to_clipboard': 'Copy to clipboard', - 'copy_success': 'Copied!', - 'copy_failure': 'Failed to copy', - }, - 'es' : { - 'copy': 'Copiar', - 'copy_to_clipboard': 'Copiar al portapapeles', - 'copy_success': '¡Copiado!', - 'copy_failure': 'Error al copiar', - }, - 'de' : { - 'copy': 'Kopieren', - 'copy_to_clipboard': 'In die Zwischenablage kopieren', - 'copy_success': 'Kopiert!', - 'copy_failure': 'Fehler beim Kopieren', - }, - 'fr' : { - 'copy': 'Copier', - 'copy_to_clipboard': 'Copier dans le presse-papier', - 'copy_success': 'Copié !', - 'copy_failure': 'Échec de la copie', - }, - 'ru': { - 'copy': 'Скопировать', - 'copy_to_clipboard': 'Скопировать в буфер', - 'copy_success': 'Скопировано!', - 'copy_failure': 'Не удалось скопировать', - }, - 'zh-CN': { - 'copy': '复制', - 'copy_to_clipboard': '复制到剪贴板', - 'copy_success': '复制成功!', - 'copy_failure': '复制失败', - }, - 'it' : { - 'copy': 'Copiare', - 'copy_to_clipboard': 'Copiato negli appunti', - 'copy_success': 'Copiato!', - 'copy_failure': 'Errore durante la copia', - } -} - -let locale = 'en' -if( document.documentElement.lang !== undefined - && messages[document.documentElement.lang] !== undefined ) { - locale = document.documentElement.lang -} - -let doc_url_root = DOCUMENTATION_OPTIONS.URL_ROOT; -if (doc_url_root == '#') { - doc_url_root = ''; -} - -/** - * SVG files for our copy buttons - */ -let iconCheck = ` - ${messages[locale]['copy_success']} - - -` - -// If the user specified their own SVG use that, otherwise use the default -let iconCopy = ``; -if (!iconCopy) { - iconCopy = ` - ${messages[locale]['copy_to_clipboard']} - - - -` -} - -/** - * Set up copy/paste for code blocks - */ - -const runWhenDOMLoaded = cb => { - if (document.readyState != 'loading') { - cb() - } else if (document.addEventListener) { - document.addEventListener('DOMContentLoaded', cb) - } else { - document.attachEvent('onreadystatechange', function() { - if (document.readyState == 'complete') cb() - }) - } -} - -const codeCellId = index => `codecell${index}` - -// Clears selected text since ClipboardJS will select the text when copying -const clearSelection = () => { - if (window.getSelection) { - window.getSelection().removeAllRanges() - } else if (document.selection) { - document.selection.empty() - } -} - -// Changes tooltip text for a moment, then changes it back -// We want the timeout of our `success` class to be a bit shorter than the -// tooltip 
and icon change, so that we can hide the icon before changing back. -var timeoutIcon = 2000; -var timeoutSuccessClass = 1500; - -const temporarilyChangeTooltip = (el, oldText, newText) => { - el.setAttribute('data-tooltip', newText) - el.classList.add('success') - // Remove success a little bit sooner than we change the tooltip - // So that we can use CSS to hide the copybutton first - setTimeout(() => el.classList.remove('success'), timeoutSuccessClass) - setTimeout(() => el.setAttribute('data-tooltip', oldText), timeoutIcon) -} - -// Changes the copy button icon for two seconds, then changes it back -const temporarilyChangeIcon = (el) => { - el.innerHTML = iconCheck; - setTimeout(() => {el.innerHTML = iconCopy}, timeoutIcon) -} - -const addCopyButtonToCodeCells = () => { - // If ClipboardJS hasn't loaded, wait a bit and try again. This - // happens because we load ClipboardJS asynchronously. - if (window.ClipboardJS === undefined) { - setTimeout(addCopyButtonToCodeCells, 250) - return - } - - // Add copybuttons to all of our code cells - const COPYBUTTON_SELECTOR = 'div.highlight pre'; - const codeCells = document.querySelectorAll(COPYBUTTON_SELECTOR) - codeCells.forEach((codeCell, index) => { - const id = codeCellId(index) - codeCell.setAttribute('id', id) - - const clipboardButton = id => - `` - codeCell.insertAdjacentHTML('afterend', clipboardButton(id)) - }) - -function escapeRegExp(string) { - return string.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'); // $& means the whole matched string -} - -/** - * Removes excluded text from a Node. - * - * @param {Node} target Node to filter. - * @param {string} exclude CSS selector of nodes to exclude. - * @returns {DOMString} Text from `target` with text removed. - */ -function filterText(target, exclude) { - const clone = target.cloneNode(true); // clone as to not modify the live DOM - if (exclude) { - // remove excluded nodes - clone.querySelectorAll(exclude).forEach(node => node.remove()); - } - return clone.innerText; -} - -// Callback when a copy button is clicked. Will be passed the node that was clicked -// should then grab the text and replace pieces of text that shouldn't be used in output -function formatCopyText(textContent, copybuttonPromptText, isRegexp = false, onlyCopyPromptLines = true, removePrompts = true, copyEmptyLines = true, lineContinuationChar = "", hereDocDelim = "") { - var regexp; - var match; - - // Do we check for line continuation characters and "HERE-documents"? 
- var useLineCont = !!lineContinuationChar - var useHereDoc = !!hereDocDelim - - // create regexp to capture prompt and remaining line - if (isRegexp) { - regexp = new RegExp('^(' + copybuttonPromptText + ')(.*)') - } else { - regexp = new RegExp('^(' + escapeRegExp(copybuttonPromptText) + ')(.*)') - } - - const outputLines = []; - var promptFound = false; - var gotLineCont = false; - var gotHereDoc = false; - const lineGotPrompt = []; - for (const line of textContent.split('\n')) { - match = line.match(regexp) - if (match || gotLineCont || gotHereDoc) { - promptFound = regexp.test(line) - lineGotPrompt.push(promptFound) - if (removePrompts && promptFound) { - outputLines.push(match[2]) - } else { - outputLines.push(line) - } - gotLineCont = line.endsWith(lineContinuationChar) & useLineCont - if (line.includes(hereDocDelim) & useHereDoc) - gotHereDoc = !gotHereDoc - } else if (!onlyCopyPromptLines) { - outputLines.push(line) - } else if (copyEmptyLines && line.trim() === '') { - outputLines.push(line) - } - } - - // If no lines with the prompt were found then just use original lines - if (lineGotPrompt.some(v => v === true)) { - textContent = outputLines.join('\n'); - } - - // Remove a trailing newline to avoid auto-running when pasting - if (textContent.endsWith("\n")) { - textContent = textContent.slice(0, -1) - } - return textContent -} - - -var copyTargetText = (trigger) => { - var target = document.querySelector(trigger.attributes['data-clipboard-target'].value); - - // get filtered text - let exclude = '.linenos, .gp, .go'; - - let text = filterText(target, exclude); - return formatCopyText(text, '', false, true, true, true, '', '') -} - - // Initialize with a callback so we can modify the text before copy - const clipboard = new ClipboardJS('.copybtn', {text: copyTargetText}) - - // Update UI with error/success messages - clipboard.on('success', event => { - clearSelection() - temporarilyChangeTooltip(event.trigger, messages[locale]['copy'], messages[locale]['copy_success']) - temporarilyChangeIcon(event.trigger) - }) - - clipboard.on('error', event => { - temporarilyChangeTooltip(event.trigger, messages[locale]['copy'], messages[locale]['copy_failure']) - }) -} - -runWhenDOMLoaded(addCopyButtonToCodeCells) \ No newline at end of file diff --git a/_static/copybutton_funcs.js b/_static/copybutton_funcs.js deleted file mode 100644 index dbe1aaa..0000000 --- a/_static/copybutton_funcs.js +++ /dev/null @@ -1,73 +0,0 @@ -function escapeRegExp(string) { - return string.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'); // $& means the whole matched string -} - -/** - * Removes excluded text from a Node. - * - * @param {Node} target Node to filter. - * @param {string} exclude CSS selector of nodes to exclude. - * @returns {DOMString} Text from `target` with text removed. - */ -export function filterText(target, exclude) { - const clone = target.cloneNode(true); // clone as to not modify the live DOM - if (exclude) { - // remove excluded nodes - clone.querySelectorAll(exclude).forEach(node => node.remove()); - } - return clone.innerText; -} - -// Callback when a copy button is clicked. 
Will be passed the node that was clicked -// should then grab the text and replace pieces of text that shouldn't be used in output -export function formatCopyText(textContent, copybuttonPromptText, isRegexp = false, onlyCopyPromptLines = true, removePrompts = true, copyEmptyLines = true, lineContinuationChar = "", hereDocDelim = "") { - var regexp; - var match; - - // Do we check for line continuation characters and "HERE-documents"? - var useLineCont = !!lineContinuationChar - var useHereDoc = !!hereDocDelim - - // create regexp to capture prompt and remaining line - if (isRegexp) { - regexp = new RegExp('^(' + copybuttonPromptText + ')(.*)') - } else { - regexp = new RegExp('^(' + escapeRegExp(copybuttonPromptText) + ')(.*)') - } - - const outputLines = []; - var promptFound = false; - var gotLineCont = false; - var gotHereDoc = false; - const lineGotPrompt = []; - for (const line of textContent.split('\n')) { - match = line.match(regexp) - if (match || gotLineCont || gotHereDoc) { - promptFound = regexp.test(line) - lineGotPrompt.push(promptFound) - if (removePrompts && promptFound) { - outputLines.push(match[2]) - } else { - outputLines.push(line) - } - gotLineCont = line.endsWith(lineContinuationChar) & useLineCont - if (line.includes(hereDocDelim) & useHereDoc) - gotHereDoc = !gotHereDoc - } else if (!onlyCopyPromptLines) { - outputLines.push(line) - } else if (copyEmptyLines && line.trim() === '') { - outputLines.push(line) - } - } - - // If no lines with the prompt were found then just use original lines - if (lineGotPrompt.some(v => v === true)) { - textContent = outputLines.join('\n'); - } - - // Remove a trailing newline to avoid auto-running when pasting - if (textContent.endsWith("\n")) { - textContent = textContent.slice(0, -1) - } - return textContent -} diff --git a/data_analysis/HPC-module-SLEAP.html b/data_analysis/HPC-module-SLEAP.html deleted file mode 100644 index c0b3585..0000000 --- a/data_analysis/HPC-module-SLEAP.html +++ /dev/null @@ -1,1149 +0,0 @@ - - - - - - - - - - - Use the SLEAP module on the HPC cluster — HowTo - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - - - - - - - - -
-
-
-
-
- - - -
-
- -
- - - - - - - - - - - - - -
- -
- - -
-
- -
-
- -
- -
- - - - -
- -
- - -
-
- - - - - -
- -
-

Use the SLEAP module on the HPC cluster#

-
-

Warning

-

Some links within this document point to the -SWC internal wiki, -which is only accessible from within the SWC network.

-
-
- -Interpreting code blocks
-
-
-
-
-

Shell commands will be shown in code blocks like this -(with the $ sign indicating the shell prompt):

-
$ echo "Hello world!"
-
-
-

Similarly, Python code blocks will appear with the >>> sign indicating the -Python interpreter prompt:

-
>>> print("Hello world!")
-
-
-

The expected outputs of both Shell and Python commands will be shown without -any prompt:

-
Hello world!
-
-
-
-
-

Abbreviations#

- - - - - - - - - - - - - - - - - - - - - - - -

Acronym

Meaning

SLEAP

Social LEAP Estimates Animal Poses

SWC

Sainsbury Wellcome Centre

HPC

High Performance Computing

SLURM

Simple Linux Utility for Resource Management

GUI

Graphical User Interface

-
-
-

Prerequisites#

-
-

Access to the HPC cluster#

-

Verify that you can access HPC gateway node (typing your <SWC-PASSWORD> both times when prompted):

-
$ ssh <SWC-USERNAME>@ssh.swc.ucl.ac.uk
-$ ssh hpc-gw1
-
-
-

To learn more about accessing the HPC via SSH, see the relevant how-to guide.

-
-
-

Access to the SLEAP module#

-

Once you are on the HPC gateway node, SLEAP should be listed among the available modules when you run module avail:

-
$ module avail
-...
-SLEAP/2023-03-13
-SLEAP/2023-08-01
-...
-
-
-
    -
  • SLEAP/2023-03-13 corresponds to sleap v.1.2.9

  • -
  • SLEAP/2023-08-01 corresponds to sleap v.1.3.1

  • -
-

We recommend always using the latest version, which is the one loaded by default -when you run module load SLEAP. If you want to load a specific version, -you can do so by typing the full module name, -including the date e.g. module load SLEAP/2023-03-13.

-

If a module has been successfully loaded, it will be listed when you run module list, -along with other modules it may depend on:

-
$ module list
-Currently Loaded Modulefiles:
- 1) cuda/11.8   2) SLEAP/2023-08-01
-
-
-

If you have troubles with loading the SLEAP module, -see this guide’s Troubleshooting section.

-
-
-

Install SLEAP on your local PC/laptop#

-

While you can delegate the GPU-intensive work to the HPC cluster, -you will need to use the SLEAP GUI for some steps, such as labelling frames. -Thus, you also need to install SLEAP on your local PC/laptop.

-

We recommend following the official SLEAP installation guide. If you already have conda installed, you may skip the mamba installation steps and opt for installing the libmamba-solver for conda:

-
$ conda install -n base conda-libmamba-solver
-$ conda config --set solver libmamba
-
-
-

This will get you the much faster dependency resolution that mamba provides, without having to install mamba itself. -From conda version 23.10 onwards (released in November 2023), libmamba-solver is anyway the default.

-

After that, you can follow the rest of the SLEAP installation guide, substituting conda for mamba in the relevant commands.

On Windows or Linux (systems with an NVIDIA GPU, hence the nvidia channel):
$ conda create -y -n sleap -c conda-forge -c nvidia -c sleap -c anaconda sleap=1.3.1
On macOS (including Apple Silicon), where the nvidia channel is not needed:
$ conda create -y -n sleap -c conda-forge -c anaconda -c sleap sleap=1.3.1
You may exchange sleap=1.3.1 for other versions. To be on the safe side, ensure that your local installation version matches (or is at least close to) the one installed in the cluster module.
Mount the SWC filesystem on your local PC/laptop
The rest of this guide assumes that you have mounted the SWC filesystem on your local PC/laptop. If you have not done so, please follow the relevant instructions on the SWC internal wiki.
We will also assume that the data you are working with are stored in a ceph or winstor directory to which you have access. In the rest of this guide, we will use the path /ceph/scratch/neuroinformatics-dropoff/SLEAP_HPC_test_data, which contains a SLEAP project for test purposes. You should replace this with the path to your own data.
Data storage location matters
The cluster has fast access to data stored on the ceph and winstor filesystems. If your data is stored elsewhere, make sure to transfer it to ceph or winstor before running the job. You can use tools such as rsync to copy data from your local machine to ceph via an ssh connection. For example:
$ rsync -avz <LOCAL-DIR> <SWC-USERNAME>@ssh.swc.ucl.ac.uk:/ceph/scratch/neuroinformatics-dropoff/SLEAP_HPC_test_data
Model training
This will consist of two parts: preparing a training job (on your local SLEAP installation) and running a training job (on the HPC cluster’s SLEAP module). Some evaluation metrics for the trained models can be viewed via the SLEAP GUI on your local SLEAP installation.
Prepare the training job
Follow the SLEAP instructions for Creating a Project and Initial Labelling. Ensure that the project file (e.g. labels.v001.slp) is saved in the mounted SWC filesystem (as opposed to your local filesystem).
Next, follow the instructions in Remote Training, i.e. Predict -> Run Training… -> Export Training Job Package….
  • For selecting the right configuration parameters, see Configuring Models and Troubleshooting Workflows
  • Set the Predict On parameter to nothing. Remote training and inference (prediction) are easiest to run separately on the HPC Cluster. Also unselect Visualize Predictions During Training in training settings, if it’s enabled by default.
  • If you are working with a camera view from above or below (as opposed to a side view), set the Rotation Min Angle and Rotation Max Angle to -180 and 180 respectively in the Augmentation section.
  • Make sure to save the exported training job package (e.g. labels.v001.slp.training_job.zip) in the mounted SWC filesystem, for example, in the same directory as the project file.
  • Unzip the training job package. This will create a folder with the same name (minus the .zip extension). This folder contains everything needed to run the training job on the HPC cluster. An example unzip command is shown after this list.
Run the training job
Log in to the HPC cluster as described above.
$ ssh <SWC-USERNAME>@ssh.swc.ucl.ac.uk
$ ssh hpc-gw1
Navigate to the training job folder (replace with your own path) and list its contents:
$ cd /ceph/scratch/neuroinformatics-dropoff/SLEAP_HPC_test_data
$ cd labels.v001.slp.training_job
$ ls -1
centered_instance.json
centroid.json
inference-script.sh
jobs.yaml
labels.v001.pkg.slp
labels.v001.slp.predictions.slp
train_slurm.sh
swc-hpc-pose-estimation
train-script.sh
There should be a train-script.sh file created by SLEAP, which already contains the commands to run the training. You can see the contents of the file by running cat train-script.sh:
labels.v001.slp.training_job/train-script.sh

#!/bin/bash
sleap-train centroid.json labels.v001.pkg.slp
sleap-train centered_instance.json labels.v001.pkg.slp
The precise commands will depend on the model configuration you chose in SLEAP. Here we see two separate training calls, one for the ‘centroid’ and another for the ‘centered_instance’ model. That’s because in this example we have chosen the ‘Top-Down’ configuration, which consists of two neural networks - the first for isolating the animal instances (by finding their centroids) and the second for predicting all the body parts per instance.
[Image: Top-Down model configuration]
More on ‘Top-Down’ vs ‘Bottom-Up’ models
Although the ‘Top-Down’ configuration was designed with multiple animals in mind, it can also be used for single-animal videos. It makes sense to use it for videos where the animal occupies a relatively small portion of the frame - see Troubleshooting Workflows for more info.
Next you need to create a SLURM batch script, which will schedule the training job on the HPC cluster. Create a new file called train_slurm.sh (you can do this in the terminal with nano/vim or in a text editor of your choice on your local PC/laptop). Here we create the script in the same folder as the training job, but you can save it anywhere you want, or even keep track of it with git.
$ nano train_slurm.sh
An example is provided below, followed by explanations.
train_slurm.sh
#!/bin/bash

#SBATCH -J slp_train # job name
#SBATCH -p gpu # partition (queue)
#SBATCH -N 1 # number of nodes
#SBATCH --mem 16G # memory pool for all cores
#SBATCH -n 4 # number of cores
#SBATCH -t 0-06:00 # time (D-HH:MM)
#SBATCH --gres gpu:1 # request 1 GPU (of any kind)
#SBATCH -o slurm.%x.%N.%j.out # STDOUT
#SBATCH -e slurm.%x.%N.%j.err # STDERR
#SBATCH --mail-type=ALL
#SBATCH --mail-user=user@domain.com

# Load the SLEAP module
module load SLEAP

# Define directories for SLEAP project and exported training job
SLP_DIR=/ceph/scratch/neuroinformatics-dropoff/SLEAP_HPC_test_data
SLP_JOB_NAME=labels.v001.slp.training_job
SLP_JOB_DIR=$SLP_DIR/$SLP_JOB_NAME

# Go to the job directory
cd $SLP_JOB_DIR

# Run the training script generated by SLEAP
./train-script.sh
In nano, you can save the file by pressing Ctrl+O and exit by pressing Ctrl+X.
Explanation of the batch script
  • The #SBATCH lines are SLURM directives. They specify the resources needed for the job, such as the number of nodes, CPUs, memory, etc. A primer on the most useful SLURM arguments is provided in this how-to guide. For more information see the SLURM documentation.
  • The # lines are comments. They are not executed by SLURM, but they are useful for explaining the script to your future self and others.
  • The module load SLEAP line loads the latest SLEAP module and any other modules it may depend on.
  • The cd line changes the working directory to the training job folder. This is necessary because the train-script.sh file contains relative paths to the model configuration and the project file.
  • The ./train-script.sh line runs the training job (executes the contained commands).
Warning
Before submitting the job, ensure that you have permission to execute both the batch script and the training script generated by SLEAP. You can make these files executable by running in the terminal:
$ chmod +x train-script.sh
$ chmod +x train_slurm.sh
If the scripts are not in your working directory, you will need to specify their full paths:
$ chmod +x /path/to/train-script.sh
$ chmod +x /path/to/train_slurm.sh
Now you can submit the batch script by running the following command (in the same directory as the script):
$ sbatch train_slurm.sh
Submitted batch job 3445652
You may monitor the progress of the job in various ways:
View the status of the queued/running jobs with squeue:
$ squeue --me
JOBID    PARTITION  NAME       USER      ST  TIME   NODES  NODELIST(REASON)
3445652  gpu        slp_train  sirmpila  R   23:11  1      gpu-sr670-20
View the status of running/completed jobs with sacct:
$ sacct
JobID           JobName  Partition    Account  AllocCPUS      State ExitCode
------------ ---------- ---------- ---------- ---------- ---------- --------
3445652      slp_train        gpu     swc-ac          2  COMPLETED      0:0
3445652.bat+      batch                swc-ac          2  COMPLETED      0:0
Run sacct with some more helpful arguments. For example, you can view jobs from the last 24 hours, displaying the time elapsed and the peak memory usage in KB (MaxRSS):
$ sacct \
  --starttime $(date -d '24 hours ago' +%Y-%m-%dT%H:%M:%S) \
  --endtime $(date +%Y-%m-%dT%H:%M:%S) \
  --format=JobID,JobName,Partition,State,Start,Elapsed,MaxRSS

JobID           JobName  Partition      State               Start    Elapsed     MaxRSS
------------ ---------- ---------- ---------- ------------------- ---------- ----------
4043595       slp_infer        gpu     FAILED 2023-10-10T18:14:31   00:00:35
4043595.bat+      batch                FAILED 2023-10-10T18:14:31   00:00:35    271104K
4043603       slp_infer        gpu     FAILED 2023-10-10T18:27:32   00:01:37
4043603.bat+      batch                FAILED 2023-10-10T18:27:32   00:01:37    423476K
4043611       slp_infer        gpu    PENDING             Unknown   00:00:00
View the contents of standard output and error (the node name and job ID will differ in each case):
$ cat slurm.slp_train.gpu-sr670-20.3445652.out
$ cat slurm.slp_train.gpu-sr670-20.3445652.err
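While the job is still running you can also follow the output live with tail -f (press Ctrl+C to stop following; the filename shown here is an assumption derived from the slurm.%x.%N.%j pattern set in the batch script):

$ tail -f slurm.slp_train.gpu-sr670-20.3445652.out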
Evaluate the trained models
Upon successful completion of the training job, a models folder will have been created in the training job directory. It contains one subfolder per training run (by default prefixed with the date and time of the run).
$ cd /ceph/scratch/neuroinformatics-dropoff/SLEAP_HPC_test_data
$ cd labels.v001.slp.training_job
$ cd models
$ ls -1
230509_141357.centered_instance
230509_141357.centroid
Each subfolder holds the trained model files (e.g. best_model.h5), their configurations (training_config.json) and some evaluation metrics.
$ cd 230509_141357.centered_instance
$ ls -1
best_model.h5
initial_config.json
labels_gt.train.slp
labels_gt.val.slp
labels_pr.train.slp
labels_pr.val.slp
metrics.train.npz
metrics.val.npz
training_config.json
training_log.csv
The SLEAP GUI on your local machine can be used to quickly evaluate the trained models.
  • Select Predict -> Evaluation Metrics for Trained Models…
  • Click on Add Trained Model(s) and select the folder containing the model(s) you want to evaluate.
  • You can view the basic metrics in the table shown, or view a more detailed report (including plots) by clicking View Metrics.
Model inference
By inference, we mean using a trained model to predict the labels on new frames/videos. SLEAP provides the sleap-track command line utility for running inference on a single video or a folder of videos.
Below is an example SLURM batch script that contains a sleap-track call.
infer_slurm.sh

#!/bin/bash

#SBATCH -J slp_infer # job name
#SBATCH -p gpu # partition
#SBATCH -N 1 # number of nodes
#SBATCH --mem 16G # memory pool for all cores
#SBATCH -n 4 # number of cores
#SBATCH -t 0-02:00 # time (D-HH:MM)
#SBATCH --gres gpu:1 # request 1 GPU (of any kind)
#SBATCH -o slurm.%x.%N.%j.out # write STDOUT
#SBATCH -e slurm.%x.%N.%j.err # write STDERR
#SBATCH --mail-type=ALL
#SBATCH --mail-user=user@domain.com

# Load the SLEAP module
module load SLEAP

# Define directories for SLEAP project and exported training job
SLP_DIR=/ceph/scratch/neuroinformatics-dropoff/SLEAP_HPC_test_data
VIDEO_DIR=$SLP_DIR/videos
SLP_JOB_NAME=labels.v001.slp.training_job
SLP_JOB_DIR=$SLP_DIR/$SLP_JOB_NAME

# Go to the job directory
cd $SLP_JOB_DIR
# Make a directory to store the predictions
mkdir -p predictions

# Run the inference command
sleap-track $VIDEO_DIR/M708149_EPM_20200317_165049331-converted.mp4 \
    -m $SLP_JOB_DIR/models/231010_164307.centroid/training_config.json \
    -m $SLP_JOB_DIR/models/231010_164307.centered_instance/training_config.json \
    --gpu auto \
    --tracking.tracker simple \
    --tracking.similarity centroid \
    --tracking.post_connect_single_breaks 1 \
    -o predictions/labels.v001.slp.predictions.slp \
    --verbosity json \
    --no-empty-frames
The script is very similar to the training script, with the following differences:
  • The time limit -t is set lower, since inference is normally faster than training. This will, however, depend on the size of the video and the number of models used.
  • The ./train-script.sh line is replaced by the sleap-track command.
  • The \ character is used to split the long sleap-track command into multiple lines for readability. It is not necessary if the command is written on a single line.
Explanation of the sleap-track arguments
Some important command line arguments are explained below. You can view a full list of the available arguments by running sleap-track --help.
  • The first argument is the path to the video file to be processed.
  • The -m option is used to specify the path to the model configuration file(s) to be used for inference. In this example we use the two models that were trained above.
  • The --gpu option is used to specify the GPU to be used for inference. The auto value will automatically select the GPU with the highest percentage of available memory (of the GPUs that are available on the machine/node).
  • The options starting with --tracking specify parameters used for tracking the detected instances (animals) across frames. See SLEAP’s guide on tracking methods for more info.
  • The -o option is used to specify the path to the output file containing the predictions.
  • The above script will predict all the frames in the video. You may select specific frames via the --frames option. For example: --frames 1-50 or --frames 1,3,5,7,9.
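As mentioned above, sleap-track can also be pointed at a folder of videos. One possible way to do that - a sketch under the assumption that all videos are .mp4 files and use the same models as in the example - is to replace the single sleap-track call in the batch script with a loop:

# Process every .mp4 video in $VIDEO_DIR, one after the other
for video in "$VIDEO_DIR"/*.mp4; do
    sleap-track "$video" \
        -m $SLP_JOB_DIR/models/231010_164307.centroid/training_config.json \
        -m $SLP_JOB_DIR/models/231010_164307.centered_instance/training_config.json \
        --gpu auto \
        -o "predictions/$(basename "$video" .mp4).predictions.slp"
done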
You can submit and monitor the inference job in the same way as the training job.
$ sbatch infer_slurm.sh
$ squeue --me
Upon completion, a labels.v001.slp.predictions.slp file will have been created in the predictions folder within the job directory.
You can use the SLEAP GUI on your local machine to load and view the predictions: File -> Open Project… -> select the labels.v001.slp.predictions.slp file.
The training-inference cycle
Now that you have some predictions, you can keep improving your models by repeating the training-inference cycle. The basic steps are:
  • Manually correct some of the predictions: see Prediction-assisted labeling
  • Merge corrected labels into the initial training set: see Merging guide
  • Save the merged training set as labels.v002.slp
  • Export a new training job labels.v002.slp.training_job (you may reuse the training configurations from v001)
  • Repeat the training-inference cycle until satisfied
Troubleshooting
Problems with the SLEAP module
In this section, we will describe how to test that the SLEAP module is loaded correctly for you and that it can use the available GPUs.
Log in to the HPC cluster as described above.
Start an interactive job on a GPU node. This step is necessary because we need to test the module’s access to the GPU.
$ srun -p fast --gres=gpu:1 --pty bash -i
Explain the above command
  • -p fast requests a node from the ‘fast’ partition. This refers to the queue of nodes with a 3-hour time limit. They are meant for short jobs, such as testing.
  • --gres=gpu:1 requests 1 GPU of any kind
  • --pty is short for ‘pseudo-terminal’.
  • The -i stands for ‘interactive’.
Taken together, the above command will start an interactive bash terminal session on a node of the ‘fast’ partition, equipped with 1 GPU.
First, let’s verify that you are indeed on a node equipped with a functional GPU, by typing nvidia-smi:
$ nvidia-smi
Wed Sep 27 10:34:35 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.125.06   Driver Version: 525.125.06   CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:41:00.0 Off |                  N/A |
|  0%   42C    P8    22W / 240W |      1MiB /  8192MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
Your output should look similar to the above. You will be able to see the GPU name, temperature, memory usage, etc. If you see an error message instead (even though you are on a GPU node), please contact the SWC Scientific Computing team.
Next, load the SLEAP module.
$ module load SLEAP
Loading SLEAP/2023-08-01
  Loading requirement: cuda/11.8
To verify that the module was loaded successfully:
$ module list
Currently Loaded Modulefiles:
 1) SLEAP/2023-08-01
You can essentially think of the module as a centrally installed conda environment. When it is loaded, you should be using a particular Python executable. You can verify this by running:
$ which python
/ceph/apps/ubuntu-20/packages/SLEAP/2023-08-01/bin/python
Finally, we will verify that the sleap python package can be imported and can ‘see’ the GPU. We will mostly just follow the relevant SLEAP instructions. First, start a Python interpreter:
$ python
Next, run the following Python commands:
Warning
The import sleap command may take some time to run (more than a minute). This is normal. Subsequent imports should be faster.
>>> import sleap

>>> sleap.versions()
SLEAP: 1.3.1
TensorFlow: 2.8.4
Numpy: 1.21.6
Python: 3.7.12
OS: Linux-5.4.0-109-generic-x86_64-with-debian-bullseye-sid

>>> sleap.system_summary()
GPUs: 1/1 available
  Device: /physical_device:GPU:0
         Available: True
        Initialized: False
     Memory growth: None

>>> import tensorflow as tf

>>> print(tf.config.list_physical_devices('GPU'))
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]

>>> tf.constant("Hello world!")
<tf.Tensor: shape=(), dtype=string, numpy=b'Hello world!'>
If all is as expected, you can exit the Python interpreter, and then exit the GPU node:
>>> exit()
$ exit
If you encounter trouble using the SLEAP module, contact Niko Sirmpilatze of the SWC Neuroinformatics Unit.
To completely exit the HPC cluster, you will need to log out of the SSH session twice:
$ logout
$ logout
See Set up SSH for the SWC HPC cluster for more information.
\ No newline at end of file
diff --git a/data_analysis/index.html b/data_analysis/index.html
index d0a0623..5231c93 100644
--- a/data_analysis/index.html
+++ b/data_analysis/index.html
Data Analysis
Guides related to the analysis of neuroscientific data, spanning a wide range of data types such as electrophysiology, behaviour, calcium imaging, histology, etc. The focus may be on the use of specific software tools, or on more general analysis tasks and concepts.
@@ -414,11 +397,11 @@
 Data Analysis
 next
-Use the SLEAP module on the HPC cluster
+Programming
diff --git a/genindex.html b/genindex.html
index f83ce6f..ca9991f 100644
--- a/genindex.html
+++ b/genindex.html
diff --git a/index.html b/index.html
index 2c873e7..c452a06 100644
--- a/index.html
+++ b/index.html
diff --git a/objects.inv b/objects.inv
index 8626f8e..ea0f3f0 100644
Binary files a/objects.inv and b/objects.inv differ
diff --git a/open_science/Data-sharing.html b/open_science/Data-sharing.html
index b3f579d..e154967 100644
--- a/open_science/Data-sharing.html
+++ b/open_science/Data-sharing.html
diff --git a/open_science/Licensing.html b/open_science/Licensing.html
index 6ef8524..751ee2d 100644
--- a/open_science/Licensing.html
+++ b/open_science/Licensing.html
diff --git a/open_science/index.html b/open_science/index.html
index 451962e..52ad4d8 100644
--- a/open_science/index.html
+++ b/open_science/index.html
diff --git a/output.json b/output.json
index 4f4abd7..6cd4790 100644
--- a/output.json
+++ b/output.json
@@ -1,70 +1,52 @@
 {"filename": "_static/swc-wiki-warning.md", "lineno": 2, "status": "ignored", "code": 0, "uri": "https://wiki.ucl.ac.uk/display/SI/SWC+Intranet", "info": ""}
-{"filename": "data_analysis/HPC-module-SLEAP.md", "lineno": 94, "status": "ignored", "code": 0, "uri": "https://wiki.ucl.ac.uk/display/SSC/SWC+Storage+Platform+Overview", "info": ""}
-{"filename": "data_analysis/HPC-module-SLEAP.md", "lineno": 107, "status": "ignored", "code": 0, "uri": "https://linux.die.net/man/1/rsync", "info": ""}
 {"filename": "open_science/Data-sharing.md", "lineno": 75, "status": "ignored", "code": 0, "uri": "https://neuromorpho.org/", "info": ""}
 {"filename": "open_science/Licensing.md", "lineno": 7, "status": "ignored", "code": 0, "uri": "https://www.uclb.com/", "info": ""}
-{"filename": "programming/SLURM-arguments.md", "lineno": 105, "status": "ignored", "code": 0, "uri": "https://wiki.ucl.ac.uk/display/SSC/CPU+and+GPU+Platform+architecture", "info": ""}
-{"filename": "programming/SSH-SWC-cluster.md", "lineno": 24, "status": "ignored", "code": 0, "uri": "https://wiki.ucl.ac.uk/display/SSC/High+Performance+Computing", "info": ""}
-{"filename": "programming/SSH-SWC-cluster.md", "lineno": 24, "status": "ignored", "code": 0, "uri": "https://wiki.ucl.ac.uk/display/SSC/Logging+into+the+Cluster", "info": ""}
-{"filename": "data_analysis/HPC-module-SLEAP.md", "lineno": 26, "status": "unchecked", "code": 0, "uri": "#ssh-cluster-target", "info": ""}
-{"filename": "data_analysis/HPC-module-SLEAP.md", "lineno": 597, "status": "unchecked", "code": 0, "uri": "", "info": ""}
+{"filename": "programming/Mount-ceph-ubuntu-temp.md", "lineno": 54, "status": "ignored", "code": 0, "uri": "https://support.zadarastorage.com/hc/en-us/articles/213024986-How-to-Mount-a-SMB-Share-in-Ubuntu", "info": ""}
+{"filename": "programming/SSH-SWC-cluster.md", "lineno": 23, "status": "ignored", "code": 0, "uri": "https://wiki.ucl.ac.uk/display/SSC/High+Performance+Computing", "info": ""}
+{"filename": "programming/SSH-SWC-cluster.md", "lineno": 23, "status": "ignored", "code": 0, "uri": "https://wiki.ucl.ac.uk/display/SSC/Logging+into+the+Cluster", "info": ""}
+{"filename": "index.md", "lineno": 15, "status": "unchecked", "code": 0, "uri": "", "info": ""}
 {"filename": "open_science/Data-sharing.md", "lineno": 26, "status": "unchecked", "code": 0, "uri": "#licensing-target", "info": ""}
-{"filename": "data_analysis/HPC-module-SLEAP.md", "lineno": 238, "status": "unchecked", "code": 0, "uri": "#slurm-arguments-target", "info": ""}
-{"filename": "data_analysis/HPC-module-SLEAP.md", "lineno": 64, "status": "working", "code": 0, "uri": "https://sleap.ai/installation.html", "info": ""}
-{"filename": "data_analysis/HPC-module-SLEAP.md", "lineno": 73, "status": "working", "code": 0, "uri": "https://sleap.ai/installation.html#conda-package", "info": ""}
-{"filename": "open_science/Data-sharing.md", "lineno": 42, "status": "redirected", "code": 301, "uri": "http://www.brainimagelibrary.org/", "info": "https://www.brainimagelibrary.org/"}
-{"filename": "data_analysis/HPC-module-SLEAP.md", "lineno": 70, "status": "working", "code": 0, "uri": "https://conda.org/blog/2023-11-06-conda-23-10-0-release/", "info": ""}
-{"filename": "open_science/Licensing.md", "lineno": 21, "status": "working", "code": 0, "uri": "https://choosealicense.com/", "info": ""}
-{"filename": "programming/SSH-SWC-cluster.md", "lineno": 72, "status": "working", "code": 0, "uri": "https://code.visualstudio.com/", "info": ""}
+{"filename": "programming/Mount-ceph-ubuntu-temp.md", "lineno": 3, "status": "unchecked", "code": 0, "uri": "#target-mount-ceph-ubuntu-perm", "info": ""}
+{"filename": "programming/Mount-ceph-ubuntu.md", "lineno": 5, "status": "unchecked", "code": 0, "uri": "#target-mount-ceph-ubuntu-temp", "info": ""}
 {"filename": "programming/Cookiecutter-cruft.md", "lineno": 3, "status": "working", "code": 0, "uri": "https://cruft.github.io/cruft/", "info": ""}
+{"filename": "open_science/Licensing.md", "lineno": 21, "status": "working", "code": 0, "uri": "https://choosealicense.com/", "info": ""}
+{"filename": "open_science/Data-sharing.md", "lineno": 42, "status": "redirected", "code": 301, "uri": "http://www.brainimagelibrary.org/", "info": "https://www.brainimagelibrary.org/"}
 {"filename": "open_science/Data-sharing.md", "lineno": 43, "status": "working", "code": 0, "uri": "https://dandiarchive.org/", "info": ""}
+{"filename": "programming/SSH-SWC-cluster.md", "lineno": 71, "status": "working", "code": 0, "uri": "https://code.visualstudio.com/", "info": ""}
 {"filename": "open_science/Data-sharing.md", "lineno": 28, "status": "working", "code": 0, "uri": "https://docs.github.com/en/repositories/archiving-a-github-repository/referencing-and-citing-content", "info": ""}
 {"filename": "open_science/Data-sharing.md", "lineno": 73, "status": "working", "code": 0, "uri": "https://crcns.org/", "info": ""}
+{"filename": "open_science/Data-sharing.md", "lineno": 76, "status": "working", "code": 0, "uri": "http://ccdb.ucsd.edu/home", "info": ""}
 {"filename": "open_science/Data-sharing.md", "lineno": 63, "status": "redirected", "code": 301, "uri": "https://datadryad.org", "info": "https://datadryad.org/stash"}
 {"filename": "open_science/Data-sharing.md", "lineno": 64, "status": "working", "code": 0, "uri": "https://gin.g-node.org/", "info": ""}
-{"filename": "programming/SSH-SWC-cluster.md", "lineno": 34, "status": "working", "code": 0, "uri": "https://gitforwindows.org/", "info": ""}
-{"filename": "open_science/Data-sharing.md", "lineno": 74, "status": "working", "code": 0, "uri": "https://gin.g-node.org/BrainGlobe/atlases", "info": ""}
-{"filename": "open_science/Data-sharing.md", "lineno": 65, "status": "redirected", "code": 301, "uri": "https://ebrains.eu/service/share-data", "info": "https://www.ebrains.eu/data/share-data/"}
 {"filename": "open_science/Data-sharing.md", "lineno": 37, "status": "working", "code": 0, "uri": "https://gin.g-node.org/SainsburyWellcomeCentre", "info": ""}
+{"filename": "programming/SSH-SWC-cluster.md", "lineno": 33, "status": "working", "code": 0, "uri": "https://gitforwindows.org/", "info": ""}
 {"filename": "open_science/Data-sharing.md", "lineno": 26, "status": "working", "code": 0, "uri": "https://github.com/SainsburyWellcomeCentre", "info": ""}
 {"filename": "programming/Cookiecutter-cruft.md", "lineno": 3, "status": "working", "code": 0, "uri": "https://github.com/cruft/cruft", "info": ""}
+{"filename": "open_science/Data-sharing.md", "lineno": 74, "status": "working", "code": 0, "uri": "https://gin.g-node.org/BrainGlobe/atlases", "info": ""}
+{"filename": "programming/SSH-SWC-cluster.md", "lineno": 265, "status": "working", "code": 0, "uri": "https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-ssh", "info": ""}
 {"filename": "open_science/Data-sharing.md", "lineno": 71, "status": "working", "code": 0, "uri": "https://nemoarchive.org/", "info": ""}
-{"filename": "programming/SSH-SWC-cluster.md", "lineno": 266, "status": "working", "code": 0, "uri": "https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-ssh", "info": ""}
 {"filename": "open_science/Data-sharing.md", "lineno": 49, "status": "working", "code": 0, "uri": "https://neuroblueprint.neuroinformatics.dev/", "info": ""}
 {"filename": "index.md", "lineno": 5, "status": "working", "code": 0, "uri": "https://neuroinformatics.dev", "info": ""}
-{"filename": "data_analysis/HPC-module-SLEAP.md", "lineno": 589, "status": "working", "code": 0, "uri": "https://neuroinformatics.dev/", "info": ""}
-{"filename": "open_science/Data-sharing.md", "lineno": 72, "status": "working", "code": 0, "uri": "https://openneuro.org/", "info": ""}
 {"filename": "open_science/Licensing.md", "lineno": 51, "status": "working", "code": 0, "uri": "https://joinup.ec.europa.eu/collection/eupl/solution/joinup-licensing-assistant/jla-compatibility-checker", "info": ""}
-{"filename": "open_science/Licensing.md", "lineno": 13, "status": "redirected", "code": 301, "uri": "https://opensource.org/licenses/BSD-3-Clause", "info": "https://opensource.org/license/bsd-3-clause/"}
-{"filename": "data_analysis/HPC-module-SLEAP.md", "lineno": 177, "status": "working", "code": 0, "uri": "https://sleap.ai/_images/topdown_approach.jpg", "info": ""}
-{"filename": "data_analysis/HPC-module-SLEAP.md", "lineno": 131, "status": "working", "code": 0, "uri": "https://sleap.ai/guides/choosing-models.html#", "info": ""}
-{"filename": "open_science/Data-sharing.md", "lineno": 34, "status": "working", "code": 0, "uri": "https://rdr.ucl.ac.uk/The_Sainsbury_Wellcome_Centre", "info": ""}
-{"filename": "data_analysis/HPC-module-SLEAP.md", "lineno": 459, "status": "working", "code": 0, "uri": "https://sleap.ai/guides/merging.html", "info": ""}
+{"filename": "open_science/Data-sharing.md", "lineno": 65, "status": "redirected", "code": 301, "uri": "https://ebrains.eu/service/share-data", "info": "https://www.ebrains.eu/data/share-data/"}
+{"filename": "open_science/Licensing.md", "lineno": 7, "status": "working", "code": 0, "uri": "https://neuroinformatics.dev/", "info": ""}
+{"filename": "open_science/Data-sharing.md", "lineno": 72, "status": "working", "code": 0, "uri": "https://openneuro.org/", "info": ""}
 {"filename": "open_science/Data-sharing.md", "lineno": 77, "status": "redirected", "code": 301, "uri": "https://senselab.med.yale.edu/ModelDB/", "info": "https://modeldb.science/"}
-{"filename": "data_analysis/HPC-module-SLEAP.md", "lineno": 129, "status": "working", "code": 0, "uri": "https://sleap.ai/guides/remote.html#remote-training", "info": ""}
-{"filename": "data_analysis/HPC-module-SLEAP.md", "lineno": 542, "status": "working", "code": 0, "uri": "https://sleap.ai/installation.html#testing-that-things-are-working", "info": ""}
-{"filename": "open_science/Data-sharing.md", "lineno": 76, "status": "working", "code": 0, "uri": "http://ccdb.ucsd.edu/home", "info": ""}
-{"filename": "data_analysis/HPC-module-SLEAP.md", "lineno": 131, "status": "working", "code": 0, "uri": "https://sleap.ai/guides/troubleshooting-workflows.html", "info": ""}
-{"filename": "data_analysis/HPC-module-SLEAP.md", "lineno": 458, "status": "working", "code": 0, "uri": "https://sleap.ai/tutorials/assisted-labeling.html", "info": ""}
-{"filename": "data_analysis/HPC-module-SLEAP.md", "lineno": 440, "status": "working", "code": 0, "uri": "https://sleap.ai/guides/proofreading.html#tracking-method-details", "info": ""}
-{"filename": "data_analysis/HPC-module-SLEAP.md", "lineno": 124, "status": "working", "code": 0, "uri": "https://sleap.ai/tutorials/new-project.html", "info": ""}
-{"filename": "data_analysis/HPC-module-SLEAP.md", "lineno": 169, "status": "working", "code": 0, "uri": "https://sleap.ai/tutorials/initial-training.html#training-options", "info": ""}
-{"filename": "data_analysis/HPC-module-SLEAP.md", "lineno": 124, "status": "working", "code": 0, "uri": "https://sleap.ai/tutorials/initial-labeling.html", "info": ""}
+{"filename": "open_science/Data-sharing.md", "lineno": 34, "status": "working", "code": 0, "uri": "https://rdr.ucl.ac.uk/The_Sainsbury_Wellcome_Centre", "info": ""}
 {"filename": "programming/Troubleshooting.md", "lineno": 16, "status": "working", "code": 0, "uri": "https://support.github.com/contact?tags=rr-forks&subject=Detach%20Fork&flow=detach_fork", "info": ""}
-{"filename": "data_analysis/HPC-module-SLEAP.md", "lineno": 288, "status": "working", "code": 0, "uri": "https://slurm.schedmd.com/squeue.html", "info": ""}
-{"filename": "data_analysis/HPC-module-SLEAP.md", "lineno": 299, "status": "working", "code": 0, "uri": "https://slurm.schedmd.com/sacct.html", "info": ""}
-{"filename": "data_analysis/HPC-module-SLEAP.md", "lineno": 238, "status": "working", "code": 0, "uri": "https://slurm.schedmd.com/sbatch.html", "info": ""}
+{"filename": "open_science/Licensing.md", "lineno": 13, "status": "redirected", "code": 301, "uri": "https://opensource.org/licenses/BSD-3-Clause", "info": "https://opensource.org/license/bsd-3-clause/"}
 {"filename": "open_science/Licensing.md", "lineno": 21, "status": "redirected", "code": 301, "uri": "https://tldrlegal.com/", "info": "https://www.tldrlegal.com/"}
+{"filename": "programming/SSH-SWC-cluster.md", "lineno": 272, "status": "working", "code": 0, "uri": "https://www.jetbrains.com/help/pycharm/remote-development-overview.html", "info": ""}
+{"filename": "programming/SSH-SWC-cluster.md", "lineno": 272, "status": "working", "code": 0, "uri": "https://www.jetbrains.com/pycharm/", "info": ""}
 {"filename": "open_science/Data-sharing.md", "lineno": 28, "status": "working", "code": 0, "uri": "https://www.doi.org/", "info": ""}
-{"filename": "open_science/Data-sharing.md", "lineno": 68, "status": "working", "code": 0, "uri": "https://idr.openmicroscopy.org/", "info": ""}
-{"filename": "programming/SSH-SWC-cluster.md", "lineno": 273, "status": "working", "code": 0, "uri": "https://www.jetbrains.com/help/pycharm/remote-development-overview.html", "info": ""}
-{"filename": "programming/SSH-SWC-cluster.md", "lineno": 273, "status": "working", "code": 0, "uri": "https://www.jetbrains.com/pycharm/", "info": ""}
-{"filename": "open_science/Licensing.md", "lineno": 54, "status": "working", "code": 0, "uri": "https://www.imperial.ac.uk/research-and-innovation/support-for-staff/scholarly-communication/research-data-management/sharing-data/research-software/", "info": ""}
-{"filename": "open_science/Data-sharing.md", "lineno": 44, "status": "working", "code": 0, "uri": "https://www.ebi.ac.uk/bioimage-archive/", "info": ""}
 {"filename": "open_science/Data-sharing.md", "lineno": 49, "status": "working", "code": 0, "uri": "https://www.nwb.org/", "info": ""}
+{"filename": "open_science/Data-sharing.md", "lineno": 44, "status": "working", "code": 0, "uri": "https://www.ebi.ac.uk/bioimage-archive/", "info": ""}
+{"filename": "open_science/Licensing.md", "lineno": 54, "status": "working", "code": 0, "uri": "https://www.imperial.ac.uk/research-and-innovation/support-for-staff/scholarly-communication/research-data-management/sharing-data/research-software/", "info": ""}
+{"filename": "open_science/Licensing.md", "lineno": 7, "status": "working", "code": 0, "uri": "https://www.ucl.ac.uk/", "info": ""}
+{"filename": "index.md", "lineno": 5, "status": "working", "code": 0, "uri": "https://www.sainsburywellcome.org/web/", "info": ""}
 {"filename": "open_science/Data-sharing.md", "lineno": 66, "status": "working", "code": 0, "uri": "https://www.v2.opensourcebrain.org/", "info": ""}
 {"filename": "open_science/Data-sharing.md", "lineno": 54, "status": "unchecked", "code": 0, "uri": "mailto:adam.tyson@ucl.ac.uk", "info": ""}
-{"filename": "index.md", "lineno": 5, "status": "working", "code": 0, "uri": "https://www.sainsburywellcome.org/web/", "info": ""}
-{"filename": "open_science/Licensing.md", "lineno": 7, "status": "working", "code": 0, "uri": "https://www.ucl.ac.uk/", "info": ""}
-{"filename": "open_science/Data-sharing.md", "lineno": 62, "status": "working", "code": 0, "uri": "https://zenodo.org/", "info": ""}
+{"filename": "programming/Mount-ceph-ubuntu-temp.md", "lineno": 53, "status": "working", "code": 0, "uri": "https://www.rz.uni-kiel.de/en/hints-howtos/connecting-a-network-share/Linux/through-temporary-permanent-mounting/mount-network-drive-under-linux-temporarily-permanently", "info": ""}
 {"filename": "index.md", "lineno": 5, "status": "working", "code": 0, "uri": "https://www.ucl.ac.uk/gatsby/gatsby-computational-neuroscience-unit", "info": ""}
+{"filename": "open_science/Data-sharing.md", "lineno": 62, "status": "working", "code": 0, "uri": "https://zenodo.org/", "info": ""}
+{"filename": "open_science/Data-sharing.md", "lineno": 68, "status": "working", "code": 0, "uri": "https://idr.openmicroscopy.org/", "info": ""}
diff --git a/output.txt b/output.txt
index 43f0f40..62d367d 100644
--- a/output.txt
+++ b/output.txt
@@ -1,6 +1,6 @@
 open_science/Data-sharing.md:42: [redirected permanently] http://www.brainimagelibrary.org/ to https://www.brainimagelibrary.org/
 open_science/Data-sharing.md:63: [redirected permanently] https://datadryad.org to https://datadryad.org/stash
 open_science/Data-sharing.md:65: [redirected permanently] https://ebrains.eu/service/share-data to https://www.ebrains.eu/data/share-data/
-open_science/Licensing.md:13: [redirected permanently] https://opensource.org/licenses/BSD-3-Clause to https://opensource.org/license/bsd-3-clause/
 open_science/Data-sharing.md:77: [redirected permanently] https://senselab.med.yale.edu/ModelDB/ to https://modeldb.science/
+open_science/Licensing.md:13: [redirected permanently] https://opensource.org/licenses/BSD-3-Clause to https://opensource.org/license/bsd-3-clause/
 open_science/Licensing.md:21: [redirected permanently] https://tldrlegal.com/ to https://www.tldrlegal.com/
diff --git a/programming/Cookiecutter-cruft.html b/programming/Cookiecutter-cruft.html
index ae3df7f..d5f081a 100644
--- a/programming/Cookiecutter-cruft.html
+++ b/programming/Cookiecutter-cruft.html