
5. MinKNOW and RAMPART

Marit Hetland edited this page Feb 18, 2021 · 9 revisions

Specify basecalling and demultiplexing in MinKNOW

  • Basecalling: High accuracy mode (HAC)
  • Demultiplexing:
    • Make sure the correct barcodes for your run have been selected
    • "require_barcodes_both_ends" = ON.
  • No other parameters should be changed from their default settings.

Reason: With the current ARTIC protocol it is essential to demultiplex using strict parameters to ensure barcodes are present at each end of the fragment (https://artic.network/ncov-2019/ncov2019-bioinformatics-sop.html).

Install and run RAMPART for a real-time overview

Detailed information for how to install and use RAMPART with Docker can be found here: https://hub.docker.com/r/ontresearch/artic_rampart

To run RAMPART on the GridION as sequencing is ongoing, open a terminal, navigate to your run directory (where the fast5_pass and fastq_pass folders are) and type:

docker pull ontresearch/artic_rampart:latest #This will pull the latest image (version) of the program
docker run -it -e LOCAL_USER_ID=`id -u $USER` --mount type=bind,source="$(pwd)",target=/data -p 3000:3000 -p 3001:3001 ontresearch/artic_rampart:latest
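RAMPART serves its web interface on port 3000, so once the container is running you can open http://localhost:3000 in a browser. If the page does not load right away, the container may still be starting; a small sketch that polls until a port accepts connections (the wait_for_port helper is hypothetical, not part of RAMPART, and uses bash's /dev/tcp):

```shell
# Hypothetical helper: wait up to TIMEOUT seconds for HOST:PORT to accept
# TCP connections, returning 0 once it does and 1 on timeout.
wait_for_port() {
  host=$1; port=$2; timeout=${3:-30}; i=0
  while [ "$i" -lt "$timeout" ]; do
    # Attempt a connection in a subshell; the fd closes when the subshell exits.
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      return 0
    fi
    sleep 1
    i=$((i+1))
  done
  return 1
}

# Example: wait_for_port localhost 3000 60  # then open the browser
```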

If you are using a proxy, you also need to configure it for the Docker daemon:

sudo mkdir -p /etc/systemd/system/docker.service.d
sudo nano /etc/systemd/system/docker.service.d/http-proxy.conf
#Add the following lines to the file (with your proxy's IP and port)
[Service]
Environment="HTTP_PROXY=http://167.8.123.167:3225"
Environment="HTTPS_PROXY=http://167.8.123.167:3225"
#Reload the systemd configuration and restart the Docker daemon
sudo systemctl daemon-reload
sudo systemctl restart docker

See also: https://docs.docker.com/config/daemon/systemd/

RAMPART error

If you get this error:

docker: Error response from daemon: driver failed programming external connectivity on endpoint blissful_booth (*): Bind for 0.0.0.0:3001 failed: port is already allocated.

Try the following:

  1. Close your browser (e.g. Firefox)
  2. In the terminal type docker container ls
  3. You will get a list where you can see the CONTAINER ID for the IMAGE ontresearch/artic_rampart:latest, e.g. 12a32e8929ef
  4. Run: docker stop <CONTAINER ID>, e.g. docker stop 12a32e8929ef
  5. Rerun the docker command for running RAMPART
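Steps 2–4 can also be collapsed into a single helper (a sketch assuming the container was started from the ontresearch/artic_rampart:latest image; the name stop_rampart is hypothetical):

```shell
# Hypothetical helper: stop every container started from the artic_rampart image.
# docker ps -q prints only container IDs; --filter ancestor matches by image.
stop_rampart() {
  ids=$(docker ps -q --filter ancestor=ontresearch/artic_rampart:latest)
  [ -n "$ids" ] && docker stop $ids
}
# Usage: run stop_rampart, then rerun the docker run command above.
```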

Copy files from ONT device to External Hard Drive

Copying files from the GridION or MinIT to an External Hard Drive can take a long time. To speed this up, we do the following:

  1. Set your SOURCE and DEST directories:
SOURCE=/path/to/<run_name> 
DEST=/path/to/destination/<run_name>
  2. Tar and gzip the fast5_pass and fastq_pass directories in chunks and copy the chunks to the disk:
#On your ONT device, cd to the <run_name> directory where the fast5_pass and fastq_pass directories are:
cd $SOURCE
mkdir backup ; cd backup
#In separate terminal windows, run:
tar -cvzf - ../fast5_pass/barcode0[0-9]/ | split --bytes=500MB - fast5_pass_barcode0.backup.tar.gz. ; cp fast5_pass_barcode0.backup.tar.gz.* ${DEST}/
tar -cvzf - ../fast5_pass/barcode1[0-9]/ | split --bytes=500MB - fast5_pass_barcode1.backup.tar.gz. ; cp fast5_pass_barcode1.backup.tar.gz.* ${DEST}/
tar -cvzf - ../fast5_pass/barcode2[0-9]/ | split --bytes=500MB - fast5_pass_barcode2.backup.tar.gz. ; cp fast5_pass_barcode2.backup.tar.gz.* ${DEST}/ 
tar -cvzf - ../fastq_pass/ | split --bytes=500MB - fastq_pass.backup.tar.gz. ; cp fastq_pass.backup.tar.gz.* ${DEST}/
#Include more lines if you have more than 24 barcodes

cp ../*.txt ../*.md ../*.pdf ../*.csv ../*.tsv ${DEST}/
  3. Check that the md5sums of the files have not changed during transfer:
find $SOURCE -type f -exec md5sum {} \; | tee source.md5
find $DEST -type f -exec md5sum {} \; | tee dest.md5
diff  <(sort source.md5 | cut -d" " -f1) <(sort dest.md5 | cut -d" " -f1) #There should be no output
  4. Copy the files to the computer where you are running the pipeline
  5. Unpack the tar files:
cat fast5_pass_barcode0.backup.tar.gz.* | tar xzvf -
cat fast5_pass_barcode1.backup.tar.gz.* | tar xzvf -
cat fast5_pass_barcode2.backup.tar.gz.* | tar xzvf -
cat fastq_pass.backup.tar.gz.* | tar xzvf -
  6. Check (as in step 3) that the files were not changed during transfer
  7. Now you can run the pipeline :)
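The tar/split/cat round trip above can be sanity-checked end to end on throwaway data before trusting it with a real run (the paths, file contents, and 1 MB chunk size below are illustrative):

```shell
# Self-contained check of the tar|split pack and cat|tar unpack round trip,
# using a temporary directory instead of real run data.
workdir=$(mktemp -d)
mkdir -p "$workdir/fast5_pass/barcode01"
printf 'read data' > "$workdir/fast5_pass/barcode01/reads.fast5"

cd "$workdir"
# Pack and split into 1 MB chunks (the real commands use 500MB)
tar -czf - fast5_pass/ | split --bytes=1MB - demo.backup.tar.gz.

# Reassemble the chunks and unpack into a fresh directory
mkdir restore && cd restore
cat ../demo.backup.tar.gz.* | tar -xzf -

# The restored file must be byte-identical to the original
cmp fast5_pass/barcode01/reads.fast5 ../fast5_pass/barcode01/reads.fast5 && echo "round trip OK"
```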

Please let us know if you have a quicker/better way to do this!