Launch a new instance configured with Ubuntu 22.04 LTS and a GPU, clone this repository, and run the following:
sudo ./install_cuda.sh
sudo reboot
# verify
nvidia-smi
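If the driver installed correctly, nvidia-smi lists the GPU. Optionally, the standard queries below print just the device name and driver version and confirm the kernel module is loaded:
# optional: device name, driver version, and kernel module check
nvidia-smi --query-gpu=name,driver_version --format=csv,noheader
lsmod | grep nvidia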
Required for computing q-values. Follow the instructions here, then install the 'qvalue' package with:
if (!require("BiocManager", quietly = TRUE))
install.packages("BiocManager")
BiocManager::install("qvalue")
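To confirm the package is visible to R, a quick check along these lines prints the installed qvalue version (the exact version depends on your Bioconductor release):
# verify
Rscript -e 'library(qvalue); packageVersion("qvalue")'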
Using a conda environment is recommended. The tensorqtl_env.yml configuration contains all required packages, including torch and tensorqtl:
mamba env create -f tensorqtl_env.yml
conda activate tensorqtl
# verify
python -c "import torch; print(torch.__version__); print('CUDA available: {} ({})'.format(torch.cuda.is_available(), torch.cuda.get_device_name(torch.cuda.current_device())))"
# this should print something like
# 2.1.2+cu121
# CUDA available: True (Tesla P100-PCIE-16GB)
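As an optional end-to-end check, tensorqtl can also be invoked as a module from the command line; the call below only prints the usage text (this assumes the package installed from tensorqtl_env.yml exposes the python3 -m tensorqtl entry point described in the tensorqtl documentation):
# optional: confirm the tensorqtl command-line interface is available
python3 -m tensorqtl --help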
sudo apt install -y ruby
mkdir -p ~/bin
curl -Lo ~/bin/rmate https://raw.githubusercontent.com/textmate/rmate/master/bin/rmate
chmod a+x ~/bin/rmate
echo 'export RMATE_PORT=${rmate_port}' >> ~/.bashrc
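rmate sends files from the instance back to an editor on your local machine over a reverse SSH tunnel. A typical workflow looks like the sketch below, assuming the same port is forwarded on both ends, an rmate-compatible listener (e.g. TextMate or a VS Code remote-editing extension) is running locally, and <user>@<instance-address> stands in for your connection details:
# on your local machine: open an SSH session with a reverse tunnel for rmate
ssh -R ${rmate_port}:localhost:${rmate_port} <user>@<instance-address>
# on the instance: open a remote file in the local editor
~/bin/rmate path/to/file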