# Setup CUDA drivers and PyTorch on GCP

Launch a new instance configured with Ubuntu 22.04 LTS and a GPU, clone this repository, and run the following:
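For reference, an instance like this can be created with the `gcloud` CLI. This is only a sketch: the instance name, machine type, zone, accelerator type, and disk size below are assumptions to be adjusted to your project and quota.

```bash
# Hypothetical example: create an Ubuntu 22.04 LTS instance with a single GPU.
# Name, machine type, zone, GPU type, and disk size are placeholders.
gcloud compute instances create tensorqtl-gpu \
    --zone=us-central1-c \
    --machine-type=n1-standard-8 \
    --accelerator=type=nvidia-tesla-p100,count=1 \
    --maintenance-policy=TERMINATE \
    --image-family=ubuntu-2204-lts \
    --image-project=ubuntu-os-cloud \
    --boot-disk-size=200GB
```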

## Install CUDA

```bash
sudo ./install_cuda.sh
sudo reboot
# after the reboot, verify that the driver is loaded
nvidia-smi
```
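Optionally, a more compact check of the driver and GPU (using standard `nvidia-smi` query options):

```bash
# optional: print GPU name, driver version, and total memory as CSV
nvidia-smi --query-gpu=name,driver_version,memory.total --format=csv
```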

## Install R

R is required for computing q-values. Follow the installation instructions here, then install the `qvalue` package from Bioconductor with

if (!require("BiocManager", quietly = TRUE))
    install.packages("BiocManager")
BiocManager::install("qvalue")
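To confirm the package loads, you can run an optional sanity check from the shell; the p-values below are just random numbers, not real data:

```bash
# optional: check that qvalue loads and runs on dummy p-values
Rscript -e 'library(qvalue); p <- runif(1000); print(summary(qvalue(p)))'
```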

## Install Python 3

Using a conda environment is recommended. The `tensorqtl_env.yml` configuration contains all required packages, including `torch` and `tensorqtl`.

```bash
mamba env create -f tensorqtl_env.yml
conda activate tensorqtl

# verify
python -c "import torch; print(torch.__version__); print('CUDA available: {} ({})'.format(torch.cuda.is_available(), torch.cuda.get_device_name(torch.cuda.current_device())))"

# this should print something like
# 2.1.2+cu121
# CUDA available: True (Tesla P100-PCIE-16GB)
```
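You can also confirm that `tensorqtl` itself imports from the new environment; this minimal check only verifies that the module resolves:

```bash
# optional: confirm the tensorqtl module is importable and show where it resolves
python -c "import tensorqtl; print(tensorqtl.__file__)"
```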

## Install rmate (optional)

```bash
sudo apt install -y ruby
mkdir -p ~/bin
curl -Lo ~/bin/rmate https://raw.githubusercontent.com/textmate/rmate/master/bin/rmate
chmod a+x ~/bin/rmate
echo 'export RMATE_PORT=${rmate_port}' >> ~/.bashrc
```
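To use rmate, the chosen port has to be forwarded from the instance back to the editor on your local machine via a reverse SSH tunnel. A sketch, where `user` and `instance` are placeholders and rmate's default port is 52698:

```bash
# on your local machine: open an SSH session with a reverse tunnel for rmate
ssh -R "${rmate_port}:localhost:${rmate_port}" user@instance

# on the instance: open a file in the editor listening on that port locally
~/bin/rmate somefile.txt
```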