From ec0858f1aeb9e0f6221aa366c09159f8fcf5400c Mon Sep 17 00:00:00 2001
From: Gianni De Fabritiis
Date: Mon, 3 Oct 2022 08:53:38 -0400
Subject: [PATCH 1/2] change obscure naming

---
 README.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index ba1c97d45..803fdabd9 100644
--- a/README.md
+++ b/README.md
@@ -40,7 +40,7 @@ url={https://openreview.net/forum?id=zNHzqZ9wrRB}
 Specifying training arguments can either be done via a configuration yaml file or through command line arguments directly. An example configuration file for a TorchMD Graph Network can be found in [examples/](https://github.com/compsciencelab/torchmd-net/blob/main/examples). For an example on how to train the network on the QM9 dataset, see [examples/](https://github.com/compsciencelab/torchmd-net/blob/main/examples). GPUs can be selected by their index by listing the device IDs (coming from `nvidia-smi`) in the `CUDA_VISIBLE_DEVICES` environment variable. Otherwise, the argument `--ngpus` can be used to select the number of GPUs to train on (-1 uses all available GPUs or the ones specified in `CUDA_VISIBLE_DEVICES`).
 ```
 mkdir output
-CUDA_VISIBLE_DEVICES=0 tmn-train --conf torchmd-net/examples/ET-QM9.yaml --log-dir output/
+CUDA_VISIBLE_DEVICES=0 torchmd-train --conf torchmd-net/examples/ET-QM9.yaml --log-dir output/
 ```
 
 ## Pretrained models
@@ -60,7 +60,7 @@ As an example, have a look at `torchmdnet.priors.Atomref`.
 
 ## Multi-Node Training
 
-In order to train models on multiple nodes some environment variables have to be set, which provide all necessary information to PyTorch Lightning. In the following we provide an example bash script to start training on two machines with two GPUs each. The script has to be started once on each node. Once `tmn-train` is started on all nodes, a network connection between the nodes will be established using NCCL.
+In order to train models on multiple nodes some environment variables have to be set, which provide all necessary information to PyTorch Lightning. In the following we provide an example bash script to start training on two machines with two GPUs each. The script has to be started once on each node. Once `torchmd-train` is started on all nodes, a network connection between the nodes will be established using NCCL.
 
 In addition to the environment variables the argument `--num-nodes` has to be specified with the number of nodes involved during training.
 
@@ -70,7 +70,7 @@ export MASTER_ADDR=hostname1
 export MASTER_PORT=12910
 
 mkdir -p output
-CUDA_VISIBLE_DEVICES=0,1 tmn-train --conf torchmd-net/examples/ET-QM9.yaml.yaml --num-nodes 2 --log-dir output/
+CUDA_VISIBLE_DEVICES=0,1 torchmd-train --conf torchmd-net/examples/ET-QM9.yaml --num-nodes 2 --log-dir output/
 ```
 
 - `NODE_RANK` : Integer indicating the node index. Must be `0` for the main node and incremented by one for each additional node.

From f3bae73ad75b2f8d6a76c7e9fe084fe3b9b489c2 Mon Sep 17 00:00:00 2001
From: Gianni De Fabritiis
Date: Mon, 3 Oct 2022 08:54:06 -0400
Subject: [PATCH 2/2] Change obscure name

---
 examples/README.md | 2 +-
 setup.py           | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/examples/README.md b/examples/README.md
index 0ff9e1c2d..e8e2f20ba 100644
--- a/examples/README.md
+++ b/examples/README.md
@@ -3,7 +3,7 @@
 ## Training
 We provide three example config files for the ET for training on QM9, MD17 and ANI1 respectively. To train on a QM9 target other than `energy_U0`, change the parameter `dataset_arg` in the QM9 config file. Changing the MD17 molecule to train on works analogously. To train an ET from scratch you can use the following code from the torchmd-net directory:
 
 ```bash
-CUDA_VISIBLE_DEVICES=0,1 tmn-train --conf examples/ET-{QM9,MD17,ANI1}.yaml
+CUDA_VISIBLE_DEVICES=0,1 torchmd-train --conf examples/ET-{QM9,MD17,ANI1}.yaml
 ```
 Use the `CUDA_VISIBLE_DEVICES` environment variable to select which and how many GPUs you want to train on. The example above selects GPUs with indices 0 and 1. The training code will want to save checkpoints and config files in a directory called `logs/`, which you can change either in the config .yaml file or as an additional command line argument: `--log-dir path/to/log-dir`.
diff --git a/setup.py b/setup.py
index 8f4e0fc3f..3ace81d3f 100644
--- a/setup.py
+++ b/setup.py
@@ -15,5 +15,5 @@
     name="torchmd-net",
     version=version,
     packages=find_packages(),
-    entry_points={"console_scripts": ["tmn-train = torchmdnet.scripts.train:main"]},
+    entry_points={"console_scripts": ["torchmd-train = torchmdnet.scripts.train:main"]},
 )
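
For reference, the multi-node pieces touched by the first patch combine into a single launch script. The following is a minimal sketch, not part of the patch series: the hostname, port, and `torchmd-net/` checkout path are placeholders carried over from the README example, and `NODE_RANK` must be edited per node before running.

```bash
#!/bin/bash
# Sketch of the two-node, two-GPUs-per-node launch described in the README hunks above.
# Run this script once on each node; only NODE_RANK differs between nodes.
export NODE_RANK=0            # 0 on the main node, 1 on the second node
export MASTER_ADDR=hostname1  # reachable address of the main node (placeholder)
export MASTER_PORT=12910      # any free port, must be identical on all nodes

mkdir -p output
# PyTorch Lightning establishes the NCCL connection once torchmd-train is
# running on every node; --num-nodes must match the number of participating nodes.
CUDA_VISIBLE_DEVICES=0,1 torchmd-train \
    --conf torchmd-net/examples/ET-QM9.yaml \
    --num-nodes 2 \
    --log-dir output/
```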