Commit

Merge pull request #36 from nasa-nccs-hpda/develop
Develop
jordancaraballo authored Feb 26, 2024
2 parents 98fa6f3 + 4df51b6 commit f6a5741
Showing 17 changed files with 1,040 additions and 6 deletions.
3 changes: 3 additions & 0 deletions examples/satvision-giant/README.md
@@ -0,0 +1,3 @@
# SatVision-Giant (SwinV2-Giant)

`sbatch run_satvision_pretrain.sh <path to container> <path to config>`
@@ -0,0 +1,29 @@
MODEL:
  TYPE: swinv2
  NAME: mim_satvision_pretrain-giant
  DROP_PATH_RATE: 0.1
  SWINV2:
    IN_CHANS: 7
    EMBED_DIM: 512
    DEPTHS: [ 2, 2, 42, 2 ]
    NUM_HEADS: [ 4, 8, 16, 32 ]
    WINDOW_SIZE: 12
    NORM_PERIOD: 6

DATA:
  IMG_SIZE: 192
  MASK_PATCH_SIZE: 32
  MASK_RATIO: 0.6
TRAIN:
  EPOCHS: 200
  WARMUP_EPOCHS: 10
  BASE_LR: 1e-4
  WARMUP_LR: 5e-7
  WEIGHT_DECAY: 0.05
  LR_SCHEDULER:
    NAME: 'multistep'
    GAMMA: 0.1
    MULTISTEPS: [700,]
PRINT_FREQ: 100
SAVE_FREQ: 5
TAG: mim_pretrain_swinv2_g_satvision_192_window12__800ep
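
For context on how a config like this is consumed: pytorch-caney descends from the Swin/SimMIM codebase, which reads these YAML files through yacs-style CfgNode defaults. The sketch below is illustrative only; the defaults shown are a small subset, and merge_from_file against the full YAML above would require every key to already exist in the defaults.

```python
from yacs.config import CfgNode as CN

# Illustrative defaults (a small subset of what the real config tree declares).
_C = CN()
_C.MODEL = CN()
_C.MODEL.TYPE = 'swinv2'
_C.MODEL.NAME = ''
_C.MODEL.DROP_PATH_RATE = 0.0
_C.DATA = CN()
_C.DATA.IMG_SIZE = 192
_C.DATA.MASK_RATIO = 0.6

cfg = _C.clone()
# cfg.merge_from_file('path/to/config.yaml') would apply the YAML above;
# merge_from_list shows the same override mechanism without needing the file.
cfg.merge_from_list(['MODEL.NAME', 'mim_satvision_pretrain-giant',
                     'DATA.MASK_RATIO', 0.6])
cfg.freeze()
print(cfg.MODEL.NAME, cfg.DATA.IMG_SIZE)
```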
24 changes: 24 additions & 0 deletions examples/satvision-giant/run_satvision_pretrain.sh
@@ -0,0 +1,24 @@
#!/bin/bash

#SBATCH -J deepspeed-satvision-giant
#SBATCH -t 3-00:00:00
#SBATCH -G 4
#SBATCH -N 1

module load singularity

# $1: path to the Singularity container; $2: path to the pretraining config.
srun -n 1 singularity exec \
    --env PYTHONPATH="$PWD:$PWD/pytorch-caney" \
    --nv -B /lscratch,/explore,/panfs \
    "$1" \
    deepspeed \
    pytorch-caney/pytorch_caney/pipelines/pretraining/mim_deepspeed.py \
    --cfg "$2" \
    --dataset MODIS \
    --data-paths /explore/nobackup/projects/ilab/data/satvision/pretraining/training_* \
    --batch-size 32 \
    --output . \
    --enable-amp



3 changes: 3 additions & 0 deletions examples/satvision-huge/README.md
@@ -0,0 +1,3 @@
# SatVision-Huge (SwinV2-Huge)

`sbatch run_satvision_pretrain.sh <path to container> <path to config>`
@@ -0,0 +1,29 @@
MODEL:
  TYPE: swinv2
  NAME: mim_satvision_pretrain-huge
  DROP_PATH_RATE: 0.1
  SWINV2:
    IN_CHANS: 7
    EMBED_DIM: 352
    DEPTHS: [ 2, 2, 18, 2 ]
    NUM_HEADS: [ 4, 8, 16, 32 ]
    WINDOW_SIZE: 12
    NORM_PERIOD: 6

DATA:
  IMG_SIZE: 192
  MASK_PATCH_SIZE: 32
  MASK_RATIO: 0.6
TRAIN:
  EPOCHS: 200
  WARMUP_EPOCHS: 10
  BASE_LR: 1e-4
  WARMUP_LR: 5e-7
  WEIGHT_DECAY: 0.05
  LR_SCHEDULER:
    NAME: 'multistep'
    GAMMA: 0.1
    MULTISTEPS: [700,]
PRINT_FREQ: 100
SAVE_FREQ: 5
TAG: mim_pretrain_swinv2_h_satvision_192_window12__800ep
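
The DATA block above drives SimMIM-style masked image modeling: the 192 px input is divided into 32 px mask units, and 60% of them are hidden from the encoder. Below is a minimal sketch of that masking logic, as an illustration rather than pytorch-caney's actual mask generator.

```python
import numpy as np

def random_mask(img_size=192, mask_patch_size=32, mask_ratio=0.6):
    """Illustrative SimMIM-style mask: 1 marks a hidden 32 px unit."""
    grid = img_size // mask_patch_size                  # 6 x 6 mask units
    num_units = grid * grid                             # 36 units total
    num_masked = int(np.ceil(num_units * mask_ratio))   # ~22 units hidden
    mask = np.zeros(num_units, dtype=int)
    mask[np.random.permutation(num_units)[:num_masked]] = 1
    return mask.reshape(grid, grid)

print(random_mask())
```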
22 changes: 22 additions & 0 deletions examples/satvision-huge/run_satvision_pretrain.sh
@@ -0,0 +1,22 @@
#!/bin/bash

#SBATCH -J deepspeed-satvision-huge
#SBATCH -t 3-00:00:00
#SBATCH -G 4
#SBATCH -N 1

module load singularity

# $1: path to the Singularity container; $2: path to the pretraining config.
srun -n 1 singularity exec \
    --env PYTHONPATH="$PWD:$PWD/pytorch-caney" \
    --nv -B /lscratch,/explore,/panfs \
    "$1" \
    deepspeed \
    pytorch-caney/pytorch_caney/pipelines/pretraining/mim_deepspeed.py \
    --cfg "$2" \
    --dataset MODIS \
    --data-paths /explore/nobackup/projects/ilab/data/satvision/pretraining/training_* \
    --batch-size 32 \
    --output . \
    --enable-amp

35 changes: 35 additions & 0 deletions examples/satvision/README.md
@@ -0,0 +1,35 @@
# SatVision Examples

The following is an example of how to run SatVision finetuning. It is only an example and does not preclude other decoder choices or other ways of using the encoder.

## SatVision Finetune Land Cover Five Class

The script run_satvision_finetune_lc_fiveclass.sh shows how to run finetuning of a five-class land cover model using a simple UNet decoder. The dependencies of this pipeline are as follows:

- the finetune.py script (pytorch-caney/pytorch_caney/pipelines/finetuning/finetune.py), which implements the basic finetuning training loop. An example invocation:

```bash
export PYTHONPATH=$PWD:$PWD/pytorch-caney
export NGPUS=8

torchrun --nproc_per_node $NGPUS \
pytorch-caney/pytorch_caney/pipelines/finetuning/finetune.py \
--cfg finetune_satvision_base_landcover5class_192_window12_100ep.yaml \
--pretrained /explore/nobackup/people/cssprad1/projects/satnet/code/development/masked_image_modeling/development/models/simmim_satnet_pretrain_pretrain/simmim_pretrain__satnet_swinv2_base__img192_window12__800ep_v3_no_norm/ckpt_epoch_800.pth \
--dataset MODISLC9 \
--data-paths /explore/nobackup/projects/ilab/data/satvision/finetuning/h18v04/labels_9classes_224 \
--batch-size 4 \
--output /explore/nobackup/people/cssprad1/projects/satnet/code/development/cleanup/finetune/models \
--enable-amp
```

Note the following about these parameters:

- the pretrained model path is given by --pretrained (see the checkpoint sketch after this list)
- the data paths are given by --data-paths, which expects a directory containing one subdirectory for images and one for labels; this layout can be changed if inputs and targets are stored together
- the dataloader is selected from the script via the --dataset option, which simply calls build_finetune_dataloaders from pytorch-caney
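
A quick way to sanity-check the checkpoint passed to --pretrained is to inspect its keys before launching a run. This is a hedged sketch: it assumes the Swin/SimMIM-style convention of nesting weights under a 'model' key, and the path is illustrative.

```python
import torch

# Hypothetical path; point this at your own ckpt_epoch_800.pth.
ckpt = torch.load('/path/to/ckpt_epoch_800.pth', map_location='cpu')
# Swin/SimMIM-style checkpoints usually nest weights under 'model' (assumption).
state_dict = ckpt.get('model', ckpt)
print(f'{len(state_dict)} tensors; sample keys: {list(state_dict)[:3]}')
```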

This is simply a guide on how to run a finetuning pipeline. For additional insight into building other types of decoders, the build_model function in pytorch_caney/models/build.py shows how the different encoders and decoders are combined; a rough sketch of the idea follows.
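
As a hedged illustration of that encoder/decoder pairing, not the actual build_model implementation: a minimal segmentation head that could sit on top of the SwinV2-Base encoder's final feature map, assumed here to be 1024 channels on a 6 x 6 grid for 192 px inputs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleSegHead(nn.Module):
    """Illustrative decoder: project encoder features to class logits and
    upsample to the input resolution. Not the UNet used in the example."""

    def __init__(self, in_channels: int = 1024, num_classes: int = 5):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Conv2d(in_channels, 256, kernel_size=3, padding=1),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, num_classes, kernel_size=1),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        logits = self.proj(feats)
        return F.interpolate(logits, size=(192, 192),
                             mode='bilinear', align_corners=False)

# Smoke test on a fake final-stage feature map (batch 2, 1024 ch, 6 x 6 grid).
head = SimpleSegHead()
print(head(torch.randn(2, 1024, 6, 6)).shape)  # torch.Size([2, 5, 192, 192])
```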