# Merge pull request #36 from nasa-nccs-hpda/develop

Showing 17 changed files with 1,040 additions and 6 deletions.
## New file: SatVision-Giant example README (+3)

```markdown
# SatVision-Giant (SwinV2-Giant)

`sbatch run_satvision_pretrain <path to container> <path to config>`
```
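For concreteness, a hypothetical submission might look like the sketch below; the container path is an assumption for illustration, while the config path matches the file added in this commit. The same pattern applies to the SatVision-Huge example further down, with its own config.

```bash
# Hypothetical invocation; replace the .sif path with your actual container.
sbatch run_satvision_pretrain \
    /path/to/pytorch-caney-container.sif \
    examples/satvision-giant/mim_pretrain_swinv2_satvision_giant_192_window12_200ep.yaml
```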
## New file: examples/satvision-giant/mim_pretrain_swinv2_satvision_giant_192_window12_200ep.yaml (+29)

```yaml
MODEL:
  TYPE: swinv2
  NAME: mim_satvision_pretrain-giant
  DROP_PATH_RATE: 0.1
  SWINV2:
    IN_CHANS: 7
    EMBED_DIM: 512
    DEPTHS: [ 2, 2, 42, 2 ]
    NUM_HEADS: [ 4, 8, 16, 32 ]
    WINDOW_SIZE: 12
    NORM_PERIOD: 6

DATA:
  IMG_SIZE: 192
  MASK_PATCH_SIZE: 32
  MASK_RATIO: 0.6
TRAIN:
  EPOCHS: 200
  WARMUP_EPOCHS: 10
  BASE_LR: 1e-4
  WARMUP_LR: 5e-7
  WEIGHT_DECAY: 0.05
  LR_SCHEDULER:
    NAME: 'multistep'
    GAMMA: 0.1
    MULTISTEPS: [700,]
PRINT_FREQ: 100
SAVE_FREQ: 5
TAG: mim_pretrain_swinv2_g_satvision_192_window12__800ep
```
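Since SwinV2 doubles the channel width at each of its four stages, EMBED_DIM: 512 implies per-stage widths of 512, 1024, 2048, and 4096; a quick sanity check:

```bash
# Stage widths implied by EMBED_DIM=512 (standard SwinV2 width doubling).
python3 -c 'print([512 * 2**i for i in range(4)])'   # [512, 1024, 2048, 4096]
```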
## New file: SatVision-Giant SLURM pretraining script (+24)

```bash
#!/bin/bash

#SBATCH -J deepspeed-satvision-giant
#SBATCH -t 3-00:00:00
#SBATCH -G 4
#SBATCH -N 1

module load singularity

# $1: path to the Singularity container image
# $2: path to the pretraining config (YAML)
srun -n 1 singularity exec \
    --env PYTHONPATH="$PWD:$PWD/pytorch-caney" \
    --nv -B /lscratch,/explore,/panfs \
    "$1" \
    deepspeed \
    pytorch-caney/pytorch_caney/pipelines/pretraining/mim_deepspeed.py \
    --cfg "$2" \
    --dataset MODIS \
    --data-paths /explore/nobackup/projects/ilab/data/satvision/pretraining/training_* \
    --batch-size 32 \
    --output . \
    --enable-amp
```
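Once submitted, the job can be tracked with standard SLURM tooling; a minimal sketch, assuming CONTAINER and CONFIG hold the two positional arguments and that your cluster uses SLURM's default slurm-<jobid>.out output naming:

```bash
# --parsable makes sbatch print only the job ID, so it can be captured.
JOBID=$(sbatch --parsable run_satvision_pretrain "$CONTAINER" "$CONFIG")

squeue -j "$JOBID"            # queue / run state of the job
tail -f "slurm-${JOBID}.out"  # follow live training output (default name)
```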
## New file: SatVision-Huge example README (+3)

```markdown
# SatVision-Huge (SwinV2-Huge)

`sbatch run_satvision_pretrain <path to container> <path to config>`
```
## New file: examples/satvision-huge/mim_pretrain_swinv2_satvision_huge_192_window12_200ep.yaml (+29)

```yaml
MODEL:
  TYPE: swinv2
  NAME: mim_satvision_pretrain-huge
  DROP_PATH_RATE: 0.1
  SWINV2:
    IN_CHANS: 7
    EMBED_DIM: 352
    DEPTHS: [ 2, 2, 18, 2 ]
    NUM_HEADS: [ 4, 8, 16, 32 ]
    WINDOW_SIZE: 12
    NORM_PERIOD: 6

DATA:
  IMG_SIZE: 192
  MASK_PATCH_SIZE: 32
  MASK_RATIO: 0.6
TRAIN:
  EPOCHS: 200
  WARMUP_EPOCHS: 10
  BASE_LR: 1e-4
  WARMUP_LR: 5e-7
  WEIGHT_DECAY: 0.05
  LR_SCHEDULER:
    NAME: 'multistep'
    GAMMA: 0.1
    MULTISTEPS: [700,]
PRINT_FREQ: 100
SAVE_FREQ: 5
TAG: mim_pretrain_swinv2_h_satvision_192_window12__800ep
```
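The huge config differs from the giant one mainly in model width and depth (EMBED_DIM 352 vs. 512, third-stage depth 18 vs. 42), plus the run NAME and TAG; a quick way to confirm the delta between the two committed files:

```bash
# Show exactly where the two example configs diverge.
diff examples/satvision-giant/mim_pretrain_swinv2_satvision_giant_192_window12_200ep.yaml \
     examples/satvision-huge/mim_pretrain_swinv2_satvision_huge_192_window12_200ep.yaml
```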
## New file: SatVision-Huge SLURM pretraining script (+22)

```bash
#!/bin/bash

#SBATCH -J deepspeed-satvision-huge
#SBATCH -t 3-00:00:00
#SBATCH -G 4
#SBATCH -N 1

module load singularity

# $1: path to the Singularity container image
# $2: path to the pretraining config (YAML)
srun -n 1 singularity exec \
    --env PYTHONPATH="$PWD:$PWD/pytorch-caney" \
    --nv -B /lscratch,/explore,/panfs \
    "$1" \
    deepspeed \
    pytorch-caney/pytorch_caney/pipelines/pretraining/mim_deepspeed.py \
    --cfg "$2" \
    --dataset MODIS \
    --data-paths /explore/nobackup/projects/ilab/data/satvision/pretraining/training_* \
    --batch-size 32 \
    --output . \
    --enable-amp
```
## New file: SatVision examples README (+35)
# SatVision Examples

The following is an example of how to run SatVision finetuning. It is only an example; it does not preclude other decoder choices or other ways of handling the encoder.
## SatVision Finetune Land Cover Five Class

The script run_satvision_finetune_lc_fiveclass.sh shows how to finetune a five-class land cover model using a simple UNet architecture. The dependencies of this model are as follows:
- the finetune.py script (pytorch-caney/pytorch_caney/pipelines/finetuning/finetune.py), which contains the basics for training the finetuned model. An example invocation:
```bash
export PYTHONPATH=$PWD:pytorch-caney
export NGPUS=8

torchrun --nproc_per_node $NGPUS \
    pytorch-caney/pytorch_caney/pipelines/finetuning/finetune.py \
    --cfg finetune_satvision_base_landcover5class_192_window12_100ep.yaml \
    --pretrained /explore/nobackup/people/cssprad1/projects/satnet/code/development/masked_image_modeling/development/models/simmim_satnet_pretrain_pretrain/simmim_pretrain__satnet_swinv2_base__img192_window12__800ep_v3_no_norm/ckpt_epoch_800.pth \
    --dataset MODISLC9 \
    --data-paths /explore/nobackup/projects/ilab/data/satvision/finetuning/h18v04/labels_9classes_224 \
    --batch-size 4 \
    --output /explore/nobackup/people/cssprad1/projects/satnet/code/development/cleanup/finetune/models \
    --enable-amp
```
Note the following about these parameters:

- the pretrained model path is given by --pretrained
- the data path is given by --data-paths; it expects a directory with one subdirectory for images and one for labels, though this can be modified if inputs and targets are stored together (see the layout sketch after this list)
- the dataloader is selected with the --dataset option, which simply calls build_finetune_dataloaders from pytorch-caney
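As referenced above, here is a minimal sketch of the kind of layout --data-paths expects; the subdirectory names are assumptions for illustration, so check build_finetune_dataloaders in pytorch-caney for the exact names it requires:

```bash
# Hypothetical dataset layout (directory names are assumptions).
DATA_ROOT=/path/to/finetuning/dataset
mkdir -p "$DATA_ROOT/images" "$DATA_ROOT/labels"
# images/ holds the input tiles and labels/ the matching land cover
# masks; pairs are typically matched by file name.
```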
This is simply a guide on how to run a finetuning pipeline. If you want additional insight into building other types of decoders, the build_model function in pytorch_caney/models/build.py has details on how to combine the different encoders and decoders.