The Segment Anything Model (SAM) has revolutionized computer vision, and fine-tuning SAM can solve a large number of basic vision tasks. We are designing a class-aware, one-stage tool for training fine-tuned models based on SAM.
Supply the dataset for your task together with a supported task name, and this tool will help you get a fine-tuned model for that task. You can also design your own extended SAM model; FA provides the training, testing, and deployment process for you.
Finetune-Anything further encapsulates the three parts of the original SAM, i.e., the Image Encoder Adapter, Prompt Encoder Adapter, and Mask Decoder Adapter. We will support a base extend-SAM model for each task. Users can also design their own customized modules in each adapter, use FA to design different adapters, and set whether the parameters of any module are fixed. For modules with unfixed parameters, hyperparameters such as the learning rate (lr) and weight decay can be set to coordinate with the fine-tuning of the model.
Check the details in How_to_use.
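As a rough illustration of the fix-or-train idea above, the sketch below freezes some modules of an adapter and gives the remaining ones their own lr and weight decay via optimizer parameter groups. The class and helper names here are hypothetical stand-ins, not FA's actual API; see How_to_use for the real configuration.

```python
import torch
import torch.nn as nn

class ToyDecoderAdapter(nn.Module):
    """Hypothetical adapter with two child modules (stand-ins for a neck and a head)."""
    def __init__(self):
        super().__init__()
        self.neck = nn.Linear(8, 8)
        self.head = nn.Linear(8, 2)

def build_param_groups(adapter, fixed=("neck",), lr=1e-4, weight_decay=1e-2):
    """Freeze every module named in `fixed`; the rest get their own lr/weight decay."""
    groups = []
    for name, module in adapter.named_children():
        if name in fixed:
            for p in module.parameters():
                p.requires_grad = False  # fixed: excluded from the optimizer
        else:
            groups.append({"params": module.parameters(),
                           "lr": lr, "weight_decay": weight_decay})
    return groups

adapter = ToyDecoderAdapter()
optimizer = torch.optim.AdamW(build_param_groups(adapter))
```

Only the unfixed modules end up in the optimizer, so their lr and weight decay can be tuned independently of the frozen SAM weights.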
For example, MaskDecoder is encapsulated as MaskDecoderAdapter. The current MaskDecoderAdapter contains two parts, DecoderNeck and DecoderHead.
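The neck/head split can be sketched as two small modules wrapped by one adapter: a neck that transforms encoder features and a class-aware head that produces per-class mask logits. The shapes and layers below are illustrative only, not SAM's real decoder.

```python
import torch
import torch.nn as nn

class DecoderNeck(nn.Module):
    """Projects encoder features into the decoder's working dimension (illustrative)."""
    def __init__(self, in_dim=256, hid=128):
        super().__init__()
        self.proj = nn.Conv2d(in_dim, hid, kernel_size=1)
    def forward(self, feats):
        return self.proj(feats)

class DecoderHead(nn.Module):
    """Maps decoder features to one logit map per class (illustrative)."""
    def __init__(self, hid=128, num_classes=21):
        super().__init__()
        self.classifier = nn.Conv2d(hid, num_classes, kernel_size=1)
    def forward(self, x):
        return self.classifier(x)

class MaskDecoderAdapter(nn.Module):
    """Wraps neck + head, mirroring the adapter split described above."""
    def __init__(self, num_classes=21):
        super().__init__()
        self.neck = DecoderNeck()
        self.head = DecoderHead(num_classes=num_classes)
    def forward(self, feats):
        return self.head(self.neck(feats))

adapter = MaskDecoderAdapter()
logits = adapter(torch.randn(1, 256, 64, 64))
```

Splitting the decoder this way lets the neck stay frozen (reusing SAM's pretrained features) while only the task-specific head is trained, or vice versa.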
- Semantic Segmentation
  - train
  - eval
  - test
- Matting
- Instance Segmentation
- Detection
- TorchVOCSegmentation
- BaseSemantic
- BaseInstance
- BaseMatting
- Onnx export
FA will be updated in the following order:
- Matting (task)
- Prompt Part (structure)
- MobileSAM (model)
- Instance Segmentation (task)
finetune-anything (FA) supports the entire training process for SAM fine-tuning, including modification of the model structure as well as model training, validation, and testing. For details, check How_to_use; the Quick Start gives an example of quickly using FA to train a custom semantic segmentation model.
- Step1 Clone the repository and install the dependencies

```shell
git clone https://github.com/ziqi-jin/finetune-anything.git
cd finetune-anything
pip install -r requirements.txt
```

- Step2 Download the SAM weights from the SAM repository
- Step3 Modify the contents of the yaml file for the specific task in /config, e.g., ckpt_path, model_type ...
- Step4 Start training

```shell
CUDA_VISIBLE_DEVICES=${your GPU number} python train.py --task_name semantic_seg
```
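The yaml edit in Step3 might look like the fragment below. The key names here are illustrative guesses; the authoritative keys are in the yaml files shipped under /config.

```yaml
# Illustrative fragment only -- check the real yaml under /config for exact keys.
model:
  model_type: vit_b        # must match the SAM backbone of the downloaded weights
  ckpt_path: ./sam_vit_b.pth   # path to the SAM checkpoint from Step2
```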
If you need to use loss, dataset, or other functions that are not supported by FA, please submit an issue, and I will help you to implement them. At the same time, developers are also welcome to develop new loss, dataset or other new functions for FA, please submit your PR (pull requests).