This project adapts SAM2 to incorporate functionality from comfyui_segment_anything. Many thanks to continue-revolution for their foundational work.
You can refer to this example workflow for a quick try.
Install the necessary Python dependencies with:
```
pip install -r requirements.txt
```
Models will be automatically downloaded when needed. Alternatively, you can download them manually as per the instructions below. If the download is slow, set the `HTTP_PROXY` and `HTTPS_PROXY` environment variables to use a proxy.
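If you prefer to set the proxy from Python (for example, at the top of a launcher script) instead of exporting it in the shell, a sketch looks like this; the proxy address is a placeholder you must replace with your own:

```python
import os

# Placeholder proxy address -- replace with your actual proxy endpoint.
proxy = "http://127.0.0.1:7890"

# Both variables must be set before any download starts, since HTTP
# client libraries read them at request time.
os.environ["HTTP_PROXY"] = proxy
os.environ["HTTPS_PROXY"] = proxy
```

The shell equivalent is `export HTTP_PROXY=... HTTPS_PROXY=...` in the terminal you launch ComfyUI from, so the process inherits both variables.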
Download the model from Hugging Face and place the files in the `models/bert-base-uncased` directory under the ComfyUI root.
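To confirm a manual download landed where the node looks for it, a small check like the one below can help. The directory name comes from this README; the file list is an assumption (the typical Hugging Face distribution of bert-base-uncased) and may differ from what your node version actually loads:

```python
from pathlib import Path

# File list is an assumption: the usual Hugging Face files for
# bert-base-uncased. Adjust it to match what you actually downloaded.
EXPECTED = ["config.json", "vocab.txt", "tokenizer_config.json"]

def missing_bert_files(comfyui_root: str) -> list[str]:
    """Return the expected bert-base-uncased files not yet present."""
    model_dir = Path(comfyui_root) / "models" / "bert-base-uncased"
    return [name for name in EXPECTED if not (model_dir / name).is_file()]
```

Running `print(missing_bert_files("."))` from the ComfyUI root lists whatever still needs to be downloaded.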
Download the models and config files to `models/grounding-dino` under the ComfyUI root directory. Do not modify the file names.
| Name | Size | Config File | Model File |
|---|---|---|---|
| GroundingDINO_SwinT_OGC | 694MB | download link | download link |
| GroundingDINO_SwinB | 938MB | download link | download link |
Download the model files to `models/sams` under the ComfyUI root directory. Do not modify the file names.
| Model | Size | Model File |
|---|---|---|
| sam2_hiera_tiny | 38.9MB | download link |
| sam2_hiera_small | 46MB | download link |
| sam2_hiera_base_plus | 80.8MB | download link |
| sam2_hiera_large | 224.4MB | download link |
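Once the steps above are done, all three model folders should exist under the ComfyUI root. A quick sanity check (directory names taken from the sections above) can be sketched as:

```python
from pathlib import Path

# Directory names come from the download sections of this README.
MODEL_DIRS = ["bert-base-uncased", "grounding-dino", "sams"]

def model_dirs_present(comfyui_root: str) -> dict[str, bool]:
    """Map each expected model directory to whether it exists."""
    models = Path(comfyui_root) / "models"
    return {sub: (models / sub).is_dir() for sub in MODEL_DIRS}
```

Calling `model_dirs_present(".")` from the ComfyUI root shows at a glance which download step, if any, is still outstanding.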
Thank you for considering contributions! Fork the repository, make your changes, and open a pull request for review and merging.
The fastest way to run Meta's SAM 2 (Segment Anything Model 2).
If you use SAM 2 or the SA-V dataset in your research, please cite the following:
```bibtex
@article{ravi2024sam2,
  title={SAM 2: Segment Anything in Images and Videos},
  author={Ravi, Nikhila and Gabeur, Valentin and Hu, Yuan-Ting and Hu, Ronghang and Ryali, Chaitanya and Ma, Tengyu and Khedr, Haitham and R{\"a}dle, Roman and Rolland, Chloe and Gustafson, Laura and Mintun, Eric and Pan, Junting and Alwala, Kalyan Vasudev and Carion, Nicolas and Wu, Chao-Yuan and Girshick, Ross and Doll{\'a}r, Piotr and Feichtenhofer, Christoph},
  journal={arXiv preprint},
  year={2024}
}
```