🎗️ Figuring out LLMs for Vision

Highlights

  • Pro

Organizations

@mdgspace @vlgiitr @SHI-Labs


Pinned

  1. SHI-Labs/OneFormer (Public)

    [CVPR 2023] OneFormer: One Transformer to Rule Universal Image Segmentation

    Jupyter Notebook · 1.5k stars · 135 forks

  2. SHI-Labs/OLA-VLM (Public)

    OLA-VLM: Elevating Perception in Multimodal LLMs with Auxiliary Embedding Distillation, arXiv 2024

    Python · 38 stars · 2 forks

  3. SHI-Labs/VCoder (Public)

    [CVPR 2024] VCoder: Versatile Vision Encoders for Multimodal Large Language Models

    Python · 268 stars · 15 forks

  4. Picsart-AI-Research/SeMask-Segmentation (Public)

    [NIVT Workshop @ ICCV 2023] SeMask: Semantically Masked Transformers for Semantic Segmentation

    Python · 252 stars · 37 forks

  5. SHI-Labs/FcF-Inpainting (Public)

    [WACV 2023] Keys to Better Image Inpainting: Structure and Texture Go Hand in Hand

    Jupyter Notebook · 178 stars · 13 forks