Commit

update
Haotian-Zhang committed Oct 31, 2023
1 parent 8f25127 commit 593f92a
Showing 1 changed file with 4 additions and 2 deletions.
6 changes: 4 additions & 2 deletions content/newslist.dat
@@ -1,8 +1,10 @@
+**[10/2023]** One paper accepted to WACV 2024: UDA (Empowering Unsupervised Domain Adaptation with Large-scale Pre-trained Vision-Language Models).
+**[09/2023]** A summary of my recent papers: (1) a new multimodal LLM that can refer and ground anything anywhere at any granularity [Ferret](https://arxiv.org/abs/2310.07704); (2) using LLM and multimodal LLM for alt-text re-writing to improve CLIP training [veCLIP](https://arxiv.org/abs/2310.07699).
 **[10/2022]** Serving as session co-chair for the ECCV CVinW Workshop and being responsible for ODinW. Full schedule here: https://computer-vision-in-the-wild.github.io/eccv-2022/.
 **[10/2022]** Selected as one of the Young Scholar Award recipients for NeurIPS 2022.
-**[09/2022]** One paper accepted by NeurIPS 2022: [GLIPv2](https://arxiv.org/abs/2206.05836). A team effort to push [CVinW](https://computer-vision-in-the-wild.github.io/eccv-2022/)
+**[09/2022]** One paper accepted to NeurIPS 2022: [GLIPv2](https://arxiv.org/abs/2206.05836). A team effort to push [CVinW](https://computer-vision-in-the-wild.github.io/eccv-2022/)
 **[08/2022]** Updated GLIP [Hugging Face Gradio Demo](https://huggingface.co/spaces/haotiz/glip-zeroshot-demo)! Feel free to check it out!
 **[09/2022]** Organizing the ECCV Workshop [*Computer Vision in the Wild (CVinW)*](https://computer-vision-in-the-wild.github.io/eccv-2022/), where two challenges, [*Image Classification in the Wild (ICinW)*](https://eval.ai/web/challenges/challenge-page/1832/overview) and [*Object Detection in the Wild (ODinW)*](https://eval.ai/web/challenges/challenge-page/1839/overview), are hosted to evaluate the zero-shot, few-shot, and full-shot performance of pre-trained vision models.
-**[03/2022]** One paper accepted by CVPR 2022: [GLIP](https://arxiv.org/abs/2112.03857) as an Oral & Best Paper Finalist.
+**[03/2022]** One paper accepted to CVPR 2022: [GLIP](https://arxiv.org/abs/2112.03857) as an Oral & Best Paper Finalist.
 **[10/2021]** I am the 🏆winner of the Video Track (on both the MOTChallenge-STEP and KITTI-STEP datasets) in the <a href="https://motchallenge.net/workshops/bmtt2021/">6th BMTT Challenge</a> (in conjunction with ICCV 2021)!
 **[06/2020]** Our team is the winner of track 3 (multi-object tracking and segmentation on the KITTI-MOTS and MOTS20 datasets with public detection) and the runner-up of track 2 (multi-object detection, tracking, and segmentation on the KITTI-MOTS dataset) in the 5th BMTT Challenge at the CVPR 2020 workshop. <a href="https://ipl-uw.github.io/news/2020-06-11-cvpr20-bmtt.html">[Details...]</a>
