8 PAPERS ACCEPTED TO CVPR 2023

Yan Bin's paper: Universal Instance Perception as Object Discovery and Retrieval, [Code](https://github.com/MasterBin-IIAU/UNINEXT)

[PDF](https://arxiv.org/abs/2303.06674)

Zhu Jia-Wen's paper: Visual Prompt Multi-Modal Tracking, [Code](https://github.com/jiawen-zhu/ViPT)

[PDF](https://arxiv.org/abs/2303.10826)

Chen Xin's paper: SeqTrack: Sequence to Sequence Learning for Visual Object Tracking, [Code](https://github.com/microsoft/VideoX)

[PDF](https://openaccess.thecvf.com/content/CVPR2023/papers/Chen_SeqTrack_Sequence_to_Sequence_Learning_for_Visual_Object_Tracking_CVPR_2023_paper.pdf)

Chen Jian-Chuan's paper: GM-NeRF: Learning Generalizable Model-Based Neural Radiance Fields From Multi-View Images, [Code](https://janaldochen.github.io/GM-NeRF)

[PDF](https://openaccess.thecvf.com/content/CVPR2023/html/Chen_GM-NeRF_Learning_Generalizable_Model-Based_Neural_Radiance_Fields_From_Multi-View_Images_CVPR_2023_paper.html)

Wang Yingwei's paper: Compression-Aware Video Super-Resolution, [PDF](https://openaccess.thecvf.com/content/CVPR2023/papers/Wang_Compression-Aware_Video_Super-Resolution_CVPR_2023_paper.pdf)

Zhao Hao-Jie's papers: ARKitTrack: A New Diverse Dataset for Tracking Using Mobile RGB-D Data, and Representation Learning for Visual Object Tracking by Masked Appearance Transfer, [Code](https://github.com/difhnp/MAT)

[PDF](https://github.com/difhnp/MAT/blob/main/misc/CVPR_23_MAT_Final.pdf)

Prof. Zhao Wen-Da's paper: MetaFusion: Infrared and Visible Image Fusion via Meta-Feature Embedding From Object Detection, [PDF](https://openaccess.thecvf.com/content/CVPR2023/html/Zhao_MetaFusion_Infrared_and_Visible_Image_Fusion_via_Meta-Feature_Embedding_From_CVPR_2023_paper.html)

Congratulations to the students and faculty members listed above!