
Hongyang Li

Ph.D. Student @ South China University of Technology
Research Intern @ International Digital Economy Academy (IDEA)
Guangdong-Hong Kong-Macau Greater Bay Area
Shenzhen, China

Email: ftwangyeunglei AT mail dot scut dot edu dot cn

Google Scholar | LinkedIn | GitHub

Hi! This is Hongyang Li, 李弘洋 in Chinese. I’m a second-year Ph.D. student (2022-now) at the Department of Future Technology, South China University of Technology, supervised by Prof. Lei Zhang. I am a research intern at the International Digital Economy Academy (IDEA). Previously, I obtained my bachelor’s degree from the School of Electrical Engineering at South China University of Technology in 2021.

🔖 My research interests lie in Tracking Any Point, 3D Perception, and Multi-modal Models.

💬 Feel free to contact me for any discussion and collaboration.


News

  • [2024/12] TAPTRv3 is released! Check out our TAPTRv3 for more details.

  • [2024/9] TAPTRv2 is accepted by NeurIPS 2024.

  • [2024/7] TAPTRv2 is released! Check out our TAPTRv2 for more details.

  • [2024/7] Two papers are accepted by ECCV 2024! Check out our TAPTR and LLaVA-Grounding for more details.

  • [2024/3] We release TAPTR. Check out the project page for more details and online demos.

  • [2023/12] We release LLaVA-Grounding. Demo and inference code are available.

  • [2023/7] Two papers are accepted by ICCV 2023. Check out our DFA3D and StableDINO.

  • [2023/2] We release DA-BEV, which establishes new SOTA performance on the nuScenes 3D detection leaderboard (camera track). Check out our DA-BEV.

  • [2022/7] A paper is accepted by ECCV 2022! Check out our DCL-Net.

  • [2021/9] A paper is accepted by NeurIPS 2021! Check out our Sparse Steerable Convolution.


Selected Publications

    1. TAPTRv3: Spatial and Temporal Context Foster Robust Tracking of Any Point in Long Video
      Jinyuan Qu*, Hongyang Li*, Shilong Liu, Tianhe Ren, Zhaoyang Zeng, Lei Zhang
      arXiv, 2024
      @article{Qu2024taptrv3,
        title={{TAPTRv3: Spatial and Temporal Context Foster Robust Tracking of Any Point in Long Video}},
        author={Qu, Jinyuan and Li, Hongyang and Liu, Shilong and Zeng, Zhaoyang and Ren, Tianhe and Zhang, Lei},
        journal={arXiv preprint},
        year={2024}
      }
    2. TAPTRv2: Attention-based Position Update Improves Tracking Any Point
      Hongyang Li, Hao Zhang, Shilong Liu, Zhaoyang Zeng, Feng Li, Tianhe Ren, Bohan Li, Lei Zhang
      NeurIPS, 2024
      @article{li2024taptrv2,
        title={{TAPTRv2: Attention-based Position Update Improves Tracking Any Point}},
        author={Li, Hongyang and Zhang, Hao and Liu, Shilong and Zeng, Zhaoyang and Li, Feng and Ren, Tianhe and Li, Bohan and Zhang, Lei},
        journal={arXiv preprint arXiv:2407.16291},
        year={2024}
      }
        
    3. TAPTR: Tracking Any Point with Transformers as Detection
      Hongyang Li, Hao Zhang, Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Lei Zhang
      ECCV, 2024
      @article{li2024taptr,
        title={{TAPTR: Tracking Any Point with Transformers as Detection}},
        author={Li, Hongyang and Zhang, Hao and Liu, Shilong and Zeng, Zhaoyang and Ren, Tianhe and Li, Feng and Zhang, Lei},
        journal={arXiv preprint arXiv:2403.13042},
        year={2024}
      }
      
    4. LLaVA-Grounding: Grounded Visual Chat with Large Multimodal Models
      Hao Zhang*, Hongyang Li*, Feng Li, Tianhe Ren, Xueyan Zou, Shilong Liu, Shijia Huang, Jianfeng Gao, Lei Zhang, Chunyuan Li, Jianwei Yang
      ECCV, 2024
      @article{zhang2023llava,
        title={{LLaVA-Grounding: Grounded Visual Chat with Large Multimodal Models}},
        author={Zhang, Hao and Li, Hongyang and Li, Feng and Ren, Tianhe and Zou, Xueyan and Liu, Shilong and Huang, Shijia and Gao, Jianfeng and Zhang, Lei and Li, Chunyuan and others},
        journal={arXiv preprint arXiv:2312.02949},
        year={2023}
      }
      
    5. DFA3D: 3D Deformable Attention For 2D-to-3D Feature Lifting
      Hongyang Li*, Hao Zhang*, Zhaoyang Zeng, Shilong Liu, Feng Li, Tianhe Ren, Lei Zhang
      ICCV, 2023
      @inproceedings{li2023dfa3d,
        title={{DFA3D: 3D Deformable Attention For 2D-to-3D Feature Lifting}},
        author={Li, Hongyang and Zhang, Hao and Zeng, Zhaoyang and Liu, Shilong and Li, Feng and Ren, Tianhe and Zhang, Lei},
        booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
        pages={6684--6693},
        year={2023}
      }
      
    6. DA-BEV: Depth Aware BEV Transformer for 3D Object Detection
      Hao Zhang*, Hongyang Li*, Zhaoyang Zeng, Shilong Liu, Feng Li, Tianhe Ren, Lei Zhang
      arXiv, 2023
      @article{zhang2023bev,
        title={{DA-BEV: Depth Aware BEV Transformer for 3D Object Detection}},
        author={Zhang, Hao and Li, Hongyang and Liao, Xingyu and Li, Feng and Liu, Shilong and Ni, Lionel M and Zhang, Lei},
        journal={arXiv preprint},
        year={2023}
      }
      
    7. DCL-Net: Deep Correspondence Learning Network for 6D Pose Estimation
      Hongyang Li*, Jiehong Lin*, Kui Jia
      ECCV, 2022
      @inproceedings{li2022dcl,
        title={{DCL-Net: Deep Correspondence Learning Network for 6D Pose Estimation}},
        author={Li, Hongyang and Lin, Jiehong and Jia, Kui},
        booktitle={European Conference on Computer Vision},
        pages={369--385},
        year={2022},
        organization={Springer}
      }
      
    8. Sparse Steerable Convolutions: An Efficient Learning of SE(3)-Equivariant Features for Estimation and Tracking of Object Poses in 3D Space
      Jiehong Lin*, Hongyang Li*, Kui Jia
      NeurIPS, 2021
      @article{lin2021sparse,
        title={{Sparse Steerable Convolutions: An Efficient Learning of SE(3)-Equivariant Features for Estimation and Tracking of Object Poses in 3D Space}},
        author={Lin, Jiehong and Li, Hongyang and Chen, Ke and Lu, Jiangbo and Jia, Kui},
        journal={Advances in Neural Information Processing Systems},
        volume={34},
        pages={16779--16790},
        year={2021}
      }
      

Awards

  • Principal's Scholarship & National Scholarship in 2023.

  • Principal's Scholarship & National Scholarship in 2022.

  • First Prize Scholarship (twice) & Second Prize Scholarship during 2018-2022.