
Ruimao Zhang

Research Assistant Professor ( Computer Vision, Multimedia, Embodied AI )

School of Data Science, The Chinese University of Hong Kong, Shenzhen (CUHKSZ)

Google Scholar, Semantic Scholar, Twitter, Zhihu

News


The primary objective of our research team is to develop intelligent agents that can effectively collaborate with humans in dynamic environments. To realize this ambition, we focus on three core research directions. (1) Human-centered Visual Content Understanding and Reasoning: this area seeks to enable machines to actively perceive, analyze, and interpret human states, behaviors, and underlying motivations in dynamic scenarios. (2) Omni-modal Scene Perception and Navigation: this direction emphasizes harnessing diverse sensor modalities to comprehend and navigate complex scenes. (3) Machine Behavior Planning and Decision-making: this direction centers on equipping intelligent agents with the ability to make real-time decisions based on their understanding of the surrounding environment.

  • News! We have open positions for Ph.D., M.Phil., Research Assistant, and Visiting Student candidates and are looking for self-motivated talents. If you are interested in 3D Scene Understanding, Human-centric Visual Perception and Generation, Robot Manipulation, Multi-modal Learning, Neuro-Symbolic Computing, Reinforcement Learning, or Embodied Cognition, please drop me an email at ruimao.zhang@ieee.org or zhangruimao@cuhk.edu.cn. For more details about recruitment and the undergraduate research programme, please see here.

  • 2024-02-27: Five papers are accepted to CVPR2024. Congrats to all!

  • 2024-01-31: I will serve as an Associate Editor of ACM Trans. on Multimedia Computing, Communications and Applications.

  • 2024-01-30: One paper is accepted to ICRA2024. Congrats to Chaoqun and Yiran!

  • 2024-01-16: One paper is accepted to ICLR2024. Congrats to all!

  • 2023-12-09: One paper is accepted to AAAI2024. Congrats to all!

  • 2023-10-20: We present HumanTOMATO, a novel whole-body motion generation framework.

  • 2023-10-15: We present UniPose to detect keypoints of any articulated object for fine-grained vision understanding.

  • 2023-09-22: Two papers are accepted to NeurIPS2023. Congrats to all!

  • 2023-09-13: The first large-scale, real-world 3D pose estimation dataset, FreeMan, is released!

  • 2023-07-26: One paper is accepted to ACM MM2023. Congrats to Siyue, Bingliang and Fengyu!

  • 2023-07-14: Two papers are accepted to ICCV2023. Congrats to Jie, Chaoqun and Yiran!

  • 2023-05-25: One paper is early accepted to MICCAI2023. Congrats to all!

  • 2023-03-15: One paper is accepted to Pattern Recognition. Congrats to Qi Liu!

  • 2023-03-02: Two papers are accepted to MIDL2023 and one is selected for oral presentation. Congrats to Jie Yang and Ye Zhu!

  • 2023-02-28: One paper is accepted to CVPR2023. Congrats to Jie Yang!

  • 2023-02-27: One paper is accepted to T-NNLS. Congrats to Xiaozhe!

  • 2023-01-21: One paper is accepted to ICLR2023. Congrats to Jie Yang!

  • 2022-12-02: One paper is accepted to T-MM. Congrats to Ziyi!

  • 2022-09-17: Two papers are accepted to NeurIPS2022. Congrats to all!

  • 2022-07-05: Two papers are accepted to ECCV2022. Congrats to all!

  • 2022-05-05: One paper is early accepted to MICCAI2022. Congrats to Weijie!

  • 2022-05-01: We are hosting the MICCAI AMOS Segmentation Challenge 2022 in conjunction with MICCAI 2022.

  • 2021-11-07: One paper is accepted to T-IP. Congrats to Yuying!

  • 2021-10-15: I was selected to receive a NeurIPS 2021 Outstanding Reviewer Award.

  • 2021-07-23: Two papers are accepted to ICCV2021. Congrats to all!

  • 2021-06-12: Two papers are accepted to MICCAI2021. Congrats to all!

  • 2021-05-28: One paper is accepted to T-MM. Congrats to Zhaoyi!

  • 2021-05-05: A long version of our polar representation for object detection is accepted by T-PAMI. Congrats to all!

  • 2021-04-29: One paper is accepted to IJCAI2021. Congrats to Weibing and Yanxu!

  • 2021-03-01: One paper is accepted to CVPR2021. Congrats to Yuying!

  • 2021-02-18: I moved to CUHKSZ as a Research Assistant Professor.

  • 2020-12: One paper is accepted to AAAI2021.

  • 2020-08: We won the First Prize in 2020 AIM Challenge on Learned Image Signal Processing Pipeline, Track 2.

  • 2020-07: Two papers are accepted to ECCV2020 and MICCAI2020, respectively.

  • 2020-02: Two papers are accepted to CVPR2020.

  • 2019-08: A long version of SwitchNorm is accepted by T-PAMI. Two papers are accepted to ICCV2019.

  • 2019-05: I am organizing the second workshop on Fashion and Art.

    BIOGRAPHY


    "Weakness and ignorance are not barriers to survival, but arrogance is."

    ---《The Three-Body Problem》 Cixin Liu

    "Without human nature, we would lose much; without animal nature, we would lose everything."

    ---《The Three-Body Problem》 Cixin Liu

    Education

  • The Chinese University of Hong Kong, Hong Kong, China. May 2017 ~ Jul. 2019.
         Postdoctoral Fellow in Multimedia Lab, worked with Prof. Xiaogang Wang ( co-founder of SenseTime ) and Prof. Ping Luo.

  • Sun Yat-sen University, Guangzhou, China. Dec. 2016.
         Ph.D. in Computer Science and Technology, advised by Prof. Liang Lin ( IEEE/IAPR Fellow, Distinguished Young Scholar of NSFC ).

  • Sun Yat-sen University, Guangzhou, China. Jul. 2011.
         B.E. in Software Engineering.

    Experience

  • The Chinese University of Hong Kong, Shenzhen, China. Feb. 2021 ~ Present,
          Research Assistant Professor, School of Data Science.

  • Shenzhen Research Institute of Big Data, Shenzhen, China. Feb. 2021 ~ Present,
          Research Scientist

  • SenseTime Research, Shenzhen, China. Jul. 2019 ~ Jan. 2021,
          Senior Researcher, reporting to Prof. Jinwei Gu at SenseBrain, USA.

  • The Hong Kong Polytechnic University, Hong Kong, China. Aug. 2013 ~ Feb. 2014.
          Visiting Ph.D. Student, advised by Prof. Lei Zhang and Prof. Wangmeng Zuo.

  • Sun Yat-sen University, Guangzhou, China. Dec. 2010 ~ Jul. 2011.
          Research Assistant, advised by Prof. Liang Lin.

    Awards and Honours

  • Outstanding Reviewer Award of NeurIPS, 2021

  • AIM Challenge on Learned Image Signal Processing Pipeline, Track 2, First Prize, 2020

  • Best Paper Nomination Award of SenseTime Group Ltd., 2020

  • Google YouTube-8M Video Understanding Challenge, Gold Medal (Top 1.5%), 2017

  • National Scholarship for Postgraduates, 2015

  • National College IoT Innovation Competition, Third Prize, 2012

  • Excellent Student Scholarship of Sun Yat-sen University, 2008 ~ 2010

    Academic Activities

  • Academic Service:
          Associate Editor, ACM Transactions on Multimedia Computing, Communications and Applications (2024.01~present)
          Executive Area Chair, Vision And Learning SEminar (VALSE), China (2021.07~present)

  • Reviewer for Conferences:
          IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) --- 2019, 2020, 2021, 2022, 2023, 2024
          IEEE International Conference on Computer Vision (ICCV) --- 2019, 2021, 2023
          European Conference on Computer Vision (ECCV) --- 2022, 2024
          Neural Information Processing Systems (NeurIPS) --- 2020, 2021, 2022, 2023
          International Conference on Learning Representations (ICLR) --- 2021, 2022, 2024
          International Conference on Machine Learning (ICML) --- 2022, 2023
          AAAI Conference on Artificial Intelligence (AAAI) --- 2021
          International Conference on Multimedia and Expo (ICME) --- 2014, 2016

  • Reviewer for Journals:
          IEEE Trans. on Pattern Analysis and Machine Intelligence (T-PAMI)
          International Journal of Computer Vision (IJCV)
          IEEE Trans. on Neural Networks and Learning Systems (T-NNLS)
          IEEE Trans. on Image Processing (T-IP)
          IEEE Trans. on Circuits and Systems for Video Technology (T-CSVT)
          IEEE Trans. on Multimedia (T-MM)
          IEEE Trans. on Dependable and Secure Computing (T-DSC)
          IEEE Trans. on Information Forensics and Security (T-IFS)
          ACM Transactions on Multimedia Computing, Communications and Applications (ACM TOMM)
          Expert Systems with Applications
          Pattern Recognition
          Neurocomputing
          Medical Image Analysis (MIA)
          Applied Soft Computing

  • Workshop and Challenge Organizer:
          "Vision and Learning in Embodied Intelligence" workshop at VALSE, 2024, Chongqing, China
          "Autonomous Driving Based on Large-scale Models" workshop at VALSE, 2023, Wuxi, China
          "Abdominal Multi-Organ Segmentation Challenge" challenge at MICCAI, 2022, Singapore
          "Deep Learning for Medical Big Data Analysis" workshop at VALSE, 2022, Tianjin, China
          "Deep Model Architecture" workshop at VALSE, 2021, Hangzhou, China
          "Computer Vision for Fashion, Art, and Design" workshop at CVPR, 2020, Virtual
          "Computer Vision for Fashion, Art, and Design" workshop at ICCV 2019, Seoul, Korea

    PUBLICATION


    Preprint

    (* indicates corresponding author)


      MineDreamer: Learning to Follow Instructions via Chain-of-Imagination for Simulated-World Control
      Enshen Zhou, Yiran Qin, Zhenfei Yin, Yuzhou Huang, Ruimao Zhang*, Lu Sheng*, Yu Qiao, Jing Shao
      arXiv preprint arXiv:2403.12037 (2024) 【PDF】
      ( We employ a Chain-of-Imagination (CoI) mechanism to envision the step-by-step process of executing instructions and translate imaginations into more precise visual prompts tailored to the current state. )



      UniPose: Detecting Any Keypoints
      Jie Yang, Ailing Zeng*, Ruimao Zhang*, Lei Zhang
      arXiv preprint arXiv:2310.08530 (2023) 【PDF】
      ( UniPose is proposed to detect keypoints of any articulated, rigid, and soft objects via visual or textual prompts for fine-grained vision understanding and manipulation. )



      HumanTOMATO: Text-aligned Whole-body Motion Generation
      Shunlin Lu, Ling-Hao Chen, Ailing Zeng, Jing Lin, Ruimao Zhang*, Lei Zhang, Heung-Yeung Shum*
      arXiv preprint arXiv:2310.12978 (2023) 【PDF】
      ( A novel text-aligned whole-body motion generation framework that can generate high-quality, diverse, and coherent facial expressions, hand gestures, and body motions simultaneously. )


    Newly Accepted Articles

    (* indicates corresponding author)

    1. Jie Yang, Bingliang Li, Ailing Zeng, Lei Zhang, Ruimao Zhang*, "Open-World Human-Object Interaction Detection via Multi-modal Prompts", Proc. of IEEE International Conference on Computer Vision and Pattern Recognition ( CVPR ), 2024 ( A powerful Multi-modal Prompt-based HOI detector designed to leverage both textual descriptions for open-set generalization and visual exemplars for handling high ambiguity in descriptions, realizing HOI detection in the open world. ) 【PDF】

    2. Jiong Wang, Fengyu Yang, Wenbo Gou, Bingliang Li, Danqi Yan, Ailing Zeng, Yijun Gao, Junle Wang, Ruimao Zhang*, "FreeMan: Towards Benchmarking 3D Human Pose Estimation under Real-World Conditions", Proc. of IEEE International Conference on Computer Vision and Pattern Recognition ( CVPR ), 2024 ( FreeMan is a newly proposed 3D Human Pose Estimation benchmark captured by synchronizing 8 smartphones across diverse scenarios. It comprises 11M frames from 8000 sequences, viewed from different perspectives. ) 【PDF】【Project】

    3. Bohao Li, Yuying Ge, Yixiao Ge*, Guangzhi Wang, Rui Wang, Ruimao Zhang*, Ying Shan, "SEED-Bench-2: Benchmarking Multimodal Large Language Models", Proc. of IEEE International Conference on Computer Vision and Pattern Recognition ( CVPR ), 2024 ( SEED-Bench-2 is launched, with 24K MCQs and 27 evaluation dimensions! Not limited to image/video QAs, it can now benchmark MLLMs on reasoning over interleaved image-text data! ) 【PDF】【Hugging Face】

    4. Yiran Qin, Enshen Zhou, Qichang Liu, Zhenfei Yin, Lu Sheng*, Ruimao Zhang*, Yu Qiao, Jing Shao, "MP5: A Multi-modal Open-ended Embodied System in Minecraft via Active Perception", Proc. of IEEE International Conference on Computer Vision and Pattern Recognition ( CVPR ), 2024 ( MP5 is an open-ended multimodal embodied system that can conduct situation-aware planning and perform embodied action control via an active perception scheme. ) 【PDF】【Project】【Youtube】【Bilibili】

    5. Yuzhou Huang, Liangbin Xie, Xintao Wang*, Ziyang Yuan, Xiaodong Cun, Yixiao Ge, Jiantao Zhou, Chao Dong, Rui Huang, Ruimao Zhang*, Ying Shan, "SmartEdit: Exploring Complex Instruction-based Image Editing with Multimodal Large Language Models", Proc. of IEEE International Conference on Computer Vision and Pattern Recognition ( CVPR ), 2024 ( A novel framework exploring complex instruction reasoning with multi-modal large language models for smart image editing. ) 【PDF】【Project】

    6. Chaoqun Wang, Yiran Qin, Zijian Kang, Ningning Ma, Ruimao Zhang*, "Toward Accurate Camera-based 3D Object Detection via Cascade Depth Estimation and Calibration", Proc. of IEEE International Conference on Robotics and Automation ( ICRA ), 2024 ( A novel cascade framework consisting of two depth-aware learning paradigms, termed Depth Estimation and Depth Calibration. ) 【PDF】

    Recent Selected Publications ( See Full List )

    (* indicates corresponding author)


      Open-World Human-Object Interaction Detection via Multi-modal Prompts
      Jie Yang, Bingliang Li, Ailing Zeng, Lei Zhang, Ruimao Zhang*
      Proc. of IEEE International Conference on Computer Vision and Pattern Recognition ( CVPR ), 2024 【PDF】
      ( A novel prompt-based HOI detector designed to leverage both textual descriptions for open-set generalization and visual exemplars for handling high ambiguity in descriptions. )




      MP5: A Multi-modal Open-ended Embodied System in Minecraft via Active Perception
      Yiran Qin, Enshen Zhou, Qichang Liu, Zhenfei Yin, Lu Sheng*, Ruimao Zhang*, Yu Qiao, Jing Shao
      Proc. of IEEE International Conference on Computer Vision and Pattern Recognition ( CVPR ), 2024 【PDF】【Project】【Youtube】【Bilibili】
      ( MP5 is an open-ended multimodal embodied system that can conduct situation-aware planning and perform embodied action control via an active perception scheme. )




      FreeMan: Towards Benchmarking 3D Human Pose Estimation under Real-World Conditions
      Jiong Wang, Fengyu Yang, Wenbo Gou, Bingliang Li, Danqi Yan, Ailing Zeng, Yijun Gao, Junle Wang, Ruimao Zhang*
      Proc. of IEEE International Conference on Computer Vision and Pattern Recognition ( CVPR ), 2024 【PDF】【Project】




      Toward Accurate Camera-based 3D Object Detection via Cascade Depth Estimation and Calibration
      Chaoqun Wang, Yiran Qin, Zijian Kang, Ningning Ma, Ruimao Zhang*
      Proc. of IEEE International Conference on Robotics and Automation ( ICRA ), 2024
      【PDF】




      Enhancing Human-AI Collaboration Through Logic-Guided Reasoning
      Chengzhi Cao, Yinghao Fu, Sheng Xu, Ruimao Zhang, Shuang Li
      Proc. of International Conference on Learning Representations ( ICLR ), 2024
      【PDF】




      Neural Interactive Keypoint Detection
      Jie Yang, Ailing Zeng*, Feng Li, Shilong Liu, Ruimao Zhang*, Lei Zhang
      Proc. of IEEE International Conference on Computer Vision ( ICCV ), 2023
      【PDF】【Code】




      SupFusion: Supervised LiDAR-Camera Fusion for 3D Object Detection
      Yiran Qin, Chaoqun Wang, Zijian Kang, Ningning Ma, Zhen Li, Ruimao Zhang*
      Proc. of IEEE International Conference on Computer Vision ( ICCV ), 2023
      【PDF】【Code】




      Semantic Human Parsing via Scalable Semantic Transfer over Multiple Label Domains
      Jie Yang, Chaoqun Wang, Zhen Li, Junle Wang, Ruimao Zhang*
      Proc. of IEEE International Conference on Computer Vision and Pattern Recognition ( CVPR ), 2023 【PDF】【Code】




      Inherent Consistent Learning for Accurate Semi-supervised Medical Image Segmentation
      Ye Zhu, Jie Yang, Siqi Liu, Ruimao Zhang*
      Proc. of Conference on Medical Imaging with Deep Learning ( MIDL ), 2023 ( Oral )
      【PDF】【Code】




      Explicit Box Detection Unifies End-to-End Multi-Person Pose Estimation
      Jie Yang, Ailing Zeng*, Shilong Liu, Feng Li, Ruimao Zhang*, Lei Zhang
      Proc. of International Conference on Learning Representations ( ICLR ), 2023
      【PDF】【Code】




      AMOS: A Large-Scale Abdominal Multi-Organ Benchmark for Versatile Medical Image Segmentation
      Yuanfeng Ji, Haotian Bai, Jie Yang, Chongjian Ge, Ye Zhu, Ruimao Zhang*, et al.
      Proc. of Conference on Neural Information Processing Systems ( NeurIPS ), 2022 ( Oral )
      【PDF】【AMOS Challenge】




      Weakly Supervised Object Localization via Transformer with Implicit Spatial Calibration
      Haotian Bai, Ruimao Zhang*, Jiong Wang, Xiang Wan
      Proc. of European Conference on Computer Vision ( ECCV ), 2022
      【PDF】【Code】【Youtube】




      Switchable Normalization for Learning-to-Normalize Deep Representation
      Ping Luo, Ruimao Zhang*, Jiamin Ren, Zhanglin Peng, Jingyu Li
      IEEE Transactions on Pattern Analysis and Machine Intelligence ( T-PAMI ), 43(2):712-728, 2021
      【PDF】【Code】




      Exemplar Normalization for Learning Deep Representation
      Ruimao Zhang, Zhanglin Peng, Lingyun Wu, Zhen Li, Ping Luo
      Proc. of IEEE International Conference on Computer Vision and Pattern Recognition ( CVPR ), 2020 【PDF】【Supp】




      Hierarchical Scene Parsing by Weakly Supervised Learning with Image Descriptions
      Ruimao Zhang, Liang Lin, Guangrun Wang, Meng Wang, Wangmeng Zuo
      IEEE Transactions on Pattern Analysis and Machine Intelligence ( T-PAMI ), 41(3):596-610, 2019
      【PDF】




      SCAN: Self-and-Collaborative Attention Network for Video Person Re-identification
      Ruimao Zhang, Jingyu Li, Hongbin Sun, Yuying Ge, Ping Luo, Xiaogang Wang, Liang Lin
      IEEE Transactions on Image Processing ( T-IP ), 28(10):4870-4882, 2019
      【PDF】【Code】



    MEMBER


    Ph.D. Students

    Chaoqun Wang

    Ph.D., since 2021, co-supervised with Prof. Tianwei Yu

    Scene Understanding, Video Analysis, Multimodal Learning

    M.S.: Nanjing Univ. of Sci. & Tech.

    B.S.: Huazhong Univ. of Sci. & Tech.

    Yiran Qin

    Ph.D., since 2021, co-supervised with Prof. Zhen Li

    Scene Understanding, Embodied AI, Large Visual Language Model

    M.S.: not applicable

    B.E.: Shandong University (Top 10%)

    Jie Yang

    Ph.D., since 2021, co-supervised with Prof. Zhen Li

    Human Centric Visual Perception and Generation

    M.S.: not applicable

    B.E.: Harbin Engineering Univ. (Top 1%)

    Yuzhou Huang

    Ph.D., since 2022, co-supervised with Prof. Rui Huang

    Multi-modal Large-scale Language Model

    M.S.: not applicable

    B.E.: Univ. of Elec. Sci. and Tech. of China

    Shunlin Lu

    Ph.D., since 2023, co-supervised with Prof. Benyou Wang

    Multi-modal Learning, Human Centric Understanding

    M.S.: University of Southern California

    B.E.: Wuhan University of Technology

    Bohao Li (Merit Scholarship)

    Ph.D., since 2023, co-supervised with Prof. Shuang Li

    Multi-modal Learning, Visual Understanding and Reasoning

    M.S.: Univ. of Chinese Academy of Sci.

    B.E.: Wuhan University (Top 5%)

    MPhil Students

    Bingliang Li

    Master, since 2022, School of Data Science

    Scene Understanding, Referring Image Segmentation

    Fengyu Yang

    Master, since 2022, School of Data Science

    Human Centric 3D Perception, Synthesis and Animation

    Alumni

  • Jiong Wang ( B.S., The Chinese University of Hong Kong ), MPhil Student, Sept. 2021 ~ Jun. 2023, Human-centric Visual Analysis
          Current Position: Ph.D. student, Fudan University, Shanghai, China.

  • Jiayu Chang ( B.S., Central South University ), Research Assistant, Jun. 2023 ~ Dec. 2023, Medical Image Analysis
          Current Position: Master student, Stanford University, CA, U.S.

  • Xixuan Hao ( M.S., The University of Hong Kong ), Research Assistant, Dec. 2022 ~ Jul. 2023, Medical Image Analysis
          Current Position: Ph.D. student, The Hong Kong University of Science and Technology, Guangzhou (HKUST-GZ), China.

  • Siyue Yao ( M.S., King’s College London ), Research Assistant, Jul. 2022 ~ Apr. 2023, Human Centric Visual Generation
          Current Position: Ph.D. student, Xi'an Jiaotong-Liverpool University (XJTLU), China.

  • Ye Zhu ( B.S., South China Agricultural University ), Research Assistant, Oct. 2021 ~ Jun. 2023, Medical Image Analysis, Transformer
          Current Position: Ph.D. student, Hong Kong Baptist University (HKBU), Hong Kong, China.

  • Ziyi Tang ( M.S., University of Southampton ), Research Assistant, Jul. 2021 ~ Jul. 2022, Cross-modal Learning, Transformer
          Current Position: Ph.D. student, Sun Yat-sen University (SYSU), China.

  • Haotian Bai ( B.E., Shanghai University ), Research Assistant, Jul. 2021 ~ Apr. 2022, Transformer Architecture
          Current Position: Ph.D. student, The Hong Kong University of Science and Technology, Guangzhou (HKUST-GZ), China.

  • Hao Zhang ( M.S., University of Southern California ), Research Assistant, Jul. 2021 ~ Feb. 2022, Large-scale Pretraining
          Current Position: Ph.D. student, University of Illinois Urbana-Champaign (UIUC), U.S.

  • Huijie Wang ( M.S., Technische Universität München ), Visiting Student, Jul. 2020 ~ Feb. 2021, Medical Image Analysis
          Current Position: Researcher, Shanghai Artificial Intelligence Laboratory, China.

    TEACHING


  • DDA4220: Deep Learning and Applications. Spring 2023
          Instructor, The Chinese University of Hong Kong, Shenzhen.

  • MDS5102: Python Programming. Fall 2021
          Instructor, The Chinese University of Hong Kong, Shenzhen.

  • CSC1001: Introduction to Programming: Python. Fall 2021
          Instructor, The Chinese University of Hong Kong, Shenzhen.

  • CSC1001: Introduction to Programming: Python. Spring 2021
          Instructor, The Chinese University of Hong Kong, Shenzhen.

  • Computer Vision. Fall 2013
          Taught by Prof. Liang Lin, @Sun Yat-sen University.
          Teaching Assistant

  • Linear Algebra. Fall 2012
          Taught by Prof. Weishi Zheng, @Sun Yat-sen University.
          2+2 International Undergraduate Program, all in English.
          Teaching Assistant, Sun Yat-sen University.

  • Data Structure. Fall 2011
          Taught by Prof. Liang Lin, @Sun Yat-sen University.
          Teaching Assistant

  • Modern Computer Vision. Summer 2011
          Taught by Prof. Alan L. Yuille from UCLA, @Sun Yat-sen University.
          Summer Intensive Course, all in English.
          Teaching Assistant

    CONTACT ME


    Address: Room 517, Daoyuan Building, The Chinese University of Hong Kong, Shenzhen

    E-mail: ruimao.zhang@ieee.org or zhangruimao@cuhk.edu.cn

    Phone: (0755)23517042