University of Electronic Science and Technology of China

YANLI JI(姬艳丽)

Associate Professor

Center for Future Media

School of Automation Engineering

University of Electronic Science and Technology of China

Email: yanliji@uestc.edu.cn

Office: B1-417, Main Building, 2006 Xiyuan Avenue,
Gaoxin West Zone, Chengdu, China.



SHORT INTRODUCTION

Yanli Ji, Ph.D., is currently an associate professor at the University of Electronic Science and Technology of China (UESTC). She obtained her Ph.D. degree from the Department of Advanced Information Technology, Kyushu University, Japan, in September 2012. Her research covers multiple directions in Human-Robot Interaction for social robots, e.g., human activity recognition, emotion analysis, hand gesture recognition, and eye gaze estimation.

Her research focuses on human action analysis and human-centered HRI. She has published papers in Signal Processing, Journal of Visual Communication and Image Representation, Multimedia Tools and Applications, IEEJ Transactions on Electrical and Electronic Engineering, ACCV, ICME, ICONIP, etc., and received the IEEE Region 10 WIE Best Paper Award. She has also applied for 14 domestic invention patents. She has directed and undertaken more than 10 research projects funded by the National Natural Science Foundation of China (NSFC) and by industry cooperation. Yanli Ji is active in professional services. She was the Registration Chair of ICME 2014, the Conference Secretary of VALSE 2015, and PC Chair of the ACM SIGAI CHINA symposium at the ACM Turing 50th Celebration Conference, 2017. She is a reviewer for the Journal of Human Computer Interaction, Journal of Visual Communication and Image Representation, Neurocomputing, etc. She is a member of IEEE, ACM, and CCF, a committee member of SIGAI, CCF, and CAAI, and a VALSE VOOC Committee Member.



RESEARCH INTERESTS

  • Human Action/Activity Analysis
  • Human-Robot Interaction: Hand gesture recognition, emotion analysis, and eye gaze estimation


    PROFESSIONAL EXPERIENCE

    Associate Professor Aug. 2016 – Present
    Center for Robotics, School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu, China.
    Lecturer Jan. 2013 – July 2016
    Center for Robotics, School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu, China.
    Visiting Researcher Nov. 2015 – Jan. 2016
    Lab of Image/Media Understanding, Department of Advanced Information Technology, Graduate School of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.
    Research Assistant Jan. 2012 – Aug. 2012
    Department of Advanced Information Technology, Graduate School of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan.


    EDUCATION

    Kyushu University, Fukuoka, Japan Oct. 2009 – Sep. 2012
    Ph.D., Department of Advanced Information Technology
    Thesis: Recognition of Human Actions using Visual Local Features
    Advisor: Prof. Rin-ichiro Taniguchi.
    Advisor Panel: Prof. Ryo Kurazume, Associate Prof. Hajime Nagahara.
    Chongqing University, Chongqing, China Sep. 2005 – July 2008
    Master, Image Processing, College of Communication Engineering
    Thesis: Image Compression based on Optical Wavelet Transform.
    Advisor: Prof. Fengchun Tian.
    Chongqing University, Chongqing, China Sep. 2001 – July 2005
    Bachelor, College of Communication Engineering.


    PROJECTS

    2017.01-2020.12 National Natural Science Foundation of China (NSFC), Multi-mode Learning on Moving-view Human Activity Recognition and Human-Robot Interaction for Service Robot Applications. Principal Investigator. (General Program, No. 61673088)

    2016.08-2017.08 Fundamental Research Funds for the Central Universities, Study on Vision-based Emotion Cognition and Interaction for Social Robots. Principal Investigator.

    2014.01-2016.12 National Natural Science Foundation of China for Young Scientists (NSFC), The Spatio-Temporal Co-Occurrence Model Based Human Interaction Recognition in Group Activity Understanding. Principal Investigator. (Young Scientists Fund, No. 61305043)

    2015.08-2016.08 Ricoh Software Research Center (SRCB), Video Analysis using Deep Learning. Co-investigator.

    2014.01-2015.12 Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry, STCO-Model-based Human Group Interaction Understanding. Principal Investigator. (Returned Overseas Scholars Fund)

    2014.01-2016.01 Fundamental Research Funds for the Central Universities, Semantic Understanding of Multi-person Interaction using Spatio-temporal Context Features. Principal Investigator.

    2012.10-2013.06 Huawei Technologies Co. Ltd, Research on an Eye-Gaze Estimation System under Free Head Motion. Principal Investigator.

    2012.09-2013.09 Huawei Technologies Co. Ltd, Hand-Gesture-Based HCI for a Conference System. Co-investigator.


    TEACHING

  • Digital Signal Processing, H0700330.02, Semester 2, 2013-2014.
  • Digital Signal Processing, H0700330.03, Semester 2, 2014-2015.
  • Pattern Recognition, 0718420.01, Semester 2, 2014-2015.
  • Digital Image Processing, H0720630.02, Semester 2, 2015-2016.
  • Signal and System, Semester 1, 2016-2017.


PROFESSIONAL SERVICES

  • Registration Chair, ICME (IEEE International Conference on Multimedia & Expo), Chengdu, China, July, 2014.
  • Conference secretary, Vision And Learning Seminar (VALSE), Chengdu, China, May, 2015.
  • Program Chair, ACM SIGAI CHINA symposium at the ACM Turing 50th Celebration Conference, 2017.
  • Reviewer for Journal of Human Computer Interaction, Journal of Visual Communication and Image Representation, Neurocomputing, etc.
  • Member of IEEE and ACM; committee member of SIGAI, CCF, and CAAI.
  • VALSE Online Program Committee Member, http://www.valseonline.org/


PUBLICATIONS

    1. Yanli Ji, Yang Yang, Xing Xu, Heng Tao Shen. One-shot Learning based Pattern Transition Map for Action Early Recognition. Signal Processing, 2017.

    2. Yanli Ji, Haoxin Li, Yang Yang, Shuying Li. Hierarchical topology based hand pose estimation from a single depth image. Multimedia Tools and Applications, 1-16, 2017, DOI: 10.1007/s11042-017-4651-8.

    3. Yanli Ji, Jiaming Li, Hong Cheng, Xing Xu, Jingkuan Song. Multi-cue Information Fusion for Two-layer Activity Recognition. ACCV workshop HIS, 2016.

    4. Yanli Ji, Hong Cheng, Yali Zheng, Haoxin Li. Learning contrastive feature distribution model for interaction recognition. Journal of Visual Communication and Image Representation, 33, pp. 340–349, 2015.

    5. Dekun Hu, Binghao Meng, Shengyi Fan, Hong Cheng, Lu Yang, Yanli Ji, Real-Time Understanding of Abnormal Crowd Behavior on Social Robots, 16th Pacific-Rim Conference on Multimedia (PCM), 2015

    6. Lu Yang, Hong Cheng, Jiasheng Hao, Yanli Ji, Yiqun Kuang, A Survey on Media Interaction in Social Robotics, 16th Pacific-Rim Conference on Multimedia (PCM), 2015. (Oral Paper)

    7. Yanli Ji, Guo Ye, Hong Cheng. Interactive body part contrast mining for human interaction recognition, Proc. in ICME Workshop Hot3D, 2014.

    8. Hong Cheng, Haoyang Zhuang, Yanli Ji, 3D medial axis distance for hand detection, Proc. in ICME Workshop Hot3D, 2014.

    9. Yanli Ji, Atsushi Shimada, Hajime Nagahara and Rin-ichiro Taniguchi. Contribution estimation of participants for human interaction recognition. IEEJ Transactions on Electrical and Electronic Engineering. 8(3), 2013.

    10. Yanli Ji, Yoshiyasu Ko, Atsushi Shimada, Hajime Nagahara and Rin-ichiro Taniguchi. Cooking Gesture Recognition using Local Feature and Depth Image. Proc. of ACM MM Workshop CEA 2012, Nov. 2012.

    11. Yanli Ji, Atsushi Shimada, Hajime Nagahara and Rin-ichiro Taniguchi. SOM-based Human Action Recognition Using Local Feature Descriptor CHOG3D. Research Reports on Information Science and Electrical Engineering of Kyushu University, 17(1), 2012.

    12. Bin Tong, Weifeng Jia, Yanli Ji, Einoshin Suzuki: Linear Semi-Supervised Dimensionality Reduction with Pairwise Constraint for Multiple Subclasses. IEICE TRANSACTIONS on Information and Systems, E95-D(3), pp.812-820, 2012.

    13. Yanli Ji, Atsushi Shimada, Hajime Nagahara and Rin-ichiro Taniguchi. Human-human interaction recognition by estimating the action contribution of participants. Proceedings of the 18th Korea-Japan Joint Workshop on Frontiers of Computer Vision, Kawasaki, Japan, Feb. 2012.

    14. Yanli Ji, Atsushi Shimada, Hajime Nagahara and Rin-ichiro Taniguchi. A Compact Descriptor CHOG3D and its Application in Human Action Recognition. IEEJ Transactions on Electrical and Electronic Engineering. 8(1), 2013.

    15. Xin Xu, Fengchun Tian, Yanli Ji, Jianwen Song. 4f Coherent Optic System Denoise Method Based on Fusion of Multiple Spatial Frequency Spectrum Images. Opto-Electronic Engineering, 2011(5) (Chinese).

    16. Yanli Ji, Atsushi Shimada, Rin-ichiro Taniguchi. Human Action Recognition by SOM considering the Probability of Spatio-temporal Features. Proceedings of the 17th International Conference on Neural Information Processing, pp. 391-398, Nov. 2010.

    17. Yanli Ji, Atsushi Shimada, Rin-ichiro Taniguchi. A Compact 3D Descriptor in ROI for Human Action Recognition. IEEE TENCON 2010, Nov. 2010. (WIE Best Paper Award)

    SELECTED PATENTS

    1. A Method for Estimating Hand Poses on Depth Images using Correction Processing (一种基于深度信息和校正方式的手部姿态估计方法), China: 201610321710.3 [P].

    2. A 3D Eye Gazing Estimation Method and Its Application in a Long-short Distance HRI (一种基于3D视线估计的远近距离人机交互系统与方法), China: 201610133124.6 [P].

    3. A Method for Global Hand Pose Detection in Depth Data (一种基于深度数据的手部全局姿态检测方法), China: 201610093720.6 [P].

    4. Multi-posed Fingertip Tracking and Its Application in Natural HRI (用于自然人机交互的多姿态指尖跟踪方法), China: 201610070474.2 [P].

    5. A Human Computer Interaction Method Based on Eye Gaze Tracking (基于眼动跟踪的人机交互方法), China: CN103677270A [P].

    6. A 3D Eye Gazing Estimation Method using Eye Landmarks (基于眼部关键点检测的3D视线方向估计方法), China: 201611018884.9 [P].

    7. STDW-based Continuous Hand Gesture Trajectory Recognition (一种基于STDW的连续字符手势轨迹识别方法), China: 201610688950.7 [P].

    8. Target-Model-based Continuous Hand Gesture Trajectory Segmentation Method and System (基于目标模型信息的连续手势轨迹分割方法及系统), China: CN201610442838.5 [P].


