The ability of automated systems to correctly identify human actions offers considerable scope for human-machine interaction, and automatic 3D human action recognition has consequently seen significant research effort. In the work described here, everyday 3D human actions recorded in the NTU RGB+D dataset are identified using a novel structured-tree neural network. The nodes of the tree represent the skeleton joints, with the spine joint as the root. The connection from a child node to its parent is known as the incoming edge, while the reciprocal connection is known as the outgoing edge. The tree structure yields a system that maps intuitively to human movements. The classifier uses the change in displacement of joints and the change in the angles between incoming and outgoing edges as features for classifying the actions performed.
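The two feature types named above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the 5-joint parent map and all function names are hypothetical (the NTU RGB+D skeleton actually has 25 joints), and it assumes per-frame 3D joint coordinates are already available.

```python
import numpy as np

# Hypothetical miniature skeleton: joint index -> parent index.
# Joint 0 is the spine (root); the real NTU RGB+D skeleton has 25 joints.
PARENT = {0: None, 1: 0, 2: 1, 3: 0, 4: 3}

def edge_angle_change(prev, curr, joint, parent, child):
    """Change, between two frames, in the angle at `joint` between its
    incoming edge (parent -> joint) and outgoing edge (joint -> child)."""
    def angle(frame):
        incoming = frame[joint] - frame[parent]
        outgoing = frame[child] - frame[joint]
        cos = np.dot(incoming, outgoing) / (
            np.linalg.norm(incoming) * np.linalg.norm(outgoing))
        return np.arccos(np.clip(cos, -1.0, 1.0))
    return angle(curr) - angle(prev)

def frame_features(prev, curr):
    """Feature vector for one frame pair: per-joint displacement magnitudes
    followed by angle changes at every joint that has both a parent and a child."""
    displacement = np.linalg.norm(curr - prev, axis=1)   # one value per joint
    angles = [edge_angle_change(prev, curr, j, p, c)
              for j, p in PARENT.items() if p is not None
              for c, cp in PARENT.items() if cp == j]
    return np.concatenate([displacement, np.array(angles)])

# Usage: a rigid translation moves every joint but bends no edge pair,
# so the displacement features are nonzero and the angle features are ~0.
prev = np.array([[0, 0, 0], [0, 1, 0], [0, 2, 0], [1, 0, 0], [2, 0, 0]], float)
curr = prev + np.array([0.1, 0.0, 0.0])
feats = frame_features(prev, curr)   # 5 displacements + 2 angle changes
```

Sequences of such per-frame vectors would then be fed to the classifier; the novelty in the paper lies in the structured-tree network that consumes them, which this sketch does not attempt to reproduce.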
“50 years of object recognition: Directions forward.” Computer vision and image understanding 117, no. 8 (2013): 827-891. https://doi.org/10.1016/j.cviu.2013.04.005
Shafaei, Alireza, and James J. Little. “Real-time human motion capture with multiple depth cameras.” In 2016 13th Conference on Computer and Robot Vision (CRV), pp. 24-31. IEEE, 2016. https://doi.org/10.1109/CRV.2016.25
Shahroudy, Amir, Jun Liu, Tian-Tsong Ng, and Gang Wang. “NTU RGB+D: A large scale dataset for 3D human activity analysis.” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1010-1019. 2016. https://ieeexplore.ieee.org/document/7780484/
Liu, Jun, Amir Shahroudy, Mauricio Lisboa Perez, Gang Wang, Ling-Yu Duan, and Alex Kot Chichung. “NTU RGB+D 120: A large-scale benchmark for 3D human activity understanding.” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019. https://ieeexplore.ieee.org/abstract/document/8713892/
Rusu, Radu Bogdan, Jan Bandouch, Zoltan Csaba Marton, Nico Blodow, and Michael Beetz. “Action recognition in intelligent environments using point cloud features extracted from silhouette sequences.” In RO-MAN 2008 - The 17th IEEE International Symposium on Robot and Human Interactive Communication, pp. 267-272. IEEE, 2008.
Li, Meng, and Howard Leung. “Graph-based approach for 3D human skeletal action recognition.” Pattern Recognition Letters 87 (2017): 195-202. https://doi.org/10.1016/j.patrec.2016.07.021
Yang, Xiaodong, and Ying Li Tian. “Effective 3D action recognition using EigenJoints.” Journal of Visual Communication and Image Representation 25, no. 1 (2014): 2-11. https://doi.org/10.1016/j.jvcir.2013.03.001
Munaro, Matteo, Gioia Ballin, Stefano Michieletto, and Emanuele Menegatti. “3D flow estimation for human action recognition from coloured point clouds.” Biologically Inspired Cognitive Architectures 5 (2013): 42-51. https://doi.org/10.1016/j.bica.2013.05.008
Wu, Qingqiang, Guanghua Xu, Longting Chen, Ailing Luo, and Sicong Zhang. “Human action recognition based on kinematic similarity in real-time.” PLoS ONE 12, no. 10 (2017). https://doi.org/10.1371/journal.pone.0185719
Shi, Lei, Yifan Zhang, Jian Cheng, and Hanqing Lu. “Skeleton-based action recognition with directed graph neural networks.” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7912-7921. 2019. http://openaccess.thecvf.com/content_CVPR_2019/html/Shi_SkeletonBased_Action_Recognition_With_Directed_Graph_Neural_Networks_CVPR_2019_paper.html
Rusu, Radu Bogdan, and Steve Cousins. “3D is here: Point cloud library (PCL).” In 2011 IEEE international conference on robotics and automation, pp. 1-4. IEEE, 2011. https://pointclouds.org/assets/pdf/pcl_icra2011.pdf
Yang, Zhengyuan, Yuncheng Li, Jianchao Yang, and Jiebo Luo. “Action recognition with spatio-temporal visual attention on skeleton image sequences.” IEEE Transactions on Circuits and Systems for Video Technology 29, no. 8 (2018): 2405-2415. https://ieeexplore.ieee.org/document/8428616/
This work is licensed under a Creative Commons Attribution 4.0 International License.