 Research
 Open Access
Human motion recognition based on SVM in VR art media interaction environment
Human-centric Computing and Information Sciences, volume 9, Article number: 40 (2019)
Abstract
In order to solve the problem of human motion recognition in multimedia interaction scenarios in a virtual reality environment, a motion classification and recognition algorithm based on linear decision and support vector machine (SVM) is proposed. First, a kernel function is introduced into linear discriminant analysis for nonlinear projection, mapping the training samples into a high-dimensional subspace to obtain the best classification feature vectors; this effectively solves the nonlinear problem and enlarges the differences between samples. Then, a genetic algorithm is used to optimize the parameter search of the SVM, making full use of the genetic algorithm's advantages in multidimensional space optimization. The test results show that, compared with other classification and recognition algorithms, the proposed method performs well on multiple performance indicators of human motion recognition and has higher recognition accuracy and better robustness.
Introduction
Today, with the rapid development of computer technologies such as the Internet of Things (IoT), wireless communications, edge computing, and data mining [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18], various advanced multimedia technologies emerge one after another. Thanks to its "immersive" realism, virtual reality (VR) can bring users a new experience through more natural and realistic human–computer interaction [19,20,21]. Many kinds of multimedia applications based on VR technology have gradually become hotspots of the future culture, art, and entertainment markets, such as virtual shopping communities, immersive virtual reality games, virtual landscape roaming, and virtual art stage performances [22,23,24]. Among them, multimedia human–computer interaction technology in art scenes needs to capture and recognize human body motion accurately and in real time in order to achieve better interaction effects and artistic sensory experiences. To enable more natural and effective communication between people and computers, a motion recognition interactive system must be able to accurately identify various complex and varied human actions. As shown in Fig. 1, to digitally preview a dance in a digital performance, the actions of the stage dancers are first captured. Then, as shown in Fig. 2, the captured dance behavior is digitally recognized and presented. Figure 3 shows the interaction of the identified actions in the VR scenario.
In the process of a digital performance, body language can often express the true feelings of the actors better than natural language. Therefore, in a virtual environment, accurate recognition of human–computer interaction is especially important. At this stage, mainstream human motion recognition methods mainly use machine vision technology, involving knowledge of advanced computing disciplines such as image processing, pattern recognition, and machine learning. Among them, image processing methods based on spatiotemporal features and machine learning methods based on representation features have higher robustness and have become the mainstream of current research [25,26,27,28,29]. Although their computational complexity is high, these two kinds of motion recognition methods can recognize continuous motion and interaction. The research direction chosen in this paper is the machine learning based approach. For example, using the Kinect sensor, Shi et al. [27] proposed a human motion recognition method based on the skeleton characteristics of key frames. The method uses the K-means clustering algorithm to extract key frames and two features from human motion video sequences, and uses an SVM classifier to classify the action sequences. Qin and Li [28] proposed a real-time recognition system for portable human gestures based on DSP, which uses a combination of wavelet packet principal component analysis and linear discriminant analysis (LDA). All of the above methods achieve a certain degree of precision and efficiency in human motion recognition. However, human body movements in VR multimedia art scenes are more complicated and their changes more irregular, so the motion data are massive and high-dimensional (with nonlinear feature information); spatial feature extraction therefore needs to reduce the dimensionality as much as possible while still reflecting the various action types. In addition, there is room for improvement in SVM classifier parameter optimization.
In view of the spatiotemporal continuity of human motion data, two recent CNN-based approaches [30, 31] have been proposed. They used convolutional neural networks (CNN) to solve the problem of coherent motion recognition and used convolutional spatiotemporal sequences to capture the dependence between input data. However, the size of the convolution kernel limits the range of dependencies captured between data samples, so typical CNN models are not suitable for recognizing multiple complex motions. Murad and Pyun [32] proposed a human motion classification and recognition algorithm based on deep recurrent neural networks (DRNN). Although its recognition rate is high, the training and recognition process relies heavily on GPU parallel operations, which introduces a certain delay and degrades real-time performance, especially in large digital performances. Thus, their algorithm is not suitable for use in real-time evaluation systems.
In this paper, we propose a human motion recognition method based on LDA and SVM (named LDA-GA-SVM), in order to improve the efficiency and accuracy of human motion recognition in VR human–computer interaction applications. This method mainly focuses on two aspects: (1) improving the recognition rate of motion features; (2) improving the accuracy of motion classification. First, a kernel function is introduced into LDA for nonlinear projection, mapping training samples into a high-dimensional subspace to obtain the best classification feature vectors; this effectively solves the nonlinear problem, enlarges the differences between samples, and reduces the dimensionality of the vector space to improve operating efficiency. Second, the genetic algorithm is used to optimize the parameter search of the SVM, making full use of the genetic algorithm's advantages in multidimensional space optimization and improving the recognition rate. The experimental results verify the validity and accuracy of the proposed method.
In addition, during the experiment, the motion data of the virtual character in human–computer interaction in the VR environment are mainly acquired by an inertial capture device. The process uses wearable inertial sensors to capture the posture data of the main skeletal joints of the human body; after the motion capture data are obtained, the data file can be imported into a skeletal virtual human model to drive the bone movement of the virtual human model.
The rest of this paper is organized as follows. The second section introduces the use of the kernel decision LDA algorithm to extract effective human motion features; the third section introduces the use of the genetically optimized SVM algorithm for accurate motion classification; the fourth section presents the experimental analysis in the VR environment, in which the traditional K-means-SVM algorithm and the proposed LDA-GA-SVM algorithm are compared in terms of precision, accuracy, specificity, and sensitivity, demonstrating the advantages of the proposed method.
Feature extraction based on kernel decision LDA
Linear discriminant analysis is a linear method commonly used for feature extraction. The LDA algorithm is insensitive to changes in illumination and pose and is therefore widely used in image recognition tasks. However, traditional LDA and related algorithms [33] are essentially linear.
Due to the complexity and diversity of human motion in VR scenes, some important high-dimensional nonlinear feature information hidden in the motion data cannot be extracted. Therefore, this paper introduces a kernel function into the LDA algorithm for nonlinear projection to extract features. Combined with the genetically optimized SVM classifier, complex action classification and recognition is finally realized.
In the human motion data extraction application, let \( {\mathbf{A}} \) be the action matrix. In the LDA algorithm, \( {\mathbf{A}} \) is a full-rank matrix with class labels:
Among them, each \( a_{i}\ (1 \le i \le n) \) is a data point in m-dimensional space. Each block matrix \( {\mathbf{B}}_{i} \in {\mathbf{R}}^{m \times n_{i}}\ (1 \le i \le k) \) is the collection of data items in the ith class, \( n_{i} \) is the size of class i, and the total number of data items in data set \( {\mathbf{A}} \) is \( n \). Let \( N_{i} \) denote the set of column indices belonging to class i. The global center \( c \) of \( {\mathbf{A}} \) and the local center \( c_{i} \) of each class \( {\mathbf{A}}_{i} \) are respectively expressed as follows [34]:

$$ c = \frac{1}{n}\sum_{j = 1}^{n} a_{j} ,\quad c_{i} = \frac{1}{n_{i}}\sum_{j \in N_{i}} a_{j} $$
Assume

$$ {\mathbf{S}}_{b} = \sum_{i = 1}^{k} n_{i} (c_{i} - c)(c_{i} - c)^{T} ,\quad {\mathbf{S}}_{w} = \sum_{i = 1}^{k} \sum_{j \in N_{i}} (a_{j} - c_{i})(a_{j} - c_{i})^{T} ,\quad {\mathbf{S}}_{t} = {\mathbf{S}}_{b} + {\mathbf{S}}_{w} $$

Among them, \( {\mathbf{S}}_{b} \), \( {\mathbf{S}}_{w} \) and \( {\mathbf{S}}_{t} \) are called the between-class scatter matrix, the within-class scatter matrix, and the total scatter matrix, respectively.
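As a concrete illustration, the scatter matrices just described can be computed directly from the data. The sketch below is our own minimal numpy rendering of these standard definitions, not code from the paper:

```python
import numpy as np

def lda_scatter(X, y):
    """Between-class (Sb), within-class (Sw), and total (St) scatter matrices."""
    c = X.mean(axis=0)                       # global center
    m = X.shape[1]
    Sb = np.zeros((m, m))
    Sw = np.zeros((m, m))
    for cls in np.unique(y):
        Xi = X[y == cls]                     # data items of class cls
        ci = Xi.mean(axis=0)                 # local (class) center
        d = (ci - c)[:, None]
        Sb += len(Xi) * (d @ d.T)            # weighted between-class term
        Sw += (Xi - ci).T @ (Xi - ci)        # within-class deviations
    return Sb, Sw, Sb + Sw
```

A quick sanity check on the identity \( S_t = S_b + S_w \): the total scatter must equal the scatter of the globally centered data, \( (X - c)^T (X - c) \).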
Then, the standard LDA objective function can be written as:

$$ J({\mathbf{W}}) = \mathop{\arg\max}\limits_{{\mathbf{W}}} \frac{{\left| {\mathbf{W}}^{T} {\mathbf{S}}_{b} {\mathbf{W}} \right|}}{{\left| {\mathbf{W}}^{T} {\mathbf{S}}_{w} {\mathbf{W}} \right|}} $$
It can be seen that the LDA algorithm is essentially a linear method, so it does not perform well on nonlinear problems and suffers from the singularity problem. In order to extract the nonlinear characteristics of the data efficiently, we use the kernel decision LDA to extract features.
The basic idea is to map the original training data samples to the high-dimensional feature space \( H \) by a nonlinear transformation, and then perform linear decision analysis in \( H \). Suppose the nonlinear mapping \( \phi (X) \) maps \( X \) to the high-dimensional feature space \( H \), yielding \( \phi (X) = \{ \phi (x_{1}^{1} ), \ldots ,\phi (x_{i}^{j} ), \ldots ,\phi (x_{c}^{{N_{c} }} )\} \), where \( \phi (x_{i}^{j} ) \in H \) represents the \( x_{i}^{j} \) sample vector in \( H \). Set the kernel matrix to \( {\mathbf{K}} = \phi (X)^{T} \phi (X) = [k_{1}^{1} , \ldots ,k_{i}^{j} , \ldots ,k_{c}^{{N_{c} }} ] \), where \( k_{i}^{j} = \phi (X)^{T} \phi (x_{i}^{j} ) \). The Fisher criterion function in \( H \) is [34]:
where \( w \) is the kernel space projection vector.
where \( u_{i} \) is the mean of the ith class samples in \( H \), \( u \) is the overall mean, and \( {\mathbf{S}}_{w}^{\phi } \) is the within-class scatter matrix in \( H \). \( w \) can be expressed as:
where A = X. Then formula (8) can be expressed as:
Among them, \( {\mathbf{K}}_{t} \) represents the total scatter matrix in kernel space, and \( {\mathbf{K}}_{b} \) represents the between-class scatter matrix in kernel space, calculated as follows [35]:
where \( {\mathbf{K}}_{w} \) is the within-class scatter matrix in kernel space. Let \( {\mathbf{A}}_{\text{opt}} \) denote the feature vectors of a set of optimal solutions that maximize Eq. (13). From Eq. (11) we can obtain the kernel space projection matrix:
For any sample point \( x \), its projection in kernel space is given by:
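The feature extraction described above can be prototyped with a small amount of numpy/scipy code. The sketch below is an illustrative kernel Fisher discriminant implementation under an assumed RBF kernel; the function names, the regularization term, and the kernel parametrization are our own assumptions, not the paper's exact formulation:

```python
import numpy as np
from scipy.linalg import eigh

def rbf_kernel(X, Y, sigma=1.0):
    """Gaussian RBF kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def kernel_lda_fit(X, y, sigma=1.0, reg=1e-6):
    """Solve the kernel Fisher criterion; return projection coefficients A."""
    n = X.shape[0]
    K = rbf_kernel(X, X, sigma)
    classes = np.unique(y)
    m_star = K.mean(axis=1)                  # overall mean of kernel columns
    M = np.zeros((n, n))                     # between-class scatter (kernel space)
    N = np.zeros((n, n))                     # within-class scatter (kernel space)
    for c in classes:
        idx = np.where(y == c)[0]
        n_c = len(idx)
        K_c = K[:, idx]
        m_c = K_c.mean(axis=1)
        d = (m_c - m_star)[:, None]
        M += n_c * (d @ d.T)
        H = np.eye(n_c) - np.full((n_c, n_c), 1.0 / n_c)   # centering matrix
        N += K_c @ H @ K_c.T
    N += reg * np.eye(n)                     # regularize to avoid singularity
    evals, evecs = eigh(M, N)                # generalized eigenproblem M a = λ N a
    return evecs[:, ::-1][:, :len(classes) - 1]  # top discriminant directions

def kernel_lda_transform(A, X_train, X_new, sigma=1.0):
    """Project new samples via k(x_j, x) combinations, as in the text."""
    return rbf_kernel(X_new, X_train, sigma) @ A
```

The projection of a new point is a linear combination of kernel evaluations against the training samples, which mirrors the kernel space projection matrix described above.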
Proposed human motion recognition method
Motion data collection
Different from image processing methods based on spatiotemporal features, the machine learning method based on representation features used in this paper requires motion data acquisition tools with faster transmission speed and higher precision. In the multimedia interaction scenario in a virtual reality environment, the widely used Microsoft Kinect sensor cannot meet these accuracy requirements, so a motion data acquisition device based on inertial sensors is employed. For the specific digital performance process in the VR interactive environment, the wearable hardware devices required for motion acquisition are shown in Fig. 4, and the hardware parameters are shown in Table 1.
Motion data classification based on genetic optimization SVM
This section mainly analyzes the SVM [36] parameter optimization search based on the Gaussian radial basis kernel function. Since different choices of the penalty factor \( C \) and the kernel function parameter \( \sigma \) yield SVMs with different performance, the genetic algorithm is used to optimize these two parameters. The crossover operation in the genetic algorithm is based on floating-point coding [37]:

$$ x_{1}^{\prime} = a x_{1} + (1 - a) x_{2} ,\quad x_{2}^{\prime} = a x_{2} + (1 - a) x_{1} $$

where \( a \) is a random number in the range (0, 1).
The uniform mutation operator is used to perform the mutation operation: for each mutation point, a random value is selected from the specified interval of the corresponding gene to replace the original gene value:

$$ x^{\prime} = U_{min} + r\,(U_{max} - U_{min}) $$

where \( r \) is a random number in the range (0, 1), \( U_{max} \) is the upper bound of the gene position, and \( U_{min} \) is the lower bound of the gene position [27]. The fitness function is:
where \( E \) represents the sum of squared errors and \( b \) represents a constant. The main idea of the improved SVM is to optimize the penalty factor \( C \) and the kernel function parameter \( \sigma \) of the SVM through the genetic algorithm.
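The GA-driven search over \( (C, \sigma) \) can be sketched with scikit-learn's SVC. Everything below is illustrative: the fitness here is cross-validation accuracy rather than the paper's error-based function, and the selection scheme, bounds, and names are our own assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def fitness(ind, X, y):
    """Fitness of an individual (C, sigma): mean 3-fold CV accuracy."""
    C, sigma = ind
    clf = SVC(C=C, gamma=1.0 / (2 * sigma ** 2))   # RBF kernel in sigma form
    return cross_val_score(clf, X, y, cv=3).mean()

def ga_optimize_svm(X, y, pop_size=20, gens=15, pc=0.8, pm=0.05,
                    bounds=((0.1, 100.0), (0.01, 10.0)), seed=0):
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    pop = rng.uniform(lo, hi, size=(pop_size, 2))   # floating-point coding
    for _ in range(gens):
        fit = np.array([fitness(ind, X, y) for ind in pop])
        parents = pop[np.argsort(fit)[::-1][:pop_size // 2]]  # keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            i, j = rng.integers(len(parents), size=2)
            child = parents[i].copy()
            if rng.random() < pc:                    # arithmetic crossover
                a = rng.random()
                child = a * parents[i] + (1 - a) * parents[j]
            if rng.random() < pm:                    # uniform mutation in bounds
                k = rng.integers(2)
                child[k] = lo[k] + rng.random() * (hi[k] - lo[k])
            children.append(np.clip(child, lo, hi))
        pop = np.vstack([parents, children])
    fit = np.array([fitness(ind, X, y) for ind in pop])
    return pop[fit.argmax()], fit.max()
```

The best individual after the final generation provides the \( (C, \sigma) \) pair passed to the final SVM classifier.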
Human motion recognition realization
The main steps of the proposed human motion recognition method are shown in Fig. 5. The core of the preliminary process is searching for the optimal parameters required by the SVM, mainly using the global search capability of the genetic algorithm to improve the SVM's classification performance. The specific steps are as follows:
Step 1. Collect human motion data.
Step 2. Perform kernel matrix feature extraction based on the LDA algorithm.
Step 3. Search for SVM parameters according to the genetic algorithm and determine whether it is optimal.
Step 4. If the parameters are optimal, the search is complete and they are recorded; if not, continue searching.
Step 5. Classify based on the optimized SVM classifier and output the classification result.
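The five steps above can be mirrored with off-the-shelf components. The sketch below is a simplified stand-in, not the paper's exact method: grid search replaces the genetic search, and a Nystroem map approximates the kernel projection before the linear discriminant, so it illustrates the shape of the pipeline only.

```python
from sklearn.pipeline import Pipeline
from sklearn.kernel_approximation import Nystroem
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

def build_recognizer():
    # Step 2 stand-in: explicit nonlinear feature map, then linear discriminant
    pipe = Pipeline([
        ("kmap", Nystroem(kernel="rbf", n_components=50, random_state=0)),
        ("lda", LinearDiscriminantAnalysis()),
        ("svm", SVC(kernel="rbf")),              # Step 5: final classifier
    ])
    # Steps 3-4 stand-in: exhaustive search instead of genetic search
    grid = {"svm__C": [1, 10, 100], "svm__gamma": [0.01, 0.1, 1.0]}
    return GridSearchCV(pipe, grid, cv=3)
```

Calling `fit` on the returned object runs the parameter search and refits the best pipeline; `predict` then outputs the classification result of Step 5.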
Experimental analysis and comparison in VR environment
Experimental environment
The experimental data consist of real-time motion acquisition data based on inertial sensors, 20 GB in total. The experimental data set contains 10 types of actions, with complexity increasing in turn. The system structure of the VR multimedia art scene is shown in Fig. 6. The hardware and software parameters of the experimental environment are shown in Table 2. The relevant parameters of the tested algorithm are: population size 50, maximum number of generations 30, crossover probability 0.8, mutation probability 0.007, b = 1000, α = 0.5, r = 0.2.
Evaluation indicators
In order to quantify the performance of the proposed method, the four evaluation indicators most commonly used in the action classification field are selected [38,39,40]: Precision, Accuracy, Specificity, and Sensitivity, calculated as follows:

$$ Precision = \frac{TP}{TP + FP} ,\quad Accuracy = \frac{TP + TN}{TP + TN + FP + FN} ,\quad Specificity = \frac{TN}{TN + FP} ,\quad Sensitivity = \frac{TP}{TP + FN} $$
where \( TP \) is the number of positive samples correctly classified, \( TN \) is the number of negative samples correctly classified, \( FP \) is the number of negative samples incorrectly classified as positive, and \( FN \) is the number of positive samples incorrectly classified as negative (Table 3).
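In a multi-class setting these counts are usually taken per class (one-vs-rest) and averaged; a minimal binary sketch of the four indicators (our own illustration) is:

```python
def binary_metrics(y_true, y_pred):
    """Compute the four indicators from binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "precision": tp / (tp + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "specificity": tn / (tn + fp),
        "sensitivity": tp / (tp + fn),
    }
```

Note that the denominators differ: precision is taken over predicted positives, sensitivity over actual positives, and specificity over actual negatives.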
Experimental results
In the experiment, using the recognition test data, 10 dance motion types are obtained, as shown in Fig. 3. The recognition performance results for the 10 types of dance motion are shown in Table 4, where the LDA-GA-SVM algorithm proposed in this paper is compared with the K-means-SVM algorithm [27]. As can be seen from Table 4, the proposed algorithm increases the averages of the Precision and Accuracy indicators by 4.401% and 4.903%, respectively. The comparison charts in Figs. 7 and 8 show that the Precision and Accuracy values of the LDA-GA-SVM algorithm are higher than those of the K-means-SVM algorithm at every test point, and are relatively smooth and stable. That is to say, the LDA-GA-SVM algorithm proposed in this paper shows excellent performance in recognizing the 10 motion types. This is because the adopted genetic algorithm has clear advantages in multidimensional space optimization and good global search ability. In addition, the proposed algorithm achieves a more balanced result on both Specificity and Sensitivity; the mean Specificity and Sensitivity values of the two algorithms are 90.833%, 92.128%, 92.78%, and 94.006%, respectively. From the comparison in Fig. 10, it can be seen that the Sensitivity curves of the two algorithms gradually separate over time, and Figs. 9 and 10 show that the index values of the LDA-GA-SVM algorithm are higher than those of the K-means-SVM algorithm, that is, the LDA-GA-SVM algorithm is more sensitive. This is because the kernel decision LDA feature extraction solves the nonlinearity problem of traditional LDA and enlarges the differences between samples, making the performance more stable. In summary, in terms of precision, accuracy, specificity, and sensitivity, the LDA-GA-SVM algorithm proposed in this paper is superior to the K-means-SVM algorithm and can solve the problem of motion recognition in digital performances in a VR environment.
Conclusion
In this paper, we combine the kernel decision LDA algorithm with a genetic-optimization-based SVM algorithm to achieve human motion classification and recognition, in order to improve the accuracy of human motion recognition in VR human–computer interaction applications. A kernel function is introduced into LDA for nonlinear projection, mapping training samples into a high-dimensional subspace to obtain the best classification feature vectors; this effectively solves the nonlinear problem, enlarges the differences between samples, and reduces the dimensionality of the vector space to improve operating efficiency. In addition, the genetic algorithm is used to optimize the parameter search of the SVM. The experimental results verify the effectiveness and advancement of the proposed method. However, the real-time performance of the algorithm in sample training and testing remains to be studied, and the complexity and scalability of the proposed algorithm will be investigated further.
Availability of data and materials
Not applicable.
Abbreviations
VR: virtual reality
LDA: linear discriminant analysis
SVM: support vector machine
GA: genetic algorithm
LDA-GA-SVM: linear discriminant analysis-genetic algorithm-support vector machine algorithm
K-means-SVM: K-means clustering-support vector machine algorithm
References
 1.
Wu TY, Chen CM, Wang KH, Wu JM (2019) Security analysis and enhancement of a certificateless searchable public key encryption scheme for IIoT environments. IEEE Access 7:49232–49239
 2.
Chen CM, Wang KH, Yeh KH, Xiang B, Wu TY (2019) Attacks and solutions on a three-party password-based authenticated key exchange protocol for wireless communications. J Ambient Intell Humaniz Comput 10(8):3133–3142
 3.
Pan JS, Kong L, Sung TW, Tsai PW, Snasel V (2018) Alpha-fraction first strategy for hierarchical wireless sensor networks. J Internet Technol 19(6):1717–1726
 4.
Xiong H, Qin Z (2015) Revocable and scalable certificateless remote authentication protocol with anonymity for wireless body area networks. IEEE Trans Inf Forensics Secur 10(7):1442–1455
 5.
Ni L, Tian F, Ni Q, Yan Y, Zhang J (2019) An anonymous entropy based location privacy protection scheme in mobile social networks. EURASIP J Wirel Commun Netw 2019:93
 6.
Xiong H, Zhao Y, Peng L, Zhang H, Yeh KH (2019) Partially policyhidden attributebased broadcast encryption with secure delegation in edge computing. Future Gener Comput Syst 97:453–461
 7.
Chen CM, Xiang B, Liu Y, Wang KH (2019) A secure authentication protocol for internet of vehicles. IEEE Access 7(1):12047–12057
 8.
Wu TY, Chen CM, Wang KH, Meng C, Wang EK (2019) A provably secure certificateless public key encryption with keyword search. J Chin Inst Eng 42(1):20–28
 9.
Pan JS, Lee CY, Sghaier A, Zeghid M, Xie J (2019) Novel systolization of subquadratic space complexity multipliers based on Toeplitz matrix-vector product approach. IEEE Trans Very Large Scale Integr Syst 27(7):1614–1622
 10.
Gan W, Lin JC, Fournier-Viger P, Chao HC, Philip SY (2019) HUOPM: high-utility occupancy pattern mining. IEEE Trans Cybern. https://doi.org/10.1109/tcyb.2019.2896267
 11.
Lin JC, Zhang Y, Zhang B, Fournier-Viger P, Djenouri Y (2019) Hiding sensitive itemsets with multiple objective optimization. Soft Comput. https://doi.org/10.1007/s00500-019-03829-3
 12.
Lin JC, Fournier-Viger P, Wu L, Gan W, Djenouri Y, Zhang J (2018) PPSF: an open-source privacy-preserving and security mining framework. In: IEEE international conference on data mining workshops (ICDMW), 17–20 Nov 2018, Singapore, pp 1459–1463
 13.
Lin JC, Yang L, Fournier-Viger P, Hong TP (2019) Mining of skyline patterns by considering both frequent and utility constraints. Eng Appl Artif Intell 77:229–238
 14.
Wu JMT, Lin JCW, Tamrakar A (2019) High-utility itemset mining with effective pruning strategies. ACM Trans Knowl Discov Data. https://doi.org/10.1145/3363571
 15.
Zhao Z, Li C, Zhang X, Chiclana F, Viedma EH (2019) An incremental method to detect communities in dynamic evolving social networks. Knowl Based Syst 163:404–415
 16.
Wang X, Ji S, Liang Y, Leung H, Chiu DKW (2019) An unsupervised strategy for defending against multifarious reputation attacks. Appl Intell. https://doi.org/10.1007/s10489-019-01490-9
 17.
Wang J, Gu X, Liu W, Sangaiah AK, Kim HJ (2019) An empower hamilton loop based data collection algorithm with mobile agent for WSNs. Hum Centric Comput Inf Sci 9:18
 18.
Wang J, Gao Y, Wang K, Sangaiah AK, Lim SJ (2019) An affinity propagation based selfadaptive clustering method for wireless sensor networks. Sensors 19(11):2579
 19.
Austin PD, Siddall PJ (2019) Virtual reality for the treatment of neuropathic pain in people with spinal cord injuries: a scoping review. J Spinal Cord Med. https://doi.org/10.1080/10790268.2019.1575554
 20.
Huang C, Zhang Y, Zhu C, Zhang C, Meng H (2019) Chinese sports basketball teaching tactics training system combined with multimedia interactive model and virtual reality technology. Multimed Tools Appl. https://doi.org/10.1007/s11042-019-7298-9
 21.
Hagelsteen K, Johansson R, Ekelund M, Bergenfelz A, Anderberg M (2019) Performance and perception of haptic feedback in a laparoscopic 3D virtual reality simulator. Minim Invasive Ther Allied Technol. https://doi.org/10.1080/13645706.2018.1539012
 22.
Zhang F, Ding G, Lin Q, Lin X, Li Z, Li L (2018) Research of simulation of creative stage scene based on the 3DGans technology. J Inf Hiding Multimed Signal Process 9(6):1430–1443
 23.
Merchant Z, Goetz ET, Cifuentes L, Keeney-Kennicutt W, Davis TJ (2014) Effectiveness of virtual reality-based instruction on students' learning outcomes in K-12 and higher education: a meta-analysis. Comput Educ 70:29–40
 24.
Zhang F, Ding G, Ma L, Zhu Y, Li Z, Xu L (2018) Research on stage creative scene model generation based on series key algorithms. In: Zhao Y, Wu TY, Chang TH, Pan JS, Jain L (eds) Advances in smart vehicular technology, transportation, communication and applications, vol 128. VTCA. Smart Innovation, Systems and Technologies, Springer, pp 170–177
 25.
Riecke BE, Veen HA, Bülthoff HH (2015) Visual homing is possible without landmarks: a path integration study in virtual reality. Presence Teleoperators Virtual Environ 11(5):443–473
 26.
Zhang F, Ding G, Lin X, Chen B, Li Z (2018) An effective method for the abnormal monitoring of stage performance based on visual sensor network. Int J Distrib Sens Netw 14(4):1–11
 27.
Shi X, Liu Y, Zhang D (2015) Human body motion recognition method based on key frames. J Syst Simul 27(10):2401–2408
 28.
Qin Q, Li Y (2014) Real-time recognition system of human gestures based on DSP. Electron Technol Appl 40(7):75–78
 29.
Zhang F, Wu TY, Zheng G (2019) Video salient region detection model based on wavelet transform and feature comparison. EURASIP J Image Video Process. https://doi.org/10.1186/s13640-019-0455-2
 30.
Zhang R, Cao S (2019) Realtime human motion behavior detection via CNN using mmWave radar. IEEE Sensors Lett 3(2):3500104
 31.
Li Z, Zheng Z, Lin F, Leung H, Li Q (2019) Action recognition from depth sequence using depth motion mapsbased local ternary patterns and CNN. Multimed Tools Appl 78(14):19587–19601
 32.
Murad A, Pyun JY (2017) Deep recurrent neural networks for human activity recognition. Sensors 17(11):2556
 33.
Li C, Lu Y, Wu J, Zhang Y, Xia Z, Wang T, Yu D, Chen X, Liu P, Guo J. LDA meets Word2Vec: a novel model for academic abstract clustering. In: International World Wide Web Conferences, in the 2018 web conference companion (WWW 2018). April 23–27, 2018, Lyon, France, ACM, New York, pp 1699–1706
 34.
Yu Y, Pan Z, Hu G, Mo X, Xue J (2016) Kernel dimensionality reduction method based on KLDA. J Univ Sci Technol China 9:749–756
 35.
Zamani B, A A, Nasersharif B (2014) Evolutionary combination of kernels for nonlinear feature transformation. Inf Sci 274:95–107
 36.
Jindal A, Dua A, Kaur K, Singh M, Kumar N, Mishra S (2016) Decision tree and svmbased data analytics for theft detection in smart grid. IEEE Trans Ind Inform 12(3):1005–1016
 37.
Aslahi-Shahri BM, Rahmani R, Chizari M, Maralani A, Eslami M, Golkar MJ, Ebrahimi A (2016) A hybrid method consisting of GA and SVM for intrusion detection system. Neural Comput Appl 27(6):1669–1676
 38.
Rostami A, Masoudi M, Ghaderi-Ardakani A, Arabloo M, Amani M (2016) Effective thermal conductivity modeling of sandstones: SVM framework analysis. Int J Thermophys 37(6):59
 39.
Narang S, Best A, Feng A, Kang S, Manocha D, Shapiro A (2017) Motion recognition of self and others on realistic 3D avatars. Comput Anim Virtual Worlds. https://doi.org/10.1002/cav.1762
 40.
Wu JM, Tsai MH, Huang YZ, Islam SH, Hassan MM, Alelaiwi A, Fortino G (2019) Applying an ensemble convolutional neural network with Savitzky-Golay filter to construct a phonocardiogram prediction model. Appl Soft Comput 78:29–40
Acknowledgements
The authors thank the handling editor for great support and all reviewers for their careful reviewing and constructive suggestions.
Authors’ information
Fuquan Zhang received the Ph.D. degree in School of Computer Science & Technology, Beijing Institute of Technology, China in 2019. Currently, he is a professor of Minjiang University, China. He has received silver medal of the 6.18 cross strait staff innovation exhibition, gold medal of nineteenth National Invention Exhibition in 2010. In 2012, his proposed project has won the gold award of the seventh international invention exhibition. He was awarded the “top ten inventor of Fuzhou” honorary title by Fuzhou, China. He is now a director of Fujian Artificial Intelligence Society. His research interests include artificial intelligence and computer vision.
TsuYang Wu received the Ph.D. degree in Department of Mathematics, National Changhua University of Education, Taiwan in 2010. Currently, he is an associate professor in College of Computer Science and Engineering, Shandong University of Science and Technology, China. In the past, he is an assistant professor in Innovative Information Industry Research Center at Shenzhen Graduate School, Harbin Institute of Technology. He serves as executive editor in Journal of Network Intelligence and as associate editor in Data Science and Pattern Recognition. His research interests include artificial intelligence and information security.
JengShyang Pan received the Ph.D. degree in Electrical Engineering from the University of Edinburgh, U.K. in 1996. Currently, he is the Director of the Fujian Provincial Key Lab of Big Data Mining and Applications, the Dean in College of Information Science and Engineering, and an Assistant President at Fujian University of Technology, China. He is the IET Fellow, UK and was offered Thousand Talent Program in China in 2010. His research interests include artificial intelligence, pattern recognition, and computer vision.
Gangyi Ding is professor, doctoral tutor. He received the Ph.D. degree from Beijing Institute of Technology, China in 1993. In December 2008, he served as Dean of the School of Software, Beijing Institute of Technology. He was hired as a member of the General Technology Department’s Simulation Technology Expert Group, Vice Chairman of the China Computer Simulation Association, Editor of the Computer Simulation Magazine, Member of the Quality and Reliability Expert Group of the National Defense Science and Technology Commission, National 863 Information Technology Specialist, Beijing Multimedia Public Service platform experts, etc. In 2011, as the leader, the Ministry of Education approved the “Digital Performance” of the Ministry of Education to set up an interdisciplinary discipline. In 2008, he was awarded the title of Olympic Liberation Model, Beijing Mass Economic and Technological Innovation Model, and Beijing Education Innovation Model Award by the Beijing Federation of Trade Unions. In 2009, he was awarded the “Top Ten Capital Education News Figures”. In 2010, he was awarded the title of Beijing Advanced Worker. He won the “Support for Contribution Unit Award” and “Innovation Achievement Award” for the National Day of the Capital.
Zuoyong Li, Ph.D., is a Professor, Executive Deputy Director of the Information Processing and Intelligent Control Key Laboratory of Fujian Province, Director of the E-health Research Center of the Internet Innovation Institute of Minjiang College, and Executive Director of the Fujian Artificial Intelligence Society. In July 2010, he received a Ph.D. degree in computer application from Nanjing University of Science and Technology. He is mainly engaged in image processing, pattern recognition, and machine learning. He was selected for the 2013 Outstanding Youth Research Talents Cultivation Program of Fujian Province and the 2015 New Century Excellent Talents Supporting Program of Fujian Province Universities. In 2015, he was selected for the Young Scholar Program of Minjiang College, and he was named a Fuzhou Education System Advanced Worker in 2013 and a Fuzhou City Advanced Educator in 2014.
Funding
This work was supported by the Research Program Foundation of Minjiang University under Grants No. MYK17021, MYK18033, MJW201831408, and No. MJW201833313 and supported by the Major Project of Sichuan Province Key Laboratory of Digital Media Art under Grants No. 17DMAKL01 and supported by Fujian Province Guiding Project under Grants No. 2018H0028. We also acknowledge the solution from National Natural Science Foundation of China (61772254 and 61871204), Key Project of College Youth Natural Science Foundation of Fujian Province (JZ160467), Fujian Provincial Leading Project (2017H0030), Fuzhou Science and Technology Planning Project (2016S116), Program for New Century Excellent Talents in Fujian Province University (NCETFJ) and Program for Young Scholars in Minjiang University (Mjqn201601).
Author information
Contributions
FZ and TYW design the flowchart and main algorithms. Meanwhile, they finish the revise works. JSP designs the experimental environment. GD analyzes the previous related works. ZL analyzes the experimental results. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Zhang, F., Wu, T., Pan, J. et al. Human motion recognition based on SVM in VR art media interaction environment. Hum. Cent. Comput. Inf. Sci. 9, 40 (2019). https://doi.org/10.1186/s13673-019-0203-8
Keywords
 Human motion recognition
 Virtual reality
 Interactive technology
 Support vector machine
 Linear decision