Publication:
Learning and Recognizing Human Action from Skeleton Movement with Deep Residual Neural Networks

dc.affiliation.dpto: UC3M. Departamento de Informática
dc.affiliation.grupoinv: UC3M. Grupo de Investigación: Inteligencia Artificial Aplicada (GIAA)
dc.contributor.author: Pham, Huy-Hieu
dc.contributor.author: Khoudour, Louahdi
dc.contributor.author: Crouzil, Alain
dc.contributor.author: Zegers, Pablo
dc.contributor.author: Velastin Carroza, Sergio Alejandro
dc.date.accessioned: 2019-09-30T08:59:05Z
dc.date.available: 2019-09-30T08:59:05Z
dc.date.issued: 2017-07-11
dc.description: This paper has been presented at the 8th International Conference of Pattern Recognition Systems (ICPRS 2017).
dc.description.abstract: Automatic human action recognition is indispensable for almost all artificial intelligence systems, such as video surveillance, human-computer interfaces, video retrieval, etc. Despite much progress, recognizing actions in an unknown video is still a challenging task in computer vision. Recently, deep learning algorithms have proved their great potential in many vision-related recognition tasks. In this paper, we propose the use of Deep Residual Neural Networks (ResNets) to learn and recognize human action from skeleton data provided by a Kinect sensor. Firstly, the body joint coordinates are transformed into 3D arrays and saved in RGB image space. Five different deep learning models based on ResNet have been designed to extract image features and classify them into classes. Experiments are conducted on two public video datasets for human action recognition containing various challenges. The results show that our method achieves state-of-the-art performance compared with existing approaches.
dc.description.sponsorship: This work was supported by the Cerema Research Center and Universidad Carlos III de Madrid. Sergio A. Velastin has received funding from the European Union's Seventh Framework Programme for Research, Technological Development and Demonstration under grant agreement No. 600371, el Ministerio de Economía, Industria y Competitividad (COFUND2013-51509), el Ministerio de Educación, Cultura y Deporte (CEI-15-17) and Banco Santander.
dc.format.extent: 6
dc.identifier.bibliographicCitation: Pham, H.H., Khoudour, L., Crouzil, A., Zegers, P. and Velastin, S.A. (2017). Learning and recognizing human action from skeleton movement with deep residual neural networks. In 8th International Conference of Pattern Recognition Systems (ICPRS 2017).
dc.identifier.doi: https://doi.org/10.1049/cp.2017.0154
dc.identifier.isbn: 978-1-78561-652-5
dc.identifier.publicationtitle: 8th International Conference of Pattern Recognition Systems (ICPRS 2017)
dc.identifier.uri: https://hdl.handle.net/10016/28918
dc.identifier.uxxi: CC/0000027464
dc.language.iso: eng
dc.publisher: The Institution of Engineering and Technology
dc.relation.eventdate: 11-13 July 2017
dc.relation.eventplace: Madrid, Spain
dc.relation.eventtitle: 8th International Conference of Pattern Recognition Systems (ICPRS 2017)
dc.relation.projectID: Gobierno de España. COFUND2013-51509
dc.relation.projectID: Gobierno de España. CEI-15-17
dc.relation.projectID: info:eu-repo/grantAgreement/EC/H2020/600371
dc.rights: © 2017 IEEE.
dc.rights.accessRights: open access
dc.subject.eciencia: Informática
dc.subject.other: Action recognition
dc.subject.other: ResNet
dc.subject.other: Skeleton
dc.subject.other: Kinect
dc.title: Learning and Recognizing Human Action from Skeleton Movement with Deep Residual Neural Networks
dc.type: conference paper
dc.type.hasVersion: AM
dspace.entity.type: Publication
Files
Original bundle
Name: learning_ICPRS_2017_ps.pdf
Size: 504.24 KB
Format: Adobe Portable Document Format