Publication:
Multi-view human action recognition using 2D motion templates based on MHIs and their HOG description

dc.affiliation.dpto: UC3M. Departamento de Informática
dc.affiliation.grupoinv: UC3M. Grupo de Investigación: Inteligencia Artificial Aplicada (GIAA)
dc.contributor.author: Murtaza, Fiza
dc.contributor.author: Yousaf, Muhammad Haroon
dc.contributor.author: Velastin Carroza, Sergio Alejandro
dc.date.accessioned: 2018-04-02T08:40:56Z
dc.date.available: 2018-04-02T08:40:56Z
dc.date.issued: 2016-10-01
dc.description.abstract: In this study, a new multi-view human action recognition approach is proposed that exploits low-dimensional motion information of actions. Before feature extraction, pre-processing steps are performed to remove noise from the silhouettes, incurred due to imperfect, but realistic, segmentation. Two-dimensional motion templates based on the motion history image (MHI) are computed for each view/action video. Histograms of oriented gradients (HOG) are used as an efficient description of the MHIs, which are classified using a nearest neighbour (NN) classifier. Compared with existing approaches, the proposed method has three advantages: (i) it does not require a fixed camera setup during the training and testing stages, so missing camera views can be tolerated; (ii) it has lower memory and bandwidth requirements; and hence (iii) it is computationally efficient, which makes it suitable for real-time action recognition. As far as the authors know, this is the first report of results on the MuHAVi-uncut dataset, which has a large number of action categories and a large set of camera views with noisy silhouettes, and which future workers can use as a baseline to improve on. Experimental results on this multi-view dataset give a high accuracy rate of 95.4% using the leave-one-sequence-out cross-validation technique and compare well with similar state-of-the-art approaches.
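The pipeline the abstract describes (MHI motion template, HOG-style description, NN classification) can be sketched as follows. This is a minimal illustrative sketch in plain NumPy, not the authors' implementation: the global orientation histogram stands in for a full block-normalised HOG, and all function names, thresholds, and parameters here are assumptions.

```python
import numpy as np

def motion_history_image(frames, tau=10):
    """Motion history image: pixels where frame-to-frame motion occurs
    are set to tau; elsewhere the previous value decays by 1 per frame.
    Returns an MHI normalised to [0, 1]."""
    mhi = np.zeros(frames[0].shape, dtype=np.float32)
    for prev, curr in zip(frames, frames[1:]):
        motion = np.abs(curr.astype(np.int16) - prev.astype(np.int16)) > 0
        mhi = np.where(motion, float(tau), np.maximum(mhi - 1.0, 0.0))
    return mhi / tau

def hog_like_descriptor(image, bins=9):
    """Simplified HOG-style descriptor: a single global histogram of
    unsigned gradient orientations, weighted by gradient magnitude
    and L2-normalised (a real HOG adds cells and block normalisation)."""
    gy, gx = np.gradient(image.astype(np.float32))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # fold into [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def nearest_neighbour(train_feats, train_labels, query):
    """1-NN by Euclidean distance over descriptor vectors."""
    dists = np.linalg.norm(np.asarray(train_feats) - query, axis=1)
    return train_labels[int(np.argmin(dists))]

if __name__ == "__main__":
    # Toy example: a 4x4 silhouette block translating rightward.
    frames = [np.zeros((16, 16), dtype=np.uint8) for _ in range(5)]
    for t, f in enumerate(frames):
        f[4:8, t:t + 4] = 255
    descriptor = hog_like_descriptor(motion_history_image(frames))
    print(descriptor)
```

Because each video is reduced to one MHI and then to one short descriptor per view, the memory, bandwidth, and compute costs the abstract mentions stay low regardless of sequence length.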
dc.description.sponsorship: Sergio A. Velastin acknowledges the Chilean National Science and Technology Council (CONICYT) for its funding under grant CONICYT-Fondecyt Regular no. 1140209 (“OBSERVE”). He is currently funded by the Universidad Carlos III de Madrid, the European Union’s Seventh Framework Programme for research, technological development and demonstration under grant agreement no. 600371, the Ministerio de Economía y Competitividad (COFUND2013-51509) and Banco Santander.
dc.format.extent: 24
dc.format.mimetype: application/pdf
dc.identifier.bibliographicCitation: Murtaza, F., Haroon Yousaf, M., Velastin, S.A. (2016). Multi-view human action recognition using 2D motion templates based on MHIs and their HOG description. IET Computer Vision, 10 (7), pp. 758-767, Oct. 2016.
dc.identifier.doi: https://doi.org/10.1049/iet-cvi.2015.0416
dc.identifier.issn: 1751-9632
dc.identifier.publicationfirstpage: 758
dc.identifier.publicationissue: 7
dc.identifier.publicationlastpage: 767
dc.identifier.publicationtitle: IET Computer Vision
dc.identifier.publicationvolume: 10
dc.identifier.uri: https://hdl.handle.net/10016/26578
dc.identifier.uxxi: AR/0000019326
dc.language.iso: eng
dc.publisher: IET
dc.relation.projectID: Gobierno de España. COFUND2013-51509
dc.relation.projectID: info:eu-repo/grantAgreement/EC/FP7/600371
dc.rights: © 2016 IET
dc.rights.accessRights: open access
dc.subject.other: cameras
dc.subject.other: image classification
dc.subject.other: image segmentation
dc.subject.other: motion estimation
dc.subject.other: video signal processing
dc.title: Multi-view human action recognition using 2D motion templates based on MHIs and their HOG description
dc.type: research article
dc.type.hasVersion: VoR
dspace.entity.type: Publication
Files
Original bundle
Name: multiview_IET_2016_ps.pdf
Size: 830.19 KB
Format: Adobe Portable Document Format