Publication: Bag of Deep Features for Instructor Activity Recognition in Lecture Room
dc.affiliation.dpto | UC3M. Departamento de Informática | es |
dc.affiliation.grupoinv | UC3M. Grupo de Investigación: Inteligencia Artificial Aplicada (GIAA) | es |
dc.contributor.author | Nida, Nudrat | |
dc.contributor.author | Yousaf, Muhammad Haroon | |
dc.contributor.author | Irtaza, Aun | |
dc.contributor.author | Velastin Carroza, Sergio Alejandro | |
dc.contributor.funder | European Commission | en |
dc.contributor.funder | Ministerio de Economía y Competitividad (España) | es |
dc.contributor.funder | Ministerio de Educación, Cultura y Deporte (España) | es |
dc.date.accessioned | 2019-09-26T10:02:53Z | |
dc.date.available | 2019-09-26T10:02:53Z | |
dc.date.issued | 2019-01 | |
dc.description | This paper has been presented at: 25th International Conference on MultiMedia Modeling (MMM2019) | en |
dc.description.abstract | This research explores contextual visual information in the lecture room to help an instructor assess the effectiveness of the delivered lecture. The objective is to enable a self-evaluation mechanism for instructors to improve lecture productivity by understanding their activities. A teacher's effectiveness has a remarkable impact on students' performance, helping them succeed academically and professionally. Therefore, the process of lecture evaluation can significantly contribute to improving academic quality and governance. In this paper, we propose a vision-based framework to recognize the activities of the instructor for self-evaluation of the delivered lectures. The proposed approach uses motion templates of instructor activities and describes them through a Bag-of-Deep-Features (BoDF) representation. Deep spatio-temporal features extracted from motion templates are used to compile a visual vocabulary, which is quantized to optimize the learning model. A Support Vector Machine classifier generates the model and predicts the instructor activities. We evaluated the proposed scheme on a self-captured lecture room dataset, IAVID-1. Eight instructor activities: pointing towards the student, pointing towards the board or screen, idle, interacting, sitting, walking, using a mobile phone and using a laptop, are recognized with 85.41% accuracy. As a result, the proposed framework enables instructor activity recognition without human intervention. | en |
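The abstract outlines a Bag-of-Deep-Features pipeline: deep features extracted from motion templates are quantized into a visual vocabulary, each video is encoded as a histogram over that vocabulary, and an SVM predicts the activity class. A minimal sketch of that pipeline follows, under loud assumptions: the deep feature extraction is simulated with random vectors (the paper uses deep spatio-temporal features from motion templates), and the vocabulary size and dataset sizes are illustrative, not the paper's.

```python
# Hypothetical sketch of a Bag-of-Deep-Features (BoDF) pipeline, assuming:
# - simulated "deep" features per video (the paper extracts deep
#   spatio-temporal features from motion templates instead)
# - k-means clustering to build the visual vocabulary
# - a linear SVM over normalized visual-word histograms
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

n_videos, feats_per_video, feat_dim, vocab_size = 40, 30, 64, 8
labels = rng.integers(0, 8, size=n_videos)  # 8 instructor activity classes

# Simulated deep features, one set per video; the class-dependent mean shift
# stands in for class-discriminative deep features (assumption).
video_feats = [
    rng.normal(size=(feats_per_video, feat_dim)) + labels[i]
    for i in range(n_videos)
]

# Build the visual vocabulary by clustering all features from all videos.
kmeans = KMeans(n_clusters=vocab_size, n_init=10, random_state=0)
kmeans.fit(np.vstack(video_feats))

def bodf_histogram(feats):
    """Encode one video as a normalized histogram of visual-word counts."""
    words = kmeans.predict(feats)
    hist = np.bincount(words, minlength=vocab_size).astype(float)
    return hist / hist.sum()

X = np.array([bodf_histogram(f) for f in video_feats])

# SVM over the BoDF histograms predicts the activity label.
clf = SVC(kernel="linear").fit(X, labels)
acc = clf.score(X, labels)
```

Quantizing features into a fixed-size vocabulary is what lets videos of different lengths share one histogram representation, which is the point of the bag-of-features step before the SVM.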
dc.description.sponsorship | Sergio A. Velastin has received funding from the Universidad Carlos III de Madrid, the European Union's Seventh Framework Programme for research, technological development and demonstration under grant agreement no. 600371, the Ministerio de Economía, Industria y Competitividad (COFUND2014-51509), the Ministerio de Educación, Cultura y Deporte (CEI-15-17) and Banco Santander. | en |
dc.format.extent | 12 | |
dc.identifier.bibliographicCitation | Nida, N., Yousaf, M. H., Irtaza, A. & Velastin, S. A. (2019). Bag of Deep Features for Instructor Activity Recognition in Lecture Room. In MultiMedia Modeling, 11296, pp. 481-492. | en |
dc.identifier.doi | https://doi.org/10.1007/978-3-030-05716-9_39 | |
dc.identifier.isbn | 978-3-030-05715-2 | |
dc.identifier.publicationfirstpage | 481 | |
dc.identifier.publicationlastpage | 492 | |
dc.identifier.publicationtitle | MultiMedia Modeling | en |
dc.identifier.publicationvolume | 11296 | |
dc.identifier.uri | https://hdl.handle.net/10016/28908 | |
dc.identifier.uxxi | CC/0000029976 | |
dc.language.iso | eng | en |
dc.publisher | Springer | en |
dc.relation.eventdate | 08-11 January 2019 | en |
dc.relation.eventplace | Thessaloniki, Greece | en |
dc.relation.eventtitle | 25th International Conference on MultiMedia Modeling (MMM2019) | en |
dc.relation.projectID | info:eu-repo/grantAgreement/EC/H2020/600371 | en |
dc.relation.projectID | Gobierno de España. COFUND2013-51509 | es |
dc.relation.projectID | Gobierno de España. CEI-15-17 | es |
dc.rights | © Springer Nature Switzerland AG 2019 | en |
dc.rights.accessRights | open access | en |
dc.subject.eciencia | Informática | es |
dc.subject.other | Human activity recognition | en |
dc.subject.other | Instructor activity recognition | en |
dc.subject.other | Motion templates | en |
dc.subject.other | Academic quality assurance | en |
dc.title | Bag of Deep Features for Instructor Activity Recognition in Lecture Room | en |
dc.type | conference paper | * |
dc.type.hasVersion | AM | * |
dspace.entity.type | Publication |