Publication: Desarrollo de una app para personas invidentes de reconocimiento de emociones a partir de expresiones faciales
Publication date
2017-09
Defense date
2017-10-10
Abstract
Thanks to advances in many fields of science and engineering (biomedicine, mechanics, computing...), people suffering from diverse and complex disabilities have seen a notable improvement in their quality of life. Prostheses, hearing aids, artificial organs, and sensors are just a few examples of what technology has given us to help people with very different disabilities.
This project focuses on a disability that affects more than 55,000 people in our country: blindness. We review the state of the art of technology aimed at this disability, its nature, and forecasts of the number of people affected. We concentrate on the field of mobile phones: these devices are ever more sophisticated and available to an ever-larger share of the population, which makes them a crucial tool for easing the life of any disabled person, not only the blind.
The project draws on technologies developed over recent decades for sentiment analysis and emotion recognition. Among the existing subcategories, we focus on the analysis of facial expressions and gestural events. We build a mobile application using the Affectiva libraries to create a translator of expressions, physical appearance, and facial events into speech. This allows a blind person to obtain ambient and visual information about their interlocutor which, because of this disability, would be impossible to acquire in any other way.
The uses that can be drawn from this tool are very varied, but they all share a common point: extracting information from part of body-language communication. By definition, this form of communication is cut off from people with this disability (except for events involving physical contact), which opens up a very broad range of possibilities. For example, in a family setting it would be possible to learn what facial gestures our interlocutor makes when reacting to an event; in a work setting, it would be possible to obtain information independently about our interlocutor's appearance, which could, for instance, lead us to adjust how we express ourselves from then on.
In short, the benefits revolve around all the information that people without severe visual disabilities can obtain from their interlocutors without a word being spoken, simply by observing facial appearance and gestures.
Since this disability is universal and does not distinguish between nationalities or languages, the application has been translated into eight languages.
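The core step the abstract describes, turning per-frame emotion estimates into a spoken sentence, can be sketched as follows. This is a minimal illustration, not the thesis code: the scores dictionary mimics the per-emotion confidence values (on a 0-100 scale) that an emotion-recognition SDK such as Affectiva's reports for each camera frame, and the threshold, phrase table, and function names are hypothetical choices made here for clarity.

```python
# Illustrative mapping from emotion names to spoken descriptions.
# In the real app these strings would be localized (eight languages).
EMOTION_PHRASES = {
    "joy": "Your interlocutor looks happy.",
    "sadness": "Your interlocutor looks sad.",
    "anger": "Your interlocutor looks angry.",
    "surprise": "Your interlocutor looks surprised.",
}

def describe_emotions(scores, threshold=50.0):
    """Return the phrase for the dominant emotion in `scores`
    (a dict of emotion name -> confidence, 0-100), or None if no
    known emotion exceeds the confidence threshold."""
    if not scores:
        return None
    emotion, value = max(scores.items(), key=lambda kv: kv[1])
    if value < threshold or emotion not in EMOTION_PHRASES:
        return None
    return EMOTION_PHRASES[emotion]

print(describe_emotions({"joy": 87.0, "sadness": 3.0}))
# → Your interlocutor looks happy.
```

The returned string would then be handed to the platform's text-to-speech engine (on Android, for example, the `TextToSpeech` API) so the blind user hears the description aloud.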
Keywords
Software development, Computational biology, Psychology, Emotion recognition, Artificial intelligence, Visual impairment, Facial recognition, Face detection, Emotions