
dc.contributor.author  O'Dwyer, Jonny
dc.contributor.author  Flynn, Ronan
dc.contributor.author  Murray, Niall
dc.date.accessioned  2020-05-08T09:04:05Z
dc.date.available  2020-05-08T09:04:05Z
dc.date.copyright  2017
dc.date.issued  2017-12
dc.identifier.citation  O'Dwyer, J., Flynn, R., & Murray, N. (2017). Continuous affect prediction using eye gaze and speech. In 2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 13-16 November, Kansas City, MO. doi: 10.1109/BIBM.2017.8217968  en_US
dc.identifier.other  Other - Electronics, Computer & Software Engineering AIT  en_US
dc.identifier.uri  http://research.thea.ie/handle/20.500.12065/3172
dc.description.abstract  Affective computing research traditionally focused on labeling a person's emotion as one of a discrete number of classes, e.g. happy or sad. In recent times, more attention has been given to continuous affect prediction across dimensions in the emotional space, e.g. arousal and valence. Continuous affect prediction is the task of predicting a numerical value for different emotion dimensions. Continuous affect prediction is powerful in domains involving real-time audio-visual communication, which could include remote or assistive technologies for the psychological assessment of subjects. Modalities used for continuous affect prediction may include speech, facial expressions and physiological responses. As opposed to single-modality analysis, the research community has combined multiple modalities to improve the accuracy of continuous affect prediction. In this context, this paper investigates a continuous affect prediction system using the novel combination of speech and eye gaze. A new eye gaze feature set is proposed. This novel approach uses open-source software for real-time affect prediction in audio-visual communication environments. A unique advantage of the human-computer interface used here is that it does not require the subject to wear specialized and expensive eye-tracking headsets or intrusive devices. The results indicate that the combination of speech and eye gaze improves arousal prediction by 3.5% and valence prediction by 19.5% compared to using speech alone.  en_US
dc.format  PDF  en_US
dc.language.iso  en  en_US
dc.publisher  IEEE  en_US
dc.relation.ispartof  2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)  en_US
dc.rights  Attribution-NonCommercial-NoDerivs 3.0 Ireland
dc.rights.uri  http://creativecommons.org/licenses/by-nc-nd/3.0/ie/
dc.subject  Speech  en_US
dc.subject  Eye gaze  en_US
dc.subject  Affective computing  en_US
dc.subject  Human computer interface  en_US
dc.subject  Assistive technologies  en_US
dc.title  Continuous affect prediction using eye gaze and speech  en_US
dc.type  Other  en_US
dc.description.peerreview  yes  en_US
dc.identifier.conference  2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 13-16 November, Kansas City, MO
dc.identifier.doi  10.1109/BIBM.2017.8217968
dc.identifier.orcid  https://orcid.org/0000-0002-6073-567X
dc.identifier.orcid  https://orcid.org/0000-0002-6475-005X
dc.identifier.orcid  https://orcid.org/0000-0002-5919-0596
dc.rights.access  Open Access  en_US
dc.subject.department  Faculty of Engineering & Informatics AIT  en_US
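
The abstract describes feature-level fusion of speech and eye gaze features for predicting continuous arousal and valence values. The short sketch below illustrates that general idea only; it is not the authors' implementation. The synthetic features and their dimensionalities, the ridge regressor, and the concordance correlation coefficient (CCC) metric are assumptions chosen for illustration (CCC is a common metric in continuous affect prediction work, but the paper itself should be consulted for the actual feature sets, models and evaluation protocol).

# Minimal sketch (assumptions noted above): early fusion of speech and
# eye-gaze feature vectors, followed by per-dimension regression.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

def concordance_cc(y_true, y_pred):
    """Concordance correlation coefficient, a common metric for
    continuous (arousal/valence) affect prediction."""
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = np.mean((y_true - mu_t) * (y_pred - mu_p))
    return 2 * cov / (var_t + var_p + (mu_t - mu_p) ** 2)

# Synthetic stand-ins for per-frame speech and eye-gaze feature vectors
# (hypothetical dimensionalities; a real system would extract these with
# an audio toolkit and a webcam-based gaze estimator).
rng = np.random.default_rng(0)
n_frames = 2000
speech_feats = rng.normal(size=(n_frames, 40))   # e.g. prosodic/spectral features
gaze_feats = rng.normal(size=(n_frames, 12))     # e.g. gaze direction/fixation statistics
arousal = 0.6 * speech_feats[:, 0] + 0.2 * gaze_feats[:, 0] + rng.normal(0, 0.3, n_frames)
valence = 0.3 * speech_feats[:, 1] + 0.5 * gaze_feats[:, 1] + rng.normal(0, 0.3, n_frames)

# Early (feature-level) fusion: concatenate the modality features per frame.
fused = np.hstack([speech_feats, gaze_feats])

for name, target in [("arousal", arousal), ("valence", valence)]:
    X_tr, X_te, y_tr, y_te = train_test_split(fused, target, test_size=0.3, random_state=0)
    model = Ridge(alpha=1.0).fit(X_tr, y_tr)
    print(f"{name}: CCC = {concordance_cc(y_te, model.predict(X_te)):.3f}")

Comparing the fused model against one trained on the speech features alone would mirror the kind of unimodal-versus-multimodal comparison the abstract reports, though with synthetic data the numbers carry no meaning.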



Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivs 3.0 Ireland