Show simple item record

dc.contributor.author	O'Dwyer, Jonny
dc.date.accessioned	2020-04-14T08:38:34Z
dc.date.available	2020-04-14T08:38:34Z
dc.date.copyright	2019-09
dc.date.issued	2019-12
dc.identifier.citation	O'Dwyer, J. (2019) 'Speech, Head, and Eye-based Cues for Continuous Affect Prediction', in 2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), IEEE, Sep. 2019, pp. 16–20. ISBN: 978-1-7281-3891-6. DOI: 10.1109/ACIIW.2019.8925042	en_US
dc.identifier.isbn	9781728138916
dc.identifier.other	Conferences - Electronics, Computer & Software Engineering - AIT	en_US
dc.identifier.uri	http://research.thea.ie/handle/20.500.12065/3091
dc.description.abstract	Continuous affect prediction involves the discrete, time-continuous regression of affect dimensions. Researchers in this domain are currently embracing multimodal model input, which motivates the investigation of previously unexplored affective cues. Speech-based cues have traditionally received the most attention for affect prediction; however, non-verbal inputs have significant potential to increase the performance of affective computing systems and to enable affect modelling in the absence of speech. Non-verbal inputs that have received little attention for continuous affect prediction include head and eye-based cues. Both are involved in emotion display and perception, and both can be estimated non-intrusively from video using computer vision tools. This work addresses this gap by comprehensively investigating head and eye-based features, and their combination with speech, for continuous affect prediction. Hand-crafted, automatically generated, and convolutional neural network (CNN)-learned features from these modalities will be investigated. The highest-performing feature-set combinations will show how effective these features are for predicting an individual's affective state.	en_US
dc.format	PDF	en_US
dc.language.iso	en	en_US
dc.publisher	IEEE Xplore	en_US
dc.relation.ispartof	2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)	en_US
dc.rights	Attribution-NonCommercial-NoDerivs 3.0 Ireland
dc.rights.uri	http://creativecommons.org/licenses/by-nc-nd/3.0/ie/
dc.subject	Speech	en_US
dc.subject	Head pose	en_US
dc.subject	Eyes	en_US
dc.subject	Affective computing	en_US
dc.subject	Feature engineering	en_US
dc.title	Speech, head, and eye-based cues for continuous affect prediction	en_US
dc.type	Other	en_US
dc.identifier.conference	2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), September 3rd–6th 2019, Cambridge, UK
dc.identifier.orcid	https://orcid.org/0000-0002-6073-567X
dc.rights.access	Open Access	en_US
dc.subject.department	Faculty of Engineering & Informatics AIT	en_US
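
The abstract above describes feature-level fusion of speech, head, and eye cues for the time-continuous regression of affect dimensions such as valence and arousal. As a minimal illustrative sketch only (not the paper's actual pipeline: the feature dimensions, the synthetic data, and the choice of a linear support vector regressor are all assumptions), frame-aligned features from each modality can be concatenated and fed to a regressor, with the concordance correlation coefficient (CCC) used for evaluation as is standard in this field:

import numpy as np
from sklearn.svm import LinearSVR


def ccc(y_true, y_pred):
    """Concordance correlation coefficient (CCC), the usual
    evaluation metric for continuous affect prediction."""
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    cov = ((y_true - mu_t) * (y_pred - mu_p)).mean()
    return 2 * cov / (y_true.var() + y_pred.var() + (mu_t - mu_p) ** 2)


# Hypothetical frame-aligned feature matrices, one row per video frame.
# All dimensions and the synthetic data are illustrative assumptions,
# not values taken from the paper.
rng = np.random.default_rng(0)
n = 1000
speech = rng.normal(size=(n, 88))  # e.g. acoustic low-level descriptors
head = rng.normal(size=(n, 6))     # head pose: 3 rotations + 3 translations
eye = rng.normal(size=(n, 12))     # gaze angles, blink and pupil statistics
arousal = rng.uniform(-1.0, 1.0, size=n)  # per-frame gold-standard label

# Feature-level (early) fusion: concatenate the modalities per frame.
fused = np.hstack([speech, head, eye])

# Chronological train/test split and a simple linear regressor.
split = int(0.8 * n)
model = LinearSVR(max_iter=10_000).fit(fused[:split], arousal[:split])
pred = model.predict(fused[split:])
print(f"test CCC: {ccc(arousal[split:], pred):.3f}")

On real corpora the labels would be annotator-derived valence/arousal traces, and concatenation is only one fusion strategy; the abstract also mentions CNN-learned features, which would replace the hand-crafted matrices above.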


Files in this item


This item appears in the following Collection(s)


Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivs 3.0 Ireland