AVEC 2013 - 3rd International Audio/Visual Emotion Challenge and Workshop

Date: 2013-10-21 - 2013-10-25

Deadline: 2013-07-01

Venue: Catalunya, Spain

Website: https://sspnet.eu/avec2013

Topics/Call for Papers

The Audio/Visual Emotion Challenge and Workshop (AVEC 2013) will be the third competition event aimed at the comparison of multimedia processing and machine learning methods for automatic audio, visual, and audiovisual emotion analysis, with all participants competing under strictly the same conditions. The goal of the challenge is to provide a common benchmark test set for multimodal information processing and to bring together the audio and video emotion recognition communities, in order to compare the relative merits of the two approaches under well-defined and strictly comparable conditions, and to establish to what extent fusion of the approaches is possible and beneficial. A second motivation is the need to advance emotion recognition systems so that they can handle naturalistic behavior in large volumes of un-segmented, non-prototypical, and non-preselected data, as this is exactly the type of data that both multimedia retrieval and human-machine/human-robot communication interfaces have to face in the real world.
We are calling for teams to participate in emotion recognition from acoustic audio analysis, linguistic audio analysis, video analysis, or any combination of these. The SEMAINE database of naturalistic video and audio of human-agent interactions, along with labels for four affect dimensions, will be used as the benchmarking database. Emotion will have to be recognized in terms of continuous-time, continuous-valued dimensional affect in four dimensions: arousal, expectation, power, and valence. Two sub-challenges are organised: the first involves fully continuous affect recognition, where the level of affect has to be predicted for every moment of the recording; the second requires participants to predict the level of affect at the word level, that is, only when the user is speaking.
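
To illustrate the shape of the fully continuous sub-challenge, the short Python sketch below regresses the four affect dimensions from per-frame features and scores each predicted trace against the gold trace. This is a minimal illustration only, not the official challenge baseline: the synthetic data, the ridge regressor, and the per-dimension Pearson-correlation scoring are all assumptions standing in for real feature extraction from the SEMAINE recordings and for the official evaluation protocol.

# Minimal sketch of a frame-level system for fully continuous affect
# prediction. NOT the official AVEC baseline: features, regressor, and
# metric are placeholder assumptions for illustration.
import numpy as np
from sklearn.linear_model import Ridge

DIMENSIONS = ["arousal", "expectation", "power", "valence"]

def train_and_evaluate(X_train, y_train, X_test, y_test):
    """X_*: (n_frames, n_features) per-frame feature arrays.
    y_*: (n_frames, 4) continuous affect labels, one column per dimension."""
    scores = {}
    for d, name in enumerate(DIMENSIONS):
        model = Ridge(alpha=1.0)  # one simple linear regressor per dimension
        model.fit(X_train, y_train[:, d])
        pred = model.predict(X_test)
        # Pearson correlation between the predicted and gold affect traces
        scores[name] = np.corrcoef(pred, y_test[:, d])[0, 1]
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-in data; a real system would extract audio/visual
    # features from the SEMAINE recordings.
    X_tr, X_te = rng.normal(size=(1000, 20)), rng.normal(size=(200, 20))
    w = rng.normal(size=(20, 4))
    y_tr, y_te = X_tr @ w, X_te @ w
    print(train_and_evaluate(X_tr, y_tr, X_te, y_te))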
Besides participation in the Challenge, we are calling for papers addressing the overall topics of this workshop, in particular works that address the differences between audio and video processing of emotive data, and the issues concerning combined audio-visual emotion recognition.
Program Committee
Elisabeth André, Universität Augsburg, Germany
Anton Batliner, Universität Erlangen-Nuremberg, Germany
Felix Burkhardt, Deutsche Telekom, Germany
Rama Chellappa, University of Maryland, USA
Mohamed Chetouani, Institut des Systèmes Intelligents et de Robotique (ISIR), France
Jeff Cohn, University of Pittsburgh/Carnegie Mellon University, USA
Laurence Devillers, Laboratoire d’Informatique pour la Mécanique et les Sciences de l’Ingénieur (LIMSI), France
Julien Epps, University of New South Wales, Australia
Roland Göcke, Australian National University, Australia
Hatice Gunes, Queen Mary University of London, UK
Aleix Martinez, Ohio State University, USA
Marc Méhu, University of Geneva, Switzerland
Louis-Philippe Morency, University of Southern California, USA
Marcello Mortillaro, University of Geneva, Switzerland
Stefan Scherer, University of Southern California, USA
Stefan Steidl, Universität Erlangen-Nuremberg, Germany
Jianhua Tao, Chinese Academy of Sciences, China
Fernando de la Torre, Carnegie Mellon University, USA
Stefanos Zafeiriou, Imperial College London, UK
