SiMPE 2014 - Workshop on Speech and Sound in Mobile and Pervasive Environments (SiMPE)
Topics/Call for Papers
The SiMPE workshop series started in 2006 with the goal of enabling speech processing on mobile and embedded devices. The SiMPE 2012 and 2013 workshops extended the notion of audio to non-speech sounds, and the scope thus expanded to speech and sound. SiMPE 2010 and 2011 brought together researchers from the speech and HCI communities. Speech-based user interaction in cars was a focus area in 2009. Multimodality received more attention in SiMPE 2008. In SiMPE 2007, the focus was on developing regions.
With SiMPE 2014, the 9th in the series, we continue to explore speech together with sound. Just as language processing and text-to-speech synthesis form part of the voice-driven interaction loop, sensors can track continuous human activities such as singing, walking, or shaking the mobile phone, and non-speech audio can facilitate continuous interaction. The technologies underlying speech processing and sound processing are quite different, and these communities have worked mostly independently of each other. Yet, for multimodal interaction on mobile devices, it is natural to ask whether and how speech and sound can be combined and used more effectively and naturally.
Goals of SiMPE
SiMPE has only two ambitious goals:
To provide a platform that brings together researchers from speech processing, sound design, algorithm design, application development, and UI design to fuel faster growth of this multi-disciplinary area.
To pose interesting problems to this community that will foster cross-pollination of ideas and hopefully define the course that SiMPE research should take over the coming years.
Intended Audience
This burgeoning multi-disciplinary area invites researchers interested in any aspect of the intersection of speech processing, sound interaction, and mobile computing -- including speech recognition, speech synthesis, multimodal interfaces, mobile HCI, mobile applications, voice user interface design, memory- and energy-efficient algorithms, and UI design -- to meet and pave the way forward. We anticipate a good mix of international industrial and academic participation, which should lead to lively discussions.