
ICMI-MLMI 2009 - The Eleventh International Conference on Multimodal Interfaces and The Sixth Workshop on Machine Learning for Multimodal Interfaces ICMI-MLMI 2009

Date: November 2-6, 2009

Deadline: May 1, 2009

Venue: Cambridge, MA, USA

Website: http://icmi2009.acm.org/CFP.html

Topics/Call fo Papers

ICMI-MLMI 2009
Cambridge, MA, USA
November 2-6, 2009
Sponsored by ACM SIGCHI

The Eleventh International Conference on Multimodal Interfaces and the Sixth Workshop on Machine Learning for Multimodal Interfaces will take place jointly in Cambridge, Massachusetts, on November 2-6, 2009. The main aim of ICMI-MLMI 2009 is to advance scientific research in the broad field of multimodal interaction, methods, and systems. The joint conference will focus on major trends and challenges in this area and work to identify a roadmap for future research and commercial success. ICMI-MLMI 2009 will feature a single-track main conference with keynote speakers, panel discussions, technical paper presentations, poster sessions, and demonstrations of state-of-the-art multimodal systems and concepts, followed by workshops.

Venue:
The conference will take place at the MIT Media Lab, widely known for its innovative spirit. Held in the Boston area, one of the top historical, technological, and scientific centers of the US, ICMI-MLMI 2009 provides an inspiring setting for brainstorming and sharing the latest advances in multimodal interaction, systems, and methods.

Important dates:
Workshop proposals: May 1, 2009
Paper submission: May 29, 2009
Author notification: July 2009
Camera-ready due: August 2009
Conference: November 2-4, 2009
Workshops: November 5-6, 2009

Topics of interest:
Multimodal and multimedia processing:
Algorithms for multimodal fusion and multimedia fission
Multimodal output generation and presentation planning
Multimodal discourse and dialogue modeling
Generating non-verbal behaviors for embodied conversational agents
Machine learning methods for multimodal processing
Multimodal input and output interfaces:
Gaze and vision-based interfaces
Speech and conversational interfaces
Pen-based interfaces
Haptic interfaces
Interfaces to virtual environments or augmented reality
Biometric interfaces combining multiple modalities
Adaptive multimodal interfaces
Multimodal applications:
Mobile interfaces
Meeting analysis and intelligent meeting spaces
Interfaces to media content and entertainment
Human-robot interfaces and human-robot interaction
Vehicular applications and navigational aids
Computer-mediated human to human communication
Interfaces for intelligent environments and smart living spaces
Universal access and assistive computing
Multimodal indexing, structuring and summarization
Human interaction analysis and modeling:
Modeling and analysis of multimodal human-human communication
Audio-visual perception of human interaction
Analysis and modeling of verbal and non-verbal interaction
Cognitive modeling of users of interactive systems
Multimodal data, evaluation, and standards:
Evaluation techniques and methodologies for multimodal interfaces
Authoring techniques for multimodal interfaces
Annotation and browsing of multimodal data
Architectures and standards for multimodal interfaces
