AHCI 2009 - CFP-computing that relates to, arises from, or deliberately influences emotions
Topics/Call for Papers
Call for Papers
Rosalind Picard defined affective computing as "computing that relates to, arises from, or deliberately influences emotions." Along with the more recent introduction of neighboring terms such as "human-centered" or "anthropocentric," "pervasive," and "ubiquitous" computing, computers are no longer regarded as mere number-crunching machines, but are approached as intelligent and adaptive tools or interfaces within our habitat, helping us perform everyday tasks in a more intuitive and yet robust manner. In most cases, paralinguistic concepts such as mood, attitude, traits, and expressivity can be used to adapt the user experience and present flexible and, therefore, more suitable results.
From an engineering point of view, researchers have been investigating different modalities and their combinations to provide emotion or affect recognition components. The mapping of raw signals and related features to high-level concepts has been driven mainly by claims such as Ekman's, which states that specific facial expressions can be universally recognized across cultures and ages. This claim lends itself well to existing algorithms that classify content into discrete categories, and as a result it has pushed the field toward training and testing data sets containing only the aforementioned expressions. Moreover, these data sets usually contain cases of extreme expressivity, since such cases can be distinguished more easily; this makes the resulting systems quite difficult to extend. However, everyday human-human and human-computer interactions hardly ever contain cases of extreme expressivity, or clear occurrences of expressivity ranging from a neutral state to the visual, aural, or physiological apex of an expression.
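To make the preceding point concrete, the dominant recipe described above can be summarized as: extract low-level features from a recorded modality and train a closed-set classifier over a handful of prototypical categories. The sketch below is a hypothetical illustration, not part of this call; extract_features() is a placeholder for any acoustic or facial feature extractor, and the scikit-learn SVM is only one possible classifier choice. It shows why such a setup is tied to acted, extreme expressions: the label set is closed and every input is forced into one of six categories, with no neutral or blended outcome.

```python
# Minimal sketch of discrete emotion classification over pre-extracted
# features (illustrative assumption, not the method of any specific paper).
import numpy as np
from sklearn.svm import SVC

# Ekman's six "basic" emotions, as commonly used in acted data sets.
EKMAN_LABELS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def extract_features(clip) -> np.ndarray:
    """Placeholder: return a fixed-length feature vector for one clip
    (e.g., prosodic statistics or facial action unit intensities)."""
    raise NotImplementedError

def train_classifier(clips, labels):
    """Train a closed-set classifier on acted, prototypical expressions."""
    X = np.vstack([extract_features(c) for c in clips])
    y = np.array([EKMAN_LABELS.index(l) for l in labels])
    return SVC(kernel="rbf", probability=True).fit(X, y)

def predict(classifier, clip):
    """Every input is mapped to one of the six labels; there is no
    'neutral' or blended outcome, which is exactly the limitation
    highlighted above for naturalistic data."""
    probs = classifier.predict_proba(extract_features(clip).reshape(1, -1))[0]
    return EKMAN_LABELS[int(np.argmax(probs))], probs
```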
The proposed Special Issue aims to present state-of-the-art approaches in the fields of unimodal and multimodal affect analysis, and to combine these with techniques that utilize a priori or just-in-time knowledge about the user, environment, or task contexts. Papers on emotion- and affect-aware applications are strongly encouraged, especially those discussing issues related to natural interaction and/or the tradeoff between unconstrained expressivity and robustness.
Topics of interest include, but are not limited to:
o Acoustic and linguistic analysis/feature extraction and recognition
o Visual (face, body, hand) analysis/feature extraction and recognition
o Uni/multimodal recognition/sensing of (blended) emotion, affect, and behavior
o Dynamic, temporal concepts, turn-taking
o Bridging feature extraction and recognition with knowledge sources and context
o Novel modalities for HCI (biosignals, haptics, etc.)
o Affect analysis "in the wild" (e.g., public spaces, groups, etc.)
o Protocols for evaluation of affect-aware systems
o Affect- and emotion-aware applications: design and implementation
Before submission, authors should carefully read the journal's Author Guidelines, which are located at http://www.hindawi.com/journals/ahci/guidelines.ht.... Authors should follow the Advances in Human-Computer Interaction manuscript format described at the journal site http://www.hindawi.com/journals/ahci/. Prospective authors should submit an electronic copy of their complete manuscript through the Journal Manuscript Tracking System at http://www.hindawi.com/mts/, according to the following timetable:
Manuscript Due April 1, 2009
First Round of Reviews July 1, 2009
Publication Date October 1, 2009
Lead Guest Editor
o Kostas Karpouzis, Institute of Communication and Computer Systems, National Technical University of Athens, 15780 Zographou, Athens, Greece; kkarpou-AT-cs.ntua.gr
Guest Editors
o Elisabeth Andre, Multimedia Concepts and Applications, Institute of Computer Science, University of Augsburg, 86135 Augsburg, Germany; andre-AT-informatik.uni-augsburg.de
o Anton Batliner, Computer Science Department 5, Friedrich Alexander University, Erlangen-Nuremberg, 91058 Erlangen, Germany; batliner-AT-informatik.uni-erlangen.de