
CrowdMM 2012 - Workshop on Crowdsourcing for Multimedia

Date: 2012-10-29

Deadline: 2012-06-11

Venue: Nara, Japan

Website: http://www.acmmm12.org/workshops/

Topics/Call for Papers

Crowdsourcing--leveraging a large number of human contributors and the capabilities of human computation--has enormous potential to address key challenges in the area of multimedia research. Applications of crowdsourcing range from the exploitation of unsolicited user contributions, such as using tags to aid image understanding, to utilizing crowdsourcing platforms and marketplaces to micro-outsource tasks such as semantic video annotation. Further, crowdsourcing offers a time- and resource-efficient method for collecting large volumes of input for system design or evaluation, making it possible to optimize multimedia systems more rapidly and to address human factors more effectively.
At present, crowdsourcing remains notoriously difficult to exploit effectively in multimedia settings because users and workers are highly sensitive to how their activities are framed and parameterized. For example, on a crowdsourcing platform, workers are known to react differently depending on how a multimedia annotation task is presented or explained and on how they are incentivized (e.g., compensation, appeal of the task). A thorough understanding of crowdsourcing for multimedia will be crucial in enabling the field to address these challenges effectively.
Scope
CrowdMM 2012 solicits novel contributions to multimedia research that make use of human intelligence and also take advantage of human plurality. This workshop especially encourages contributions that propose solutions to the key challenges facing widespread adoption of crowdsourcing paradigms in the multimedia research community. These include: identifying optimal crowd members (e.g., by user expertise or worker reliability); providing effective explanations (i.e., good task design); controlling noise and quality in the results; designing incentive structures that do not breed cheating; coping with adversarial environments; gathering necessary background information about crowd members without violating their privacy; and controlling task descriptions. Particular emphasis will be placed on contributions that successfully combine human and automatic methods to address multimedia research challenges.
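To make the noise-and-quality challenge concrete, a common baseline is to collect redundant labels for each item and aggregate them by majority vote, weighting each vote by an estimate of the worker's reliability. The Python sketch below is purely illustrative and not prescribed by the workshop; the function and variable names, and the default reliability of 0.5 for unknown workers, are assumptions.

# Minimal illustrative sketch: reliability-weighted majority voting
# over redundant crowd labels. All names here are hypothetical.
from collections import defaultdict

def aggregate_labels(labels, reliability):
    """labels: list of (worker_id, item_id, label) triples.
    reliability: dict mapping worker_id -> weight in [0, 1].
    Returns a dict mapping item_id -> consensus label."""
    votes = defaultdict(lambda: defaultdict(float))
    for worker, item, label in labels:
        # Each vote counts in proportion to the worker's estimated reliability;
        # unknown workers get a neutral default weight (an assumption).
        votes[item][label] += reliability.get(worker, 0.5)
    # Pick the label with the highest total weight for each item.
    return {item: max(tally, key=tally.get) for item, tally in votes.items()}

# Example: two reliable workers outvote one unreliable worker.
labels = [("w1", "img7", "cat"), ("w2", "img7", "cat"), ("w3", "img7", "dog")]
reliability = {"w1": 0.9, "w2": 0.8, "w3": 0.3}
print(aggregate_labels(labels, reliability))  # {'img7': 'cat'}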
This workshop encourages theoretical, experimental, and/or methodological developments advancing state-of-the-art knowledge of crowdsourcing techniques for multimedia research. Topics include, but are not limited to, the use of crowds, the wisdom of crowds, or human computation in multimedia, in the following areas of research:
Creation: content synthesis, authoring, editing, and collaboration, summarization and storytelling
Evaluation: evaluation of multimedia signal processing algorithms, multimedia analysis and retrieval algorithms, or multimedia systems and applications
Retrieval: analysis of user multimedia queries, evaluating multimedia search algorithms and interactive multimedia retrieval
Annotation: generating semantic annotations for multimedia content, collecting large-scale input on user affective reactions
Human factors: designing or evaluating user interfaces for multimedia systems, usability studies, multi-modal environments, human recognition and perception
Novel applications (e.g., human as an element in the loop of computation)
Effective learning from crowd-annotated or crowd-augmented datasets
Quality assurance and cheat detection (see the sketch after this list)
Economics and incentive structures
Programming languages, tools and platforms providing enhanced support
Inherent biases, limitations and trade-offs of crowd-centered approaches
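For the quality assurance and cheat detection topic above, one widely used baseline is seeding gold-standard ("honeypot") questions with known answers among the real tasks and flagging workers whose accuracy on them falls below a threshold. The sketch below is again illustrative only; the 0.7 threshold and all names are assumptions, not part of this call.

# Minimal illustrative sketch: cheat detection via seeded gold questions.
def flag_suspect_workers(answers, gold, threshold=0.7):
    """answers: dict worker_id -> dict item_id -> label.
    gold: dict item_id -> correct label for the seeded items.
    Returns the set of worker_ids whose gold accuracy is below threshold."""
    suspects = set()
    for worker, responses in answers.items():
        seen = [item for item in responses if item in gold]
        if not seen:
            continue  # worker saw no gold items and cannot be assessed
        correct = sum(responses[item] == gold[item] for item in seen)
        if correct / len(seen) < threshold:
            suspects.add(worker)
    return suspects

# Example: w3 answers both seeded items incorrectly and is flagged.
answers = {
    "w1": {"g1": "cat", "g2": "dog", "x1": "car"},
    "w3": {"g1": "dog", "g2": "cat", "x1": "bus"},
}
gold = {"g1": "cat", "g2": "dog"}
print(flag_suspect_workers(answers, gold))  # {'w3'}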
