
CROWDBENCH 2016 - Workshop on Benchmarks for Ubiquitous Crowdsourcing: Metrics, Methodologies, and Datasets

Date: 2016-03-14

Deadline: 2015-11-27

Venue: Sydney, Australia

Website: http://crowdbench.insight-centre.org

Topics/Call for Papers

The primary goal of this workshop is to synthesize existing research in ubiquitous crowdsourcing and crowdsensing in order to establish guidelines and methodologies for the evaluation of crowd-based algorithms and systems. This goal will be achieved by bringing together researchers from the community to discuss and disseminate ideas for comparative analysis and evaluation on shared tasks and datasets. A variety of views on the evaluation of crowdsourcing has emerged across research communities, but so far there has been little effort to clarify key differences and commonalities in a single forum. The aim of this workshop is to provide such a forum, creating the time and engagement required to subject the different views to rigorous discussion. The workshop is expected to result in a set of short papers that clearly argue positions on the issue; these papers will serve as a base resource for consolidating research in the field and moving it forward. Further, the discussions at the workshop are expected to yield basic specifications for metrics, benchmarks, and evaluation campaigns that can then be taken up by the wider research community.
Scope:
We invite submission of short papers that identify and motivate comparative analysis and evaluation approaches for crowdsourcing. We encourage submissions that identify and clearly articulate problems in evaluating crowdsourcing approaches, or algorithms designed to improve the crowdsourcing process. We welcome early work, and particularly encourage position papers that propose directions for improving the validity of evaluations and benchmarks. Topics include but are not limited to:
Domain- or application-specific datasets for the evaluation of crowdsourcing/crowdsensing techniques
Cross-platform evaluation of crowdsourcing/crowdsensing algorithms
Generalized metrics for task aggregation methods in crowdsourcing/crowdsensing
Generalized metrics for task assignment techniques in crowdsourcing/crowdsensing
Online evaluation methods for task aggregation and task assignment
Simulation methodologies for testing crowdsourcing/crowdsensing algorithms
Agent-based modeling methods for using existing simulation tools
Benchmarking tools for comparing crowdsourcing/crowdsensing platforms or services
Mobile-based datasets for crowdsourcing/crowdsensing
Datasets with detailed spatio-temporal information for crowdsourcing/crowdsensing
Methodologies for using data collected online for offline evaluation
