CROWDBENCH 2016 - Workshop on Benchmarks for Ubiquitous Crowdsourcing: Metrics, Methodologies, and Datasets
Topics/Call for Papers
The primary goal of this workshop is to synthesize existing research in ubiquitous crowdsourcing and crowdsensing in order to establish guidelines and methodologies for evaluating crowd-based algorithms and systems. This goal will be achieved by bringing together researchers from the community to discuss and disseminate ideas for comparative analysis and evaluation on shared tasks and data sets. A variety of views on the evaluation of crowdsourcing has emerged across research communities, but so far there has been little effort to clarify their key differences and commonalities in a single forum. This workshop aims to provide such a forum, creating the time and engagement required to subject the different views to rigorous discussion. The workshop is expected to produce a set of short papers that clearly argue the respective positions; these papers will serve as a base resource for consolidating research in the field and moving it forward. Further, the discussions at the workshop are expected to yield basic specifications for metrics, benchmarks, and evaluation campaigns that can then be taken up by the wider research community.
Scope:
We invite the submission of short papers that identify and motivate comparative analysis and evaluation approaches for crowdsourcing. We encourage submissions that identify and clearly articulate problems in evaluating crowdsourcing approaches, or algorithms designed to improve the crowdsourcing process. We welcome early work, and we particularly encourage position papers that propose directions for improving the validity of evaluations and benchmarks. Topics include, but are not limited to:
Domain- or application-specific datasets for the evaluation of crowdsourcing/crowdsensing techniques
Cross-platform evaluation of crowdsourcing/crowdsensing algorithms
Generalized metrics for task aggregation methods in crowdsourcing/crowdsensing
Generalized metrics for task assignment techniques in crowdsourcing/crowdsensing
Online evaluation methods for task aggregation and task assignment
Simulation methodologies for testing crowdsourcing/crowdsensing algorithms
Agent-based modeling methods that leverage existing simulation tools
Benchmarking tools for comparing crowdsourcing/crowdsensing platforms or services
Mobile-based datasets for crowdsourcing/crowdsensing
Datasets with detailed spatio-temporal information for crowdsourcing/crowdsensing
Methodologies for using data collected online for offline evaluation