EvalUMAP 2016: Towards Comparative Evaluation in User Modeling, Adaptation and Personalization
Topics/Call for Papers
Research in User Modelling, Adaptation and Personalization faces a number of significant scientific challenges, among the most significant of which is comparative evaluation. It has always been difficult to rigorously compare different approaches to personalization, because the behaviour of the resulting systems is, by its nature, heavily influenced by the users who trial them. To date, this topic has received relatively little attention. Establishing comparative evaluation in this space would be a major advance, as it would enable shared benchmarking across research groups, which has so far been very limited.
Taking inspiration from communities such as Information Retrieval and Machine Translation, the first EvalUMAP Workshop seeks to propose and design one or more shared tasks to support the comparative evaluation of approaches to User Modelling, Adaptation and Personalization. The workshop will solicit presentations from key practitioners in the field on innovative approaches to evaluating such systems and will provide a forum to start scoping and designing tasks for the following year. The resulting shared task(s) will be accompanied by appropriate models, content, metadata, user behaviours, etc., and can be used to comprehensively compare how different approaches and systems perform. In addition, a set of metrics and observations will be outlined, which participants will be expected to report in order to facilitate comparison, as illustrated in the sketch below.
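To make the intended mode of comparison concrete, the following minimal sketch shows how scoring different systems against the same shared judgments with the same metric makes results directly comparable. The metric used here, nDCG@k, is a common ranking metric chosen purely as an illustrative assumption; the workshop's actual tasks and metrics are yet to be defined, and the two "systems" and their relevance judgments are hypothetical.

    # Hypothetical sketch: comparing two personalization systems on a shared
    # task using nDCG@k. The metric choice and toy data are assumptions for
    # illustration only, not artefacts of the workshop itself.
    import math

    def dcg_at_k(relevances, k):
        """Discounted cumulative gain over the top-k ranked items."""
        return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

    def ndcg_at_k(ranked_relevances, k):
        """nDCG@k: DCG of the system's ranking, normalised by the ideal ranking."""
        ideal = sorted(ranked_relevances, reverse=True)
        ideal_dcg = dcg_at_k(ideal, k)
        return dcg_at_k(ranked_relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

    # Graded relevance of items, in the order each hypothetical system
    # ranked them for the same user, judged against the same shared corpus.
    system_a = [3, 2, 0, 1, 0]
    system_b = [2, 3, 1, 0, 0]

    print(f"System A nDCG@5: {ndcg_at_k(system_a, 5):.3f}")
    print(f"System B nDCG@5: {ndcg_at_k(system_b, 5):.3f}")

Because both systems are scored on identical judgments with an identical metric, their numbers can be compared directly across research groups, which is precisely the kind of shared comparison the workshop aims to enable.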
The planned outcome of the EvalUMAP 2016 Workshop is a roadmap for developing the initial shared task(s), to be published well in advance of UMAP 2017. This will give participants the opportunity to test and tune their systems and complete the task(s), so that comparative results and associated publications can be prepared for presentation at the EvalUMAP Workshop in 2017. We envision EvalUMAP 2017 as the starting point for an annual comparative evaluation challenge at future UMAP conferences.
Workshop topics are evaluation focused and include, but are not limited to:
Understanding UMAP evaluation
Defining tasks and scenarios for evaluation purposes
Identification of potential corpora for shared tasks
Interesting target tasks and explanations of their importance
Critiques or comparisons of existing evaluation metrics and methods
How we can combine existing evaluation metrics and methods
Improving on previously suggested metrics and methods
Reducing the cost of evaluation
Proposal of new evaluation metrics and methods
Technical challenges associated with design and implementation
Privacy and security issues
Legal and ethical issues