
UMTE 2013 - The Workshop on User-Centric Machine Translation & Evaluation

Date: 2013-09-03

Deadline: 2013-06-15

Venue: Nice, France

Website: http://mtsummit2013.info/

Topics/Call for Papers

With the uptake of MT in production workflows, issues of usability and realistic assessment of MT quality have led to increased calls for user-centric approaches to evaluation, accessibility, and effective usage. Users and their feedback are vital at many key points in the MT process, from sourcing and using data, through developing systems and evaluating them and their output, to end-user consumption.
Various technological approaches from human-computer interaction have made their way into the laboratories of MT research and development, where researchers have adopted metrics to assess aspects of cognitive effort, translation performance, user reception, and the effective deployment of cutting-edge tools and cloud-based solutions. However, findings from user data and feedback have yet to demonstrate their full value to MT itself; they are particularly important for development and industrial application at a time when ever more users have access to MT technologies and expect more of them.
As users come in many forms, e.g. translators, post-editors, developers, and evaluators, the tools, methods, and resources available to them are of critical importance, especially in the context of high-quality MT. The quality of these resources therefore matters greatly, raising issues around the sourcing of appropriate high-quality parallel corpora, standardised quality ratings for resources, comparability of corpora and data, annotation, and evaluation. There is a growing need for tools and resources that support MT beyond the crude scores of one-size-fits-all standard automatic metrics and resource-heavy human evaluation. A top desideratum is to enhance the diagnostic value of MT evaluation, helping developers fine-tune and optimise the performance of their systems and prepare them for actual use in production environments.
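For illustration only (a minimal sketch, not part of the workshop programme): a typical one-size-fits-all automatic score of the kind referred to above is corpus-level BLEU, computable in a few lines with the sacrebleu library; the example sentences here are hypothetical.

    import sacrebleu

    # Hypothetical system outputs and their references (illustrative only;
    # a real evaluation would use a full test set).
    hypotheses = ["The cat sat down on the mat.", "He go to school every day."]
    references = [["The cat sat on the mat.", "He goes to school every day."]]

    # One aggregate number for the whole corpus: it says nothing about which
    # errors occur, why, or how much effort they cost a post-editor.
    bleu = sacrebleu.corpus_bleu(hypotheses, references)
    print(f"BLEU = {bleu.score:.1f}")

Diagnostic, user-centric evaluation of the kind sought by this workshop aims to complement such aggregate scores with information that developers and post-editors can act on.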
This full-day workshop opens with a keynote talk, followed by a thematic series of presentations on the topics listed below. We then focus on the topic of Barriers to High-Quality Machine Translation, a central concern of the QTLaunchPad project. This forms the basis of an interactive panel discussion comprising invited representatives from industry, the translation community, research, government users, and related projects and consortia.
The workshop will be a forum for researchers and practitioners to discuss the shared strengths and shortcomings of existing tools, methods, and resources for MT development and evaluation with a focus on the user. We can learn from one another’s successes and failures, and together focus on the shared barriers to high-quality MT and how we can mobilise ourselves to overcome them.
Topics of Interest:
User-centric measures and usability studies, e.g. methods of MT evaluation, cognitive effort, user experience and performance.
Crowd and cloud ventures, e.g. crowd-sourced data, cloud-based translation and evaluation.
Corpora for MT development and evaluation, e.g. availability, preprocessing, quality, interoperability, annotation, error corpora and test suites, self-learning, adaptive MT, feedback mechanisms.
Diagnostic MT evaluation - human in the loop, e.g. identification and auto-correction of recurrent errors, learning from human evaluation.
Fully and semi-automated error solutions, e.g. below-threshold medium-quality translations, identifying and overcoming quality barriers.
