
MAHJ 2012 - Symposium on Machine Aggregation of Human Judgment

Date: 2012-11-02

Deadline: 2012-05-25

Venue: Arlington, USA

Keywords

Website: https://www.aaai.org/Symposia/Fall/fss12...

Topics/Call fo Papers

The AAAI 2012 Fall Symposium on Machine Aggregation of Human Judgment focuses on combining human and machine inference. For unique events and data-poor problems, there is no substitute for human judgment. Even for data-rich problems, human input is needed to account for contextual factors. For example, textual analysis is data-rich, but context and semantics often make automated parsing unusable. However, humans are notorious for underestimating the uncertainty in their forecasts, and even the most expert judgments exhibit well-known cognitive biases. The challenge is therefore to aggregate expert judgment in a way that compensates for these human deficiencies.
There are fundamental theoretical reasons to expect aggregated estimates to outperform individual forecasts. These theoretical results are borne out by a robust empirical literature demonstrating the superiority of opinion pools and prediction markets over individual forecasts, and of ensemble forecasts over even the best single models. While weighted combinations are theoretically optimal, unweighted combinations of human experts have proven hard to beat in practice.
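To make that contrast concrete, the following is a minimal Python sketch (not taken from the call itself) of linear opinion pooling for a binary event, comparing an equal-weight average of expert probabilities with a performance-weighted average. The expert probabilities, the historical Brier scores, and the Brier-based weighting scheme are illustrative assumptions only.

from typing import Sequence

def unweighted_pool(forecasts: Sequence[float]) -> float:
    # Equal-weight linear opinion pool: the simple mean of the experts' probabilities.
    return sum(forecasts) / len(forecasts)

def weighted_pool(forecasts: Sequence[float], weights: Sequence[float]) -> float:
    # Weighted linear opinion pool; weights are normalized to sum to one.
    total = sum(weights)
    return sum(w * f for w, f in zip(weights, forecasts)) / total

def brier_weight(past_brier_score: float) -> float:
    # One illustrative weighting: lower (better) historical Brier scores earn more weight.
    # Brier scores for a binary event lie in [0, 1].
    return 1.0 - past_brier_score

if __name__ == "__main__":
    probs = [0.9, 0.6, 0.2]           # three hypothetical experts' probabilities for the event
    past_scores = [0.10, 0.20, 0.35]  # hypothetical historical Brier scores (lower is better)
    weights = [brier_weight(s) for s in past_scores]
    print("Unweighted pool:", round(unweighted_pool(probs), 3))         # 0.567
    print("Weighted pool:  ", round(weighted_pool(probs, weights), 3))  # 0.604, tilted toward the stronger expert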
This symposium focuses on methods with the potential to come closer to the theoretical optimum. While a number of methods have shown promise individually, there is potential for significant advancement from combining them into structured, efficient, repeatable elicitation and aggregation protocols. Benefits of improved aggregation methods include substantial increases in the quality and reliability of expert judgments, fewer misunderstandings, clearer exposure of the context dependence of forecasts, and reduced overconfidence and motivational bias. On the other hand, there is some skepticism that statistical models can outperform experts most of the time: machine reasoning often lacks the context to know when a model no longer applies and, in domains such as natural language, may simply lack sufficient context to be reliable in open-world or novel problems. This symposium therefore considers powerful hybrid techniques in which humans help aggregate computer models.
A broad range of researchers in the AI community and in application fields such as econometrics, sociology, political science, and intelligence analysis will find this symposium interesting and useful. Bringing these disciplines together at a single venue will also help advance research across them.
Topics
Topics include but are not limited to the following:
Reasoning under uncertainty
Ensembles and aggregation
Information fusion
Crowdsourcing techniques and applications
Information elicitation and presentation
Performance evaluation: scalability and accuracy
Prediction markets
Collective intelligence

Last modified: 2012-04-28 18:50:51