
HCOMP 2013 - The 1st Conference on Human Computation & Crowdsourcing

Date: 2013-11-06 - 2013-11-09

Deadline: 2013-05-10

Venue: Palm Springs, California, USA


Website: https://www.humancomputation.com/2013

Topics/Call for Papers

The First AAAI Conference on Human Computation and Crowdsourcing (HCOMP-2013) will be held November 6-9, 2013 in Palm Springs, California, USA. The conference is aimed at promoting the scientific exchange of advances in human computation and crowdsourcing among researchers, engineers, and practitioners across a spectrum of disciplines. The conference was created by researchers from diverse fields to serve as a key focal point and scholarly venue for the review and presentation of the highest quality work on principles, studies, and applications of human computation. The meeting seeks and embraces work on human computation and crowdsourcing in multiple fields, including human-computer interaction, cognitive psychology, economics, information retrieval, databases, systems, optimization, and multiple subdisciplines of artificial intelligence, such as vision, speech, robotics, machine learning, and planning.
Submissions are invited on principles, experiments, and implementations of systems that rely on programmatic access to human intellect to perform some aspect of computation, or in which human perception, knowledge, reasoning, or physical activity and coordination contributes to the operation of larger computational systems, applications, and services. The conference will include presentations of new research, works-in-progress and demo sessions, and invited talks. A day of workshops and tutorials will follow the main conference. Submissions to the main conference are due on May 10, 2013. Authors will be notified of acceptance or rejection on July 16, 2013. Accepted papers are due in camera-ready form on September 4, 2013. A complete set of deadlines and notification dates for workshops, tutorials, works-in-progress, and demonstrations appears under Important Dates below.
HCOMP 2013 builds on a series of four successful earlier workshops (2009, 2010, 2011, 2012). All full papers accepted at this first conference will be published as AAAI archival proceedings in the AAAI digital library. While we encourage visionary and forward-looking papers, please submit only your best novel work; the paper track will not accept work recently published or soon to be published in another conference or journal. However, to encourage exchange of ideas, such work can be submitted to the non-archival work-in-progress and demo track. For submissions of this kind, the authors should include the venue of previous or concurrent publication.
Several HCOMP workshops will take place on Saturday, November 9. HCOMP will also include a two-day CrowdCamp to turn crowdsourcing ideas into concrete prototypes, organized by Lydia Chilton (UW). CrowdCamp will bracket the HCOMP meeting with Day 1 on Wednesday, November 6 and Day 2 on Saturday, November 9. Information on CrowdCamp will be forthcoming.
Where
Renaissance Palm Springs Hotel
Palm Springs, California
When
November 6-9, 2013
Important Dates
All deadlines are 5pm Pacific time unless otherwise noted.
Papers
Submission deadline: May 10, 2013
Author rebuttal period: June 21-28, 2013
Notification: July 16, 2013
Camera Ready: September 4, 2013
Workshops & Tutorials
Proposal deadline: May 10, 2013
Notification: July 16, 2013
Camera Ready: October 15, 2013
Works-in-Progress & Demonstrations
Submission deadline: July 25, 2013
Notification: August 26, 2013
Camera Ready: September 4, 2013
Submissions
To submit your work, use the HCOMP 2013 Electronic Submission Site (cmt.research.microsoft.com/HCOMP2013/).
Paper Format: AAAI, up to 8 pages. Reviewing is blind; please anonymize your submission.
Works-in-progress, demonstrations, workshops and tutorials: AAAI, up to 2 pages, plus supplemental material. These submissions do not have to be anonymized.
Follow @hcomp2013 on Twitter and the HCOMP Facebook group.
Association for the Advancement of Artificial Intelligence
Conference Chairs
Björn Hartmann (UC Berkeley) Eric Horvitz (Microsoft Research)
Sponsorship Chair
Michael Kearns (UPenn)
Workshop and Tutorials Chair
Aditya Parameswaran (Stanford)
Works in Progress and Demonstration Chair
Jeff Bigham (CMU)
Program Committee
Paul Bennett (Microsoft Research) Michael Bernstein (Stanford)
Jeff Bigham (University of Rochester) Yiling Chen (Harvard)
Ed Chi (Google) Lydia Chilton (University of Washington)
Janis Dickinson (Cornell) Michael Franklin (UC Berkeley)
Krzysztof Gajos (Harvard) Hector Garcia-Molina (Stanford)
Jeff Heer (University of Washington) Haym Hirsch (Rutgers)
Panagiotis Ipeirotis (NYU) Adam Kalai (Microsoft Research)
Ece Kamar (Microsoft Research) Henry Kautz (University of Rochester)
Andreas Krause (ETH Zurich) Edith Law (Harvard University)
Chris Lintott (Oxford) Greg Little (oDesk)
Mausam (University of Washington) Robert Miller (MIT)
Jeff Nichols (IBM) David Parkes (Harvard)
Adam Sadilek (University of Rochester) Arfon Smith (Adler Planetarium)
Siddharth Suri (Microsoft Research) Loren Terveen (University of Minnesota)
Adrien Treuille (CMU) Daniel Weld (University of Washington)
Haoqi Zhang (Northwestern)
Program
November 6: Opening Reception
November 7-8: HCOMP-13 Technical Program
November 6 and 9: CrowdCamp 2013
November 9: Tutorial and Workshops
Detailed program to follow.
Keynote Speakers
Jon Kleinberg (Cornell)
Steering the Crowd: Rewards and Incentives for Collective Behavior
Zoran Popovic (University of Washington)
Accepted Papers
On the Verification Complexity of Group Decision-Making Tasks
Ofra Amir (Harvard University, USA); Yuval Shahar, Ya’akov Gal, and Litan Ilany (Ben-Gurion University of the Negev, Israel)
Community Clustering: Leveraging an Academic Crowd to Form Coherent Conference Sessions
Paul André (Carnegie Mellon University, USA), Haoqi Zhang (Northwestern University, USA), Juho Kim (MIT CSAIL, USA), Lydia Chilton (University of Washington, USA), Steven P. Dow (Carnegie Mellon University, USA), and Robert C. Miller (MIT CSAIL, USA)
99designs: An Analysis of Creative Competition in Crowdsourced Design
Ricardo Matsumura Araujo (Federal University of Pelotas, Brazil)
Crowdsourcing Multi-Label Classification for Taxonomy Creation
Jonathan Bragg, Mausam, and Daniel S. Weld (University of Washington, USA)
Winner-Take-All Crowdsourcing Contests with Stochastic Production
Ruggiero Cavallo and Shaili Jain (Microsoft Research, USA)
Scalable Preference Aggregation in Social Networks
Swapnil Dhamal and Y. Narahari (Indian Institute of Science, India)
CASTLE: Crowd-Assisted System for Text Labeling and Extraction
Sean Goldberg and Daisy Zhe Wang (University of Florida, USA), Tim Kraska (Brown University, USA)
Depth-Workload Tradeoffs for Workforce Organization
Hoda Heidari and Michael Kearns (University of Pennsylvania, USA)
The Crowd-Median Algorithm
Hannes Heikinheimo (Rovio Entertainment Ltd, Finland) and Antti Ukkonen (Aalto University, Finland)
Interpretation of Crowdsourced Activities Using Provenance Network Analysis
T. D. Huynh (University of Southampton, UK), M. Ebden (University of Oxford, UK), M. Venanzi (University of Southampton, UK), S. Ramchurn (University of Southampton, UK), S. Roberts (University of Oxford, UK), L. Moreau (University of Southampton, UK)
Crowdsourcing a HIT: Measuring Workers’ Pre-Task Interactions on Microtask Markets
Jason T. Jacques and Per Ola Kristensson (University of St Andrews, UK)
Volunteering Versus Work for Pay: Incentives and Tradeoffs in Crowdsourcing
Andrew Mao (Harvard University, USA), Ece Kamar (Microsoft Research, USA), Yiling Chen (Harvard University, USA), Eric Horvitz (Microsoft Research, USA), Megan E. Schwamb (ASIAA, Taiwan & Yale Center for Astronomy & Astrophysics, USA), Chris J. Lintott (University of Oxford, UK & Adler Planetarium, USA), and Arfon M. Smith (Adler Planetarium, USA)
Why Stop Now? Predicting Worker Engagement in Online Crowdsourcing
Andrew Mao (Harvard University, USA); Ece Kamar and Eric Horvitz (Microsoft Research, USA)
DataSift: An Expressive and Accurate Crowd-Powered Search Toolkit
Aditya Parameswaran, Ming Han Teh, Hector Garcia-Molina, Jennifer Widom (Stanford University, USA)
CrowdBand: An Automated Crowdsourcing Sound Composition System
Mary Pietrowicz, Danish Chopra, Amin Sadeghi, Puneet Chandra, Brian P. Bailey, and Karrie Karahalios (University of Illinois, USA)
What Will Others Choose? How a Majority Vote Reward Scheme Can Improve Human Computation in a Spatial Location Identification Task
Huaming Rao (Nanjing University of Science & Technology, China & University of Illinois at Urbana-Champaign, USA); Shih-Wen Huang and Wai-Tat Fu (University of Illinois at Urbana-Champaign, USA)
nEmesis: Which Restaurants Should You Avoid Today?
Adam Sadilek (Google, USA); Sean Brennan, Henry Kautz, and Vincent Silenzio (University of Rochester, USA)
Ability Grouping of Crowd Workers via Reward Discrimination
Yuko Sakurai (Kyushu University and JST PRESTO, Japan), Tenda Okimoto (Transdisciplinary Research Integration Center, Japan), Masaaki Oka (Kyushu University, Japan), Masato Shinoda (Nara Women’s University, Japan), and Makoto Yokoo (Kyushu University, Japan)
SQUARE: A Benchmark for Research on Computing Crowd Consensus
Aashish Sheshadri and Matthew Lease (The University of Texas at Austin, USA)
Incentives for Privacy Tradeoff in Community Sensing
Adish Singla and Andreas Krause (ETH Zurich, Switzerland)
Leveraging Collaboration: A Methodology for the Design of Social Problem-Solving Systems
Lucas M. Tabajara, Marcelo O.R. Prates, Diego V. Noble, and Luis C. Lamb (Federal University of Rio Grande do Sul, Brazil)
Crowdsourcing Spatial Phenomena Using Trust-Based Heteroskedastic Gaussian Processes
Matteo Venanzi, Alex Rogers, and Nicholas R. Jennings (University of Southampton, UK)
Dwelling on the Negative: Incentivizing Effort in Peer Prediction
Jens Witkowski (Albert-Ludwigs-Universität, Germany), Yoram Bachrach (Microsoft Research, UK), Peter Key (Microsoft Research, UK), and David C. Parkes (Harvard University, USA)
Improving Your Chances: Boosting Citizen Science Discovery
Yexiang Xue, Bistra Dilkina, Theodoros Damoulas, Daniel Fink, Carla P. Gomes, and Steve Kelling (Cornell University, USA)
Inferring Users’ Preferences from Crowdsourced Pairwise Comparisons: A Matrix Completion Approach
Jinfeng Yi and Rong Jin (Michigan State University, USA), Shaili Jain (Microsoft, USA), and Anil K. Jain (Michigan State University, USA)
Accepted Demonstrations
Wanted: More Nails for the Hammer, An Investigation into the Application of Human Computation
Elizabeth Brem, Tyler Bick, Andrew Schriner, Daniel Oerther
Frenzy: A Platform for Friendsourcing
Lydia B. Chilton, Felicia Cordeiro, Daniel S. Weld, James A. Landay
The Work Exchange: Peer-to-Peer Enterprise Crowdsourcing
Stephen Dill, Robert Kern, Erika Flint, Melissa Cefkin
In-HIT Example-Guided Annotation Aid for Crowdsourcing UI Components
Yi-Ching Huang, Chun-I Wang, Shih-Yuan Yu, Jane Yung-jen Hsu
Crowdsourcing Objective Answers to Subjective Questions Online
Ravi Iyer
Automating Crowdsourcing Tasks in an Industrial Environment
Vasilis Kandylas, Omar Alonso, Shiroy Choksey, Kedar Rudre, Prashant Jaiswal
Cobi: Community-Informed Conference Scheduling
Juho Kim, Haoqi Zhang, Paul André, Lydia B. Chilton, Anant Bhardwaj, David Karger, Steven P. Dow, Robert C. Miller
Curio: A Platform for Supporting Mixed-Expertise Crowdsourcing
Edith Law, Conner Dalton, Nick Merrill, Albert Young, Krzysztof Z. Gajos
Real-Time Drawing Assistance through Crowdsourcing
Alex Limpaecher, Nicolas Feltman, Adrien Treuille, Michael Cohen
An Introduction to the Zooniverse
Arfon M. Smith, Stuart Lynn, Chris J. Lintott
Accepted Works-in-Progress
A Human-Centered Framework for Ensuring Reliability on Crowdsourced Labeling Tasks
Omar Alonso, Catherine C. Marshall, Marc Najork
Making Crowdwork Work: Issues in Crowdsourcing for Organizations
Obinna Anya, Melissa Cefkin, Steve Dill, Robert Moore, Susan Stucky, Osarieme Omokaro
Using Crowdsourcing to Generate an Evaluation Dataset for Name Matching Technologies
Alya Asarina, Olga Simek
Statistical Quality Estimation for General Crowdsourcing Tasks
Yukino Baba, Hisashi Kashima
Crowdsourcing Translation by Leveraging Tournament Selection and Lattice-Based String Alignment
Julien Bourdaillet, Shourya Roy, Gueyoung Jung, Yu-An Sun
Lottery-Based Payment Mechanism for Microtasks
L. Elisa Celis, Shourya Roy, Vivek Mishra
OnDroad Planner: Building Tourist Plans Using Traveling Social Network Information
Isabel Cenamor, Tomás de la Rosa, Daniel Borrajo
Using Human and Machine Processing in Recommendation Systems
Eric Colson
LabelBoost: An Ensemble Model for Ground Truth Inference Using Boosted Trees
Siamak Faridani, Georg Buscher
A Ground Truth Inference Model for Ordinal Crowd-Sourced Labels Using Hard Assignment Expectation Maximization
Siamak Faridani, Georg Buscher, Ya Xu
Frequency and Duration of Self-Initiated Task-Switching in an Online Investigation of Interrupted Performance
Sandy J. J. Gould, Anna L. Cox, Duncan P. Brumby
Assessing the Viability of Online Interruption Studies
Sandy J. J. Gould, Anna L. Cox, Duncan P. Brumby, Sarah Wiseman
Human Stigmergy in Augmented Environments
Kshanti Greene, Thomas Young
Reducing Error in Context-Sensitive Crowdsourced Tasks
Daniel Haas, Matthew Greenstein, Kainar Kamalov, Adam Marcus, Marek Olszewski, Marc Piette
Transcribing and Annotating Speech Corpora for Speech Recognition: A Three-Step Crowdsourcing Approach with Quality Control
Annika Hämäläinen, Fernando Pinto Moreira, Jairo Avelar, Daniela Braga, Miguel Sales Dias
An Initial Study of Automatic Curb Ramp Detection with Crowdsourced Verification Using Google Street View Images
Kotaro Hara, Jin Sun, Jonah Chazan, David Jacobs, Jon E. Froehlich
Effect of Task Presentation on the Performance of Crowd Workers - A Cognitive Study
Harini Sampath, Rajeev Rajeshuni, Bipin Indurkhya, Saraschandra Karanam, Koustuv Dasgupta
English to Hindi Translation Protocols for an Enterprise Crowd
Srinivasan Iyengar, Shirish Karande, Sachin Lodha
Joint Crowdsourcing of Multiple Tasks
Andrey Kolobov, Mausam, Daniel S. Weld
GameLab: A Tool Suit to Support Designers of Systems with Homo Ludens in the Loop
Markus Krause
Automated Support for Collective Memory of Conversational Interactions
Walter S. Lasecki, Jeffrey P. Bigham
Using Visibility to Control Collective Attention in Crowdsourcing
Kristina Lerman, Tad Hogg
Manipulating Social Roles in a Tagging Environment
Mieke H.R. Leyssen, Jacco van Ossenbruggen, Arjen P. de Vries, Lynda Hardman
Towards a Language for Non-Expert Specification of POMDPs for Crowdsourcing
Christopher H. Lin, Mausam, Daniel S. Weld
Two Methods for Measuring Question Difficulty and Discrimination in Incomplete Crowdsourced Data
Sarah K. K. Luger, Jeff Bowles
Crowd, the Teaching Assistant: Educational Assessment Crowdsourcing
Pallavi Manohar, Shourya Roy
Crowdsourcing Quality Control for Item Ordering Tasks
Toshiko Matsui, Yukino Baba, Toshihiro Kamishima, Hisashi Kashima
Ontology Quality Assurance with the Crowd
Jonathan M. Mortensen, Mark A. Musen, Natalya F. Noy
Personalized Human Computation
Peter Organisciak, Jaime Teevan, Susan Dumais, Robert C. Miller, Adam Tauman Kalai
EM-Based Inference of True Labels Using Confidence Judgments
Satoshi Oyama, Yuko Sakurai, Yukino Baba, Hisashi Kashima
Task Redundancy Strategy Based on Volunteers’ Credibility for Volunteer Thinking Projects
Lesandro Ponciano, Francisco Brasileiro, Guilherme Gadelha
Inserting Micro-Breaks into Crowdsourcing Workflows
Jeffrey M. Rzeszotarski, Ed Chi, Praveen Paritosh, Peng Dai
HiveMind: Tuning Crowd Response with a Single Value
Preetjot Singh, Walter S. Lasecki, Paulo Barelli, Jeffrey P. Bigham
Aggregating Human-Expert Opinions for Multi-Label Classification
Evgueni Smirnov, Hua Zhang, Ralf Peeters, Nikolay Nikolaev, Maike Imkamp
Herding the Crowd: Automated Planning for Crowdsourced Planning
Kartik Talamadupula, Subbarao Kambhampati, Yuheng Hu, Tuan Nguyen, Hankz Hankui Zhuo
Designing a Crowdsourcing Tool to Analyze Relationships among Jazz Musicians: The Case of Linked Jazz 52nd Street
Hilary K. Thorsen, M. Cristina Pattuelli
A Framework for Adaptive Crowd Query Processing
Beth Trushkowsky, Tim Kraska, Michael J. Franklin
Understanding Potential Microtask Workers for Paid Crowdsourcing
Ming-Hung Wang, Kuan-Ta Chen, Shuo-Yang Wang, Chin-Laung Lei
Boosting OCR Accuracy Using Crowdsourcing
Shuo-Yang Wang, Ming-Hung Wang, Kuan-Ta Chen
TrailView: Combining Gamification and Social Network Voting Mechanisms for Useful Data Collection
Michael Weingert, Kate Larson
Task Sequence Design: Evidence on Price and Difficulty
Ming Yin, Yiling Chen, Yu-An Sun
