
ASSESS 2014 - Workshop on Data Mining for Educational Assessment and Feedback (ASSESS 2014)

Date: 2014-08-24

Deadline: 2014-06-04

Venue: New York City, United States


Website: https://aspiringminds.in/assess/2014

Topics/Call fo Papers

Assessing educational achievement and providing feedback to learners is crucial to emerging coursework and labor-market systems. Automating the assessment and feedback mechanisms for open-ended tasks (e.g., short-answer and essay questions) will open up both learning and recruitment to a much larger set of people who currently miss out due to resource constraints. Although automated or semi-automated assessment/feedback remains a nascent field, there have been recent advances drawing on techniques from data mining, knowledge discovery, machine learning, and crowdsourcing for peer grading. Nevertheless, the technical requirements for accuracy and expressiveness across the many purposes of assessment (some high-stakes and some low-stakes; some formative and some summative) are not well defined.
In this workshop, we will bring together a diverse group of researchers and industry practitioners in data mining, machine learning, and psychology to (1) discuss the current state of the art in automated assessment, (2) identify a vision for future research, and (3) lay out the steps required to build a good automated or peer-grading-based assessment. The organizers hope that a unified framework for thinking about assessment and feedback emerges by the end of the workshop.
Call for papers
We invite submission of papers describing innovative research on all aspects of assessing educational achievement and providing feedback. Position papers and papers that provoke discussion are strongly encouraged, but should be grounded in fresh insight and/or new data. Topics of interest include, but are not limited to:
Automated and semi-automated assessment of open responses
Assessment by crowdsourcing and peer-grading
Automatic feedback generation
Automatic problem (item) generation/design, calibration, modeling
Assessment design, test blueprints, rubric design, validity and reliability
Test integrity, equity, proctoring and high-stakes testing
Non-cognitive assessments of personality, behavior, motivation, etc.
New areas: gamification and simulation-based assessments
Insights derived from large-scale assessment data for learning, recruitment, performance prediction, etc.
Submitted papers should be original work, distinct from papers that have been previously published or are under review at a journal or another conference/workshop.
Accepted papers will be given a speaking slot at the workshop.

Last modified: 2014-04-26 23:08:24