
BEA 2013 - The 8th Workshop on the Innovative Use of NLP for Building Educational Applications

Date: 2013-06-13 - 2013-06-14

Deadline: 2013-03-11

Venue: Georgia, USA

Website: https://www.cs.rochester.edu/~tetreaul/n...

Topics/Call for Papers

Research in NLP applications for education continues to progress using innovative NLP techniques - statistical, rule-based, or, most commonly, a combination of the two. New technologies have made it possible to include speech in both assessment and Intelligent Tutoring Systems (ITS). NLP techniques are also being used to generate assessments, to build tools for curriculum development of reading materials, and to support assessment and test development. As a community, we continue to improve existing capabilities and to identify and generate innovative and creative ways to use NLP in applications for writing, reading, speaking, critical thinking, and assessment.
In 2012, the use of NLP in educational contexts took two major steps forward. First, outside of the computational linguistics community, the Hewlett Foundation reached out to both the public and private sectors and sponsored two competitions: one on automated essay scoring (Automated Student Assessment Prize: ASAP, Phase 1), and a second on short-answer scoring (Phase 2). The motivation driving these competitions was to engage the larger scientific community in this enterprise. The two competitions were inspired by the Common Core State Standards Initiative, an influential set of standards adopted by 45 states in the U.S. The Initiative describes what K-12 students should be learning with regard to Reading, Writing, Speaking, Listening, and Media and Technology. Another breakthrough for educational applications within the computational linguistics community was the second edition of the “Helping Our Own” grammatical error detection/correction competition at last year’s BEA workshop, where 14 systems competed. In 2013, independent of the BEA workshop, there will be two shared task competitions: this year’s CoNLL Shared Task is on grammatical error correction, and there is a SemEval Shared Task on Student Response Analysis. Both of these competitions will increase the visibility of the educational problem space in the NLP community.
In this year’s BEA workshop, we are soliciting papers across a broad range of educational applications, including: intelligent tutoring, learner cognition, use of corpora, grammatical error detection, tools for teachers and test developers, and automated scoring and evaluation of open-ended responses. Since the first workshop in 1997, “Innovative Use of NLP for Building Educational Applications” has continued to bring together all NLP subfields to foster interaction and collaboration among researchers in both academic institutions and industry. The workshop offers a venue for researchers to present and discuss their work in these areas. Each year, we see steady growth in workshop submissions and attendance, and the research has become more innovative and advanced. In 2013, we expect that the workshop (consistent with previous workshops at ACL 1997, NAACL/HLT 2003, ACL 2005, ACL 2008, NAACL HLT 2009, NAACL HLT 2010, ACL 2011, and NAACL HLT 2012) will continue to expose the NLP research community to technologies that identify novel opportunities for the use of NLP techniques and tools in educational applications. At NAACL HLT 2012, the workshop coordinated with the HOO shared task for grammatical error detection, generating a much larger poster session that was lively and well-attended. In 2013, the workshop will host a Native Language Identification Shared Task.
The workshop will solicit both full papers and short papers for either oral or poster presentation. Given its broad scope, we organize the workshop around three central themes in the educational infrastructure:
Development of curriculum and assessment (e.g., applications that help teachers develop reading materials)
Delivery of curriculum and assessments (e.g., applications where the student receives instruction and interacts with the system)
Reporting of assessment outcomes (e.g., automated scoring of open-ended responses)
Topics will include, but will not be limited to, the following:
Automated scoring/evaluation for oral and written student responses
Content analysis for scoring/assessment
Analysis of the structure of argumentation
Grammatical error detection and correction
Discourse and stylistic analysis
Plagiarism detection
Machine translation for assessment, instruction and curriculum development
Detection of non-literal language (e.g., metaphor)
Sentiment analysis
Intelligent Tutoring (IT) that incorporates state-of-the-art NLP methods
Dialogue systems in education
Hypothesis formation and testing
Multi-modal communication between students and computers
Generation of tutorial responses
Knowledge representation in learning systems
Concept visualization in learning systems
Learner cognition
Assessment of learners' language and cognitive skill levels
Systems that detect and adapt to learners' cognitive or emotional states
Tools for learners with special needs
Use of corpora in educational tools
Data mining of learner and other corpora for tool building
Annotation standards and schemas / annotator agreement
Tools and applications for classroom teachers and/or test developers
NLP tools for second and foreign language learners
Semantic-based access to instructional materials to identify appropriate texts
Tools that automatically generate test questions
Processing of and access to lecture materials across topics and genres
Adaptation of instructional text to individual learners’ grade levels
Tools for text-based curriculum development
E-learning tools for personalized course content
Language-based educational games
Issues concerning the evaluation of NLP-based educational tools
Descriptions of implemented systems
Descriptions and proposals for shared tasks
NLI-2013 Shared Task
We are pleased to host the first edition of a shared task on Native Language Identification (NLI). The shared task will be organized by Joel Tetreault, Aoife Cahill and Daniel Blanchard. NLI is the task of identifying the native language (L1) of a writer based solely on a sample of their writing. The task is typically framed as a classification problem where the set of L1s is known a priori. Most work has focused on identifying the native language of writers learning English as a second language. To date this topic has motivated several ACL and EMNLP papers, as well as a master’s thesis.
Native Language Identification (NLI) can be useful for a number of applications. In educational settings, NLI can be used to provide more targeted feedback to language learners about their errors. It is well known that learners of different languages make different errors depending on their L1s. A writing tutor system which can detect the native language of the learner will be able to tailor the feedback about the error and contrast it with common properties of the learner’s language. In addition, native language is often used as a feature that goes into authorship profiling, which is frequently used in forensic linguistics. Details on the shared task can be found on the website: https://sites.google.com/site/nlisharedtask2013/ho....
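As a rough illustration of this framing (not part of the shared task’s official setup), the sketch below treats NLI as multiclass text classification: a classifier is trained on writing samples labeled with their authors’ L1s, drawn from a fixed, known label set. The essay strings, L1 labels, and character n-gram features shown here are hypothetical placeholders for whatever data and features participants actually use.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: writing samples paired with their authors' L1s.
essays = ["I am agree with this statement because ...",
          "In my opinion the school must to teach ..."]
l1_labels = ["Spanish", "German"]  # the set of L1s is known a priori

# Character n-grams are one feature type often used for this kind of task.
classifier = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
classifier.fit(essays, l1_labels)

# Predict the L1 of a new, unseen writing sample.
print(classifier.predict(["He explain me the reasons why ..."]))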
Submission Information
We will be using the NAACL-HLT 2013 Submission Guidelines for the BEA-8 Workshop this year. Authors are invited to submit a full paper of up to 8 pages in electronic, PDF format, with up to 2 additional pages for references. We also invite short papers of up to 4 pages, including 2 additional pages for references. Authors of papers that describe systems are also invited to give a demo of their system. If you would like to present a demo in addition to presenting the paper, please make sure to select either "full paper + demo" or "short paper + demo" under "Submission Category" on the START submission page.
Previously published papers cannot be accepted. The submissions will be reviewed by the program committee. As reviewing will be blind, please ensure that papers are anonymous. Self-references that reveal the author's identity, e.g., "We previously showed (Smith, 1991) ...", should be avoided. Instead, use citations such as "Smith previously showed (Smith, 1991) ...".
Please use the 2013 NAACL-HLT style sheet for composing your paper: http://naacl2013.naacl.org/CFP.aspx (see Format section for style files).
Please note that we are using the same submission page for both the BEA-8 Workshop and the NLI-2013 Shared Task. When submitting to the NLI-2013 Shared Task, please select "NLI-2013" under "Submission Category."
Important Dates
Submission Deadline: March 11 (tentative)
Notification of Acceptance: March 29
Camera-ready papers Due: May 04
Workshop: June 13 or 14
Program Committee
Andrea Abel, EURAC, Italy
Sumit Basu, Microsoft Research, USA
Lee Becker, Avaya Labs, USA
Beata Beigman Klebanov, Educational Testing Service, USA
Delphine Bernhard, Université de Strasbourg, France
Jared Bernstein, Pearson, USA
Kristy Boyer, North Carolina State University, USA
Chris Brew, Educational Testing Service, USA
Ted Briscoe, University of Cambridge, UK
Chris Brockett, MSR, USA
Aoife Cahill, Educational Testing Service, USA
Martin Chodorow, Hunter College, CUNY, USA
Mark Core, USC Institute for Creative Technologies, USA
Daniel Dahlmeier, National University of Singapore, Singapore
Markus Dickinson, Indiana University, USA
Bill Dolan, Microsoft, USA
Myrosia Dzikovska, University of Edinburgh, UK
Keelan Evanini, Educational Testing Service, USA
Michael Flor, Educational Testing Service, USA
Peter Foltz, Pearson Knowledge Technologies, USA
Jennifer Foster, Dublin City University, Ireland
Horacio Franco, SRI, USA
Michael Gamon, Microsoft, USA
Caroline Gasperin, SwiftKey, UK
Kallirroi Georgila, USC Institute for Creative Technologies, USA
Iryna Gurevych, University of Darmstadt, Germany
Kadri Hacioglu, Rosetta Stone, USA
Na-Rae Han, University of Pittsburgh, USA
Trude Heift, Simon Fraser University, Canada
Michael Heilman, Educational Testing Service, USA
Derrick Higgins, Educational Testing Service, USA
Ross Israel, Indiana University, USA
Heng Ji, Queens College, USA
Pamela Jordan, University of Pittsburgh, USA
Ola Knutsson, KTH Nada, Sweden
John Lee, City University of Hong Kong, China
Jackson Liscombe, Nuance Communications, USA
Diane Litman, University of Pittsburgh, USA
Annie Louis, University of Pennsylvania, USA
Xiaofei Lu, Penn State University, USA
Nitin Madnani, Educational Testing Service, USA
Montse Maritxalar, University of the Basque Country, Spain
James Martin, University of Colorado, USA
Aurélien Max, LIMSI-CNRS, France
Detmar Meurers, University of Tübingen, Germany
Lisa Michaud, Merrimack College, USA
Michael Mohler, University of North Texas, USA
Smaranda Muresan, Rutgers University, USA
Ani Nenkova, University of Pennsylvania, USA
Hwee Tou Ng, National University of Singapore, Singapore
Rodney Nielsen, University of North Texas, USA
Ted Pedersen, University of Minnesota, USA
Bryan Pellom, Rosetta Stone, USA
Patti Price, PPRICE Speech and Language Technology, USA
Andrew Rosenberg, Queens College, CUNY, USA
Mihai Rotaru, TextKernel, The Netherlands
Dan Roth, UIUC, USA
Alla Rozovskaya, UIUC, USA
Izhak Shafran, Oregon Health & Science University, USA
Serge Sharoff, University of Leeds, UK
Richard Sproat, Google, USA
Svetlana Stenchikova, Columbia University, USA
Helmer Strik, Radboud University Nijmegen, The Netherlands
Joseph Tepperman, Rosetta Stone, USA
Nai-Lung Tsao, National Central University, Taiwan
Monica Ward, Dublin City University, Ireland
Pete Whitelock, Oxford University Press, UK
David Wible, National Central University, Taiwan
Peter Wood, University of Saskatchewan in Saskatoon, Canada
Klaus Zechner, Educational Testing Service, USA
Torsten Zesch, University of Darmstadt, Germany
