FFMI 2014 - Workshop on Feedback from Multimodal Interactions in Learning Management Systems
Topics/Call for Papers
Virtually all learning management systems and tutoring systems provide feedback to learners based on their time spent within the system, the number, intensity and type of tasks worked on, and past performance on these tasks and the corresponding skills. Some systems even use this information to steer the learning process through interventions such as recommending specific next tasks to work on, providing hints, etc. Often the analysis of learner/system interactions is limited to these high-level interactions and does not make good use of the information available in much richer interaction types such as speech and video. In the workshop Feedback from Multimodal Interactions in Learning Management Systems (FFMI-AT-EDM’2014) we would like to bring together researchers and practitioners who are interested in developing data-driven feedback and intervention mechanisms based on rich, multimodal interactions of learners within learning management systems, and among learners providing mutual advice and help. We aim to discuss all stages of the process, from preprocessing raw sensor data and automatically recognizing affective states, to identifying salient features in these interactions that provide useful cues for steering feedback and intervention strategies, and ultimately to adaptive and personalized learning management systems.
Full description:
Several techniques have been applied to mine data in Learning Management Systems, most immediately for performance prediction, emotion recognition, speech analysis, and many other purposes. Less effort has been dedicated to the question of how such analysis, especially of multimodal data, could be integrated to improve the experience within Learning Management Systems through adaptive and personalized feedback, one of the most widely used intervention strategies.
Good feedback design should answer four main questions: when, what, how, and why. It is crucial to know when to display feedback, i.e., neither too early nor too late, and to intervene only because the student really needs help (why). Moreover, one has to decide what the feedback should contain and in which format that content should be presented, e.g., audio or visual (how). These questions were initially answered with fixed rule- and content-based strategies. Nowadays, the main research focus is to address them through Educational Data Mining and to transfer the problem from structured to unstructured learning environments.
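As an illustration of the fixed rule-based strategies mentioned above, the following minimal Python sketch shows how the when/why, what, and how decisions might be encoded as hand-written rules; all names, thresholds, and messages are hypothetical assumptions for illustration and are not taken from any particular system.

```python
# Hypothetical sketch of a fixed rule-based feedback strategy.
# All field names, thresholds, and messages are illustrative assumptions.

from dataclasses import dataclass
from typing import Optional


@dataclass
class LearnerState:
    idle_seconds: float    # time since the last interaction
    failed_attempts: int   # consecutive wrong answers on the current task
    prefers_audio: bool    # presentation preference from the learner profile


def rule_based_feedback(state: LearnerState) -> Optional[dict]:
    """Decide when/why to give feedback, what it contains, and how to present it."""
    # WHEN / WHY: intervene only after repeated failures or prolonged inactivity,
    # so feedback arrives neither too early nor too late.
    if state.failed_attempts >= 3:
        reason, content = "repeated errors", "Hint: revisit the worked example for this skill."
    elif state.idle_seconds > 120:
        reason, content = "inactivity", "Are you stuck? Try breaking the task into smaller steps."
    else:
        return None  # no intervention needed yet

    # HOW: choose the presentation format from the learner's profile.
    modality = "audio" if state.prefers_audio else "visual"
    return {"why": reason, "what": content, "how": modality}


if __name__ == "__main__":
    print(rule_based_feedback(LearnerState(idle_seconds=150, failed_attempts=1, prefers_audio=False)))
```

Such hand-written rules are easy to inspect but hard to scale to unstructured settings, which is precisely where data-driven approaches come in.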
The data available for this kind of task has changed over the last ten years for several reasons. Log files are recorded automatically, without requiring the student’s awareness; this information ranges from fine-grained events such as mouse movements and clicks to coarse score records. Thanks to the increasing availability and speed of internet connections, these data can be collected in central storage for later analysis. Moreover, cheaper sensors, commonly found integrated in laptops, tablets, and cell phones, allow multimodal data collection that was once possible only in laboratories with invasive setups. To give an idea of the richness of the available information: web and depth cameras for video recordings of facial behavior, gestures, and eye tracking, as well as certain physiological parameters such as heart and respiration rate and facial temperature; microphones for speech and other sound recordings; and wearable inertial devices for movement detection. Other biometric sensors can also be purchased, such as commercial easy-to-wear electroencephalography (EEG) headsets and sensors for electrodermal activity (EDA), e.g., skin conductance, skin temperature, etc.
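To make the variety of log and sensor streams more concrete, the following sketch shows one possible way to represent heterogeneous interaction data as uniform, time-stamped records suitable for central storage and later analysis; the schema, field names, and example modalities are assumptions chosen to mirror the examples above, not a prescribed format.

```python
# Hypothetical sketch: a single record type for heterogeneous log and sensor streams,
# so coarse log events and physiological samples can share one central store.

from dataclasses import dataclass, field
from typing import Any, Dict
import json
import time


@dataclass
class MultimodalEvent:
    learner_id: str
    modality: str                 # e.g. "click", "webcam", "speech", "eda", "eeg"
    timestamp: float = field(default_factory=time.time)
    payload: Dict[str, Any] = field(default_factory=dict)

    def to_json(self) -> str:
        """Serialise the event so it can be shipped to central storage."""
        return json.dumps(self.__dict__)


# Example: a coarse log event and a physiological sample share the same schema.
events = [
    MultimodalEvent("s42", "click", payload={"target": "task_7_submit"}),
    MultimodalEvent("s42", "eda", payload={"skin_conductance_uS": 4.2}),
]
for e in events:
    print(e.to_json())
```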
The main aim of the workshop is to bring together researchers from the Educational Data Mining community to exchange information about the current state of the art in personalized and adaptive feedback and intervention strategies, as well as about their possible multimodal, data-driven extensions in Learning Management Systems. Particular focus will be given to multimodal interaction as a novel source of information.
Workshop content and themes:
Data mining and pattern recognition methods to assess affective states (e.g., emotions) or interpret gestures in multimodal interactions, in the context of learning management systems, tutoring systems, etc., especially in speech, sound, video, haptics, eye tracking, and instrumented environments.
Using information about affective states to provide feedback to learners.
Using information about affective states to moderate interventions in tutoring systems such as recommending a next task to work on, providing hints etc.
Preprocessing raw sensor data, such as speech and video, for mining multimodal interaction data.
Representation of multimodal interactions in learning management systems.
Ethical aspects of mining multimodal interaction data.
Trust and reputation models for feedback analysis.