
DDD 2016 - Workshop on Driver Drowsiness Detection from Video 2016

Date: 2016-11-21 - 2016-11-23

Deadline: 2016-08-15

Venue: Taipei, Taiwan

Keywords:

Website: https://cv.cs.nthu.edu.tw/php/callforpap...

Topics/Call for Papers

Recent reports have suggested that drowsy driving is one of the main factors in fatal motor vehicle crashes each year. In 2014, the US National Sleep Foundation (NSF) launched an initiative to raise public awareness of drowsy driving and asked legislators to introduce law enforcement measures, regulations, and recommendations on drowsy driving and distraction prevention. Developing active monitoring systems that help drivers avoid accidents in a timely manner is therefore of utmost importance.
Most previous work on drowsy driver detection has focused on a limited set of visual cues. However, human drowsiness is a complicated mechanism, and detecting driver drowsiness accurately and in a timely fashion remains a challenging problem. Moreover, there is a lack of publicly available video datasets for evaluating and comparing different drowsy driver detection systems. This workshop provides an opportunity for researchers working on related topics, such as video event recognition or facial expression recognition, or anyone interested in this problem, to join the competition and compare their performance with each other.
Scope
The International Workshop on Driver Drowsiness Detection from Video comprises two tracks: a Regular Paper Track and a Challenge Paper Track.
Regular Paper Track: for papers related to driver drowsiness detection. The goal of this track is to identify state-of-the-art algorithms, systems, and frameworks that are particularly suitable for driver drowsiness detection.
Challenge Paper Track: for papers participating in the challenge session on driver drowsiness detection. The challenge is based on a driver drowsiness video dataset collected by the NTHU Computer Vision Lab. Each participant's detection results will be evaluated on the provided training/testing videos.
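The official dataset format and evaluation metric are defined by the organizers; as a rough illustration only, the sketch below shows one simple way a per-frame drowsy/alert prediction could be scored against ground-truth labels. The function name, label encoding, and metric are assumptions, not the workshop's protocol.

```python
# Hypothetical sketch: frame-level accuracy of a drowsiness detector.
# Label encoding (assumed): 1 = drowsy frame, 0 = alert frame.

def frame_accuracy(predictions, ground_truth):
    """Fraction of video frames whose predicted label matches the ground truth."""
    if len(predictions) != len(ground_truth):
        raise ValueError("prediction and label sequences must have equal length")
    correct = sum(p == g for p, g in zip(predictions, ground_truth))
    return correct / len(ground_truth)

# Example with made-up per-frame labels for a short clip.
pred = [0, 0, 1, 1, 1, 0]
gold = [0, 1, 1, 1, 0, 0]
print(f"frame-level accuracy: {frame_accuracy(pred, gold):.2f}")  # 0.67
```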
