ChaLearn 2015 - Challenge and Workshop on Pose Recovery, Action Recognition, and Cultural Event Classification
Date: 2015-06-12
Deadline: 2015-03-09
Venue: Boston, MA, United States
Website: https://gesture.chalearn.org
Topics / Call for Papers
In 2015, ChaLearn is organizing parallel challenge tracks on RGB data for human pose recovery, action/interaction spotting, and cultural event classification. For each track, the first-, second-, and third-place winners will receive awards of 500, 300, and 200 dollars, respectively.
The challenge features three quantitative tracks:
Track 1: Human Pose Recovery: More than 8,000 frames of continuous RGB sequences have been recorded and labeled for the task of human pose recovery, which requires recognizing more than 120,000 human limbs across different people. Examples of labeled frames are shown in Fig. 1.
Track 2: Action/Interaction Recognition: 235 performances of 11 action/interaction categories have been recorded and manually labeled in continuous RGB sequences of different people performing natural isolated and collaborative actions in random order. Examples of labeled actions are shown in Fig. 1.
Track 3: Cultural Event Classification: More than 10,000 images covering 50 different cultural event categories will be considered. In all categories, garments, human poses, objects, and context can serve as cues for recognizing the events, while the images preserve the inherent inter- and intra-class variability of this type of imagery. Example cultural events include Carnival, Oktoberfest, San Fermin, Maha-Kumbh-Mela, and Aoi-Matsuri, among others; see Fig. 2. We gratefully acknowledge the support of NVIDIA Corporation, whose donation of a Tesla K40 GPU was used to create the baseline for this track.
Last modified: 2015-01-18 22:16:28