SUW 2013 - Scene Understanding Workshop
Topics / Call for Papers
Scene understanding started with the goal of building machines that can see like humans, inferring general principles and current situations from imagery, but it has become much broader than that. Applications such as image search engines, autonomous driving, computational photography, vision for graphics, and human-machine interaction were unanticipated, and new applications keep arising as scene understanding technology develops. As a core problem of high-level computer vision, scene understanding has enjoyed considerable success over the past 50 years, yet much more is required to reach a complete understanding of visual scenes.
Thanks to larger, faster, and cheaper computational power, vast amounts of data, and new discoveries in human vision, we now have opportunities to attack the problem that were not available before. While scene understanding is of great importance, it is also notoriously difficult. To help make further progress in this field, we propose to start this yearly workshop at CVPR, inviting everyone in the field to participate in discussions and showcase their latest innovations and ideas.
We have several goals in mind to carry out this mission. First, we aim to provide a yearly summary of progress in the field through a combination of keynote talks, posters, and a panel discussion. We plan to compile a yearbook of this progress by collecting 1-2 page summaries from active researchers, both by invitation and through open submissions. The workshop offers an opportunity to encourage communication, discussion, and collaboration among groups from both academia and industry, and to rethink, from a historical perspective, the path we have taken to approach this problem. It provides a common playground for inspiring discussions and stimulating debates.
Specifically, the workshop will focus on the following aspects:
Scene classification
Modeling and recognition of scene-object interactions
3D spatial understanding from images
Physically grounded scene interpretation
Large-scale, data-driven approaches to recognition
Understanding scenes from depth images and videos
Dataset issues for scene understanding
Related topics in cognitive psychology / human perception
Novel image features, frameworks, and settings for scene understanding