3dRR 2011 - 3rd International IEEE Workshop on 3D Representation and Recognition (3dRR-11)

Date: 2011-11-07

Deadline: 2011-07-18

Venue: Barcelona, Spain

Website: http://www.iccv2011.org/program/workshops

Topics/Call for Papers

Object categorization and scene understanding have long been central goals of computer vision research. Changes in lighting, viewpoint, and pose, as well as intra-class differences, lead to enormous appearance variation, making the problem highly challenging. While advances in machine learning and image feature representations have led to great progress in 2D pattern recognition approaches, recent work suggests that large gains can be made by acknowledging that objects live in a physical, three-dimensional world. Modeling scenes, objects, and their relations in 3D raises several fundamental questions. How can we effectively learn 3D object representations from images or video? What level of supervision is required? How can we infer spatial knowledge of a scene and use it to aid recognition? How can both depth sensors and RGB data be used to enable more descriptive representations of scenes and objects?

Following the success of the 3dRR workshops held at ICCV 2007 and ICCV 2009, we are pleased to organize the third edition of 3dRR in conjunction with ICCV 2011. This workshop will be a great opportunity to bring together experts from multiple areas of computer vision and will provide an arena for stimulating debate. Specific questions we aim to address include:

Object Representation
- What are suitable representations of the 3D geometry of object instances or classes that can be exploited for recognition?
- Can we extend known 2D spatial models (e.g. constellation models) to 3D?

Kinect: Combining Depth and RGB Sensors
- How can we represent and recognize object categories using both RGB and depth sensors? (An illustrative sketch follows this list.)
- How can we estimate scene surfaces and physical interactions?
- How can depth and RGB data help extract object functional parts and affordances?
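
As an illustrative starting point for the questions above, the minimal sketch below shows one basic way to fuse the two modalities: back-projecting an aligned depth map into a colored 3D point cloud with the standard pinhole camera model. The function name, parameters, and intrinsic values are hypothetical placeholders for this sketch, not part of any workshop submission or reference implementation.

import numpy as np

def backproject_depth(depth, rgb, fx, fy, cx, cy):
    # Back-project an aligned depth map into a colored point cloud using the
    # pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth(u, v).
    # depth: (H, W) metric depths, 0 where the sensor returned no reading
    # rgb:   (H, W, 3) color image registered to the depth camera
    # fx, fy, cx, cy: depth-camera intrinsics (placeholder values below)
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    points = np.stack([x, y, z], axis=1)          # (N, 3) 3D coordinates
    colors = rgb[valid].astype(np.float64)        # (N, 3) matching colors
    return np.hstack([points, colors])            # (N, 6) rows of [X, Y, Z, R, G, B]

# Example with synthetic data and hypothetical Kinect-like intrinsics.
depth = np.random.uniform(0.5, 4.0, size=(480, 640))
rgb = np.random.randint(0, 256, size=(480, 640, 3))
cloud = backproject_depth(depth, rgb, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)

Such a point cloud is only a raw geometric representation; how to build category-level models, surface estimates, or affordance descriptions on top of it is precisely the kind of question this workshop addresses.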

Reconstruction and Recognition
- Can recognition and reconstruction be run simultaneously to enhance each other?
- How much does 3D information help?
- How detailed does the 3D representation need to be in order to achieve satisfactory recognition?

Spatial Inference
- How can we represent and infer the depth and orientation of surfaces and free space in indoor and outdoor scenes?
- How can alternative representations, such as depth maps and surface layout estimates, be combined to improve robustness?

Spatial Constraints and Contextual Recognition
- How can we use or exploit different degrees of 3D spatial constraints (e.g. the ground plane) for recognition?
- How can 3D spatial constraints be used for joint recognition of scenes and the objects within them?

Human Vision
- What can we learn from what we know about our own visual system? How do humans represent 3D objects and the 3D environment? Can this inspire computational work?
