
IRW 2015 - Inverse Rendering Workshop

Date: 2015-12-17 - 2015-12-18

Deadline: 2015-08-16

Venue: Santiago de Chile, Chile


Website: http://cvlab-dresden.de/IR2015

Topics / Call for Papers

When a camera measures an intensity for a pixel, that intensity depends on a variety of factors: the camera acquisition parameters, the 3D scene geometry and surface normals, the material, and the illumination environment. This process is mathematically described by the rendering equation, which is used in computer graphics to generate naturalistic images. For a fixed time instant, it requires that geometry, material, and illumination conditions be given.

In computer vision, we want to "inverse render" a scene: given one or multiple images of a dynamic scene, we aim to recover depth, material, illumination conditions, and 3D motion. In the past, many approaches have tackled this problem in isolation - such as depth estimation with known material and known illumination conditions - or have made simplifying assumptions about the world - such as estimating depth under a Lambertian reflectance model.

While "Inverse Rendering" is a challenging task that has been addressed since the early days of computer vision, starting with the work of David Marr, we believe that the time is ripe to revisit the task as a whole. On one hand, provided with enough sensor input, impressive results have already been achieved. On the other hand, humans can solve this task to a remarkable extent from little visual information. Given the recent success of learning-based techniques, also in the field of physics-based computer vision, we strongly believe that the challenge of "Inverse Rendering from a few images" can begin to be addressed.
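For reference, the rendering equation mentioned above is commonly written in its hemispherical form (this standard formulation is not part of the original call, but summarizes the forward process the workshop aims to invert):

```latex
L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o)
  + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\,
    L_i(\mathbf{x}, \omega_i)\, (\omega_i \cdot \mathbf{n})\, \mathrm{d}\omega_i
```

Here L_o is the outgoing radiance at surface point x in direction ω_o, L_e the emitted radiance, f_r the material's BRDF, L_i the incoming radiance, and n the surface normal. Inverse rendering seeks to recover the unknowns on the right-hand side (geometry, f_r, illumination) from observations of the left-hand side.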
To summarize, we define the scope of this "Inverse Rendering" workshop as follows: given one or multiple images of a dynamic scene, we want to recover all (visible) physical properties of the scene, such as dense motion, depth, material, and illumination conditions. The workshop offers a meeting and discussion platform for researchers with diverse backgrounds, such as computer graphics, computer vision, optimization, and machine learning. This will hopefully push the state of the art in "Inverse Rendering" with respect to models, methods, and data. Paper submissions to this workshop are solicited in the areas of:
Joint models for estimating scene properties
Motion and shape estimation under challenging material and/or lighting conditions
Illumination estimation
Shape-from-X in real world settings
Transparent and reflective scene recovery
Material capture
Ground truth data and reference data for Inverse Rendering
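As a minimal illustration of the inverse-rendering idea scoped above, the following sketch shows the simplest possible case: recovering per-pixel albedo from an image when geometry and lighting are known, under the Lambertian assumption the call cites as a typical simplification. The scene and all variable names are illustrative, not from the workshop.

```python
import numpy as np

# Lambertian image formation: I = albedo * max(n . l, 0).
rng = np.random.default_rng(0)

# Known per-pixel unit surface normals and a known light direction.
normals = rng.normal(size=(100, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
light = np.array([0.0, 0.0, 1.0])

# Forward rendering: a ground-truth albedo produces the observed intensities.
albedo_true = rng.uniform(0.2, 0.9, size=100)
shading = np.clip(normals @ light, 0.0, None)
image = albedo_true * shading

# Inverse rendering: with geometry and lighting known, albedo follows by
# division wherever the surface is lit; shadowed pixels stay unrecoverable.
lit = shading > 1e-6
albedo_est = np.zeros_like(albedo_true)
albedo_est[lit] = image[lit] / shading[lit]
```

The real problem the workshop targets is far harder: normals, lighting, and the reflectance model are all unknown and must be estimated jointly, which is why the topics above emphasize joint models and challenging material/lighting conditions.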

Last modified: 2015-07-30 22:08:44