GTWD 2013 - Workshop on Ground Truth - What is a good dataset
Topics/Call for Papers
Among the many challenges in performance analysis in computer vision, ground truth acquisition is the most pressing, since any investigation of an algorithm's properties rests on assumptions about the type of data on which it is run.
But generating good ground truth is a challenging task in its own right. Our workshop aims to raise awareness of the scientific challenge of ground truth generation.
Goals and topics
Submissions dealing with any of the following largely unanswered questions, as well as related ones, are welcome:
- Can we trust synthetically rendered depth or RGB sequences?
- How do we obtain geometry, materials, textures, and animations for such datasets?
- Do we have good enough camera models (ToF, stereo, RGB, etc.) to synthesize realistic noise?
- Can we trust human annotations as ground truth?
- Can we use measurement sciences to create ground truth for real scenes?
- In general, which accuracy of ground truth do we need for which applications?
- How can applications deal with missing or inaccurate ground truth data?
- Can we bootstrap ground truth with vision methods using more data?
- What constitutes a good ground truth dataset?
- When do we have enough ground truth?
- Given a real application, which ground truth dataset is best for studying its performance?
- Can we enable anybody to quickly generate ground truth for her own application?