TASK-CV 2015 - 2nd Workshop on Transferring and Adapting Source Knowledge in Computer Vision
Topics/Call for Papers
This workshop aims to bring together computer vision researchers interested in domain adaptation and knowledge transfer techniques, which are receiving increasing attention in computer vision research.
During the first decade of the 21st century, progress in machine learning has had an enormous impact on computer vision. The ability to learn models from data has become a fundamental paradigm in image classification, object detection, semantic segmentation, and tracking.
A key ingredient of this success has been the availability of annotated visual data, for both training and testing, together with well-established protocols for evaluating the results.
However, annotating visual information is, most of the time, a tedious and error-prone human activity. This limits our ability to address new tasks and/or to operate in new domains. To scale to such situations, it is worth finding mechanisms to reuse the available annotations or the models learned from them.
This aim challenges machine learning theory, whose traditional corpus addresses settings where sufficient labeled data are available for each task and the training data distribution matches the test distribution.
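To make the mismatch concrete, here is a minimal sketch of this failure mode on synthetic data (the data, the shift values, and the choice of logistic regression are illustrative assumptions, not part of the workshop material):

```python
# Minimal illustration of train/test distribution mismatch (domain
# shift) on hypothetical synthetic data: a classifier fit on a "source"
# domain degrades on a translated "target" domain.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)

def make_domain(n, shift):
    # Two Gaussian classes; `shift` translates the whole domain.
    X0 = rng.randn(n, 2) + np.array([0.0 + shift, 0.0])
    X1 = rng.randn(n, 2) + np.array([2.0 + shift, 2.0])
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

Xs, ys = make_domain(500, shift=0.0)  # source: annotated training data
Xt, yt = make_domain(500, shift=3.0)  # target: same task, shifted domain

clf = LogisticRegression().fit(Xs, ys)
print("source accuracy:", clf.score(Xs, ys))  # high: train and test match
print("target accuracy:", clf.score(Xt, yt))  # drops under domain shift
```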
Therefore, transferring and adapting source knowledge (in the form of annotated data or learned models) to perform new tasks and/or operate in new domains has recently emerged as a challenge in developing computer vision methods that are reliable across domains and tasks.
Accordingly, TASK-CV aims to bring together research in transfer learning and domain adaptation for computer vision, as a workshop hosted at ICCV 2015. We invite the submission of research contributions such as:
TL/DA learning methods for challenging paradigms such as unsupervised, incremental, or online learning (a minimal sketch of one simple unsupervised DA baseline appears after this list).
TL/DA focusing on specific visual features, models or learning algorithms.
TL/DA jointly applied with other learning paradigms such as reinforcement learning.
TL/DA in the era of convolutional neural networks (CNNs): adaptation effects of fine-tuning, regularization techniques, transfer of architectures and weights, etc.
TL/DA focusing on specific computer vision tasks (e.g., image classification, object detection, semantic segmentation, recognition, retrieval, tracking) and applications (e.g., biomedical, robotics, multimedia, autonomous driving).
Comparative studies of different TL/DA methods.
Working frameworks with appropriate CV-oriented datasets and evaluation protocols to assess TL/DA methods.
Transferring part representations between categories.
Transferring tasks to new domains.
Handling domain shift due to sensor differences (e.g., low vs. high resolution, power spectrum sensitivity) and compression schemes.
Datasets and protocols for evaluating TL/DA methods.
This is not a closed list; we welcome any other interesting and relevant research for TASK-CV.
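As a concrete pointer to the kind of method within scope, below is a minimal sketch of one classic unsupervised DA baseline, subspace alignment (Fernando et al., ICCV 2013). The subspace dimensionality, the 1-NN classifier, and the standardization assumption are illustrative choices, not a prescribed protocol:

```python
# Minimal sketch of subspace alignment (Fernando et al., ICCV 2013),
# a classic unsupervised DA baseline: compute per-domain PCA subspaces,
# align the source basis to the target one with a linear map, and
# classify target samples with a source-trained classifier.
# Assumes features are standardized; dimensions and data are hypothetical.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def subspace_alignment_predict(Xs, ys, Xt, d=10):
    # Top-d principal directions of each domain, as (D, d) bases.
    Ps = PCA(n_components=d).fit(Xs).components_.T
    Pt = PCA(n_components=d).fit(Xt).components_.T
    # Alignment matrix M = Ps^T Pt; the aligned source basis is Ps @ M.
    M = Ps.T @ Pt
    Xs_aligned = Xs @ (Ps @ M)  # source data mapped toward the target subspace
    Xt_proj = Xt @ Pt           # target data in its own subspace
    # Nearest-neighbor classifier trained on the aligned, labeled source.
    clf = KNeighborsClassifier(n_neighbors=1).fit(Xs_aligned, ys)
    return clf.predict(Xt_proj)
```

Note that no target labels are used: the alignment relies only on unlabeled target data, which is what makes such a baseline unsupervised in the DA sense.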