IMEV 2014 - 3rd International Workshop on Intelligent Mobile and Egocentric Vision (IMEV2014)
Topics/Call for Papers
Abstract - The goal of IMEV 2014 is to identify and promote novel computer vision algorithms, systems, and frameworks that are particularly suitable for intelligent and interactive information processing on mobile and wearable computing platforms. This is the 3rd workshop in the series; this year, we expand the scope to include egocentric vision, an exciting emerging topic in computer vision. IMEV 2014 aims to bring together researchers to present the latest developments and technical solutions in the domain of intelligent mobile and egocentric vision, including novel algorithms, applications, and systems.
Mobile Vision - In the domain of mobile vision, we encourage researchers and engineers to propose interdisciplinary work that integrates different types of visual information with additional sensors, such as GPS, accelerometers, and gyroscopes, for novel applications and services on mobile computing platforms. Mobile vision has also received increasing interest from MPEG, the international standards body. Contributions related to the ongoing computer vision standards in MPEG, namely Compact Descriptors for Visual Search (CDVS) and its extension to video, Compact Descriptors for Video Analysis (CDVA), are also welcome.
Research topics relevant to, but not limited to, the following areas are welcome:
・ Feature extraction on mobile devices
・ Motion analysis and recovery for mobile cameras
・ Gesture/object/location recognition with mobile cameras
・ 3D mobile vision
・ Augmented reality on mobile devices
・ Human computer interaction with mobile devices
・ Computer vision applications on hand-held devices
・ Indexing and retrieval of images and videos for mobile devices
・ Multi-sensor integration for mobile vision
・ Hardware and embedded systems for mobile vision
・ Mobile vision incorporated with robot vision
・ Related MPEG standards: CDVS (Compact Descriptors for Visual Search) and CDVA (Compact Descriptors for Video Analysis)
・ Other topics related to mobile vision.
Egocentric Vision - In the context of egocentric vision, devices like Google Glass allow the capture and recording of rich visual data from an egocentric perspective. Wearable devices provide a unique opportunity to explore how humans understand and interpret the visual input from their eyes. First-person-view (FPV) observations align with a human's egocentric perspective of the surrounding world, whereas most existing computer vision technologies are based on fixed cameras or on scenes shot from viewpoints selected by photographers. Devices like Google Glass have the potential to change the way we view and interact with the things around us. Much remains to be done before wearable devices and their applications become widespread. We encourage researchers to make contributions in wearable visual computing from the varying perspectives of cognitive science, artificial intelligence, computer vision, and machine learning.
Research topics relevant to, but not limited to, the following areas are welcome:
・ Visual feature learning from FPV videos
・ Egocentric video summarization and life-logging
・ Social activity analysis from FPV videos
・ Activity recognition in first-person vision
・ Eye-gaze tracking & attention modeling
・ Object recognition & tracking in FPV videos
・ Scene understanding in first-person video
・ Human Computer Interaction issues in first-person vision
・ Privacy issues in first-person video
・ Other topics related to egocentric vision.
Questions - For any questions or comments about this workshop, please contact Dr. Chu-Song Chen (Email: song-AT-iis.sinica.edu.tw).