
VSCL 2015 - Special Issue on Visual Saliency Computing and Learning

Date: 2015-03-10

Deadline: 2014-09-30

Venue: Online

Website: https://mc.manuscriptcentral.com/tnnls

Topics/Call for Papers

IEEE Transactions on Neural Networks and Learning Systems
Special Issue on
Visual Saliency Computing and Learning
The vision and multimedia communities have long attempted to understand image or video content in a manner analogous to humans. Humans' comprehension of an image or a video clip often depends on the objects that draw their attention. As a result, one fundamental and open problem is to automatically infer the attention-attracting or interesting areas in an image or a video sequence. Recently, a large number of researchers have explored visual saliency models to address this problem. The study of visual saliency models was originally motivated by the goal of simulating humans' bottom-up visual attention, and it is mainly grounded in the biological evidence that human visual attention is automatically drawn to highly salient features in a visual scene, i.e., features that are discriminative with respect to their surrounding environment.
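To make the notion of features "discriminative with respect to their surrounding environment" concrete, a minimal sketch is given below. It is an illustrative example only, not a model endorsed by this call; the NumPy/SciPy implementation and the helper name center_surround_saliency are assumptions. A pixel is treated as salient when its fine-scale local statistics differ from those of a larger surrounding neighborhood.

# Minimal sketch of bottom-up center-surround saliency (illustrative only).
import numpy as np
from scipy.ndimage import uniform_filter

def center_surround_saliency(gray, center_size=3, surround_size=21):
    """Return a saliency map for a 2-D grayscale image, normalized to [0, 1]."""
    gray = gray.astype(np.float64)
    center = uniform_filter(gray, size=center_size)      # fine-scale local mean
    surround = uniform_filter(gray, size=surround_size)  # coarse-scale local mean
    saliency = np.abs(center - surround)                 # contrast against the surround
    return saliency / (saliency.max() + 1e-12)

# Usage: saliency = center_surround_saliency(np.random.rand(240, 320))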
Not surprisingly, this exciting field has attracted considerable research interest. Although there has been significant progress in automatically parsing visual saliency, several challenging problems remain unsolved. For instance, most existing models work well in the simple scenario where a single object stands out from its surroundings, but they cannot handle more realistic scenarios with complex scenes. It is also difficult to determine whether a saliency model behaves in accordance with biological findings.
In recent years, machine learning techniques such as neural networks, probabilistic graphical models, sparse coding, and kernel machines have been successfully adapted to visual saliency detection. These algorithms learn saliency from the given scenes automatically and show strong potential to improve system robustness. More recently, an emerging machine learning technology called deep learning (a.k.a. deep neural networks), whose hallmark is the use of many hierarchical layers of non-linear information processing, may offer a new opportunity for saliency detection to tackle the problems mentioned above, thanks to its powerful modeling and representational capability.
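As an illustration of such hierarchical, non-linear models, the following minimal sketch (assuming PyTorch; the layer sizes and the name saliency_net are purely illustrative and not prescribed by this call) shows a small fully convolutional network mapping an RGB image to a dense saliency map.

# Minimal sketch of a hierarchical non-linear saliency model (illustrative only).
import torch
import torch.nn as nn

saliency_net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),   # low-level features
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),  # mid-level features
    nn.Conv2d(32, 1, kernel_size=1), nn.Sigmoid(),           # per-pixel saliency in [0, 1]
)

image = torch.rand(1, 3, 240, 320)     # dummy RGB input (batch, channels, H, W)
saliency_map = saliency_net(image)     # shape (1, 1, 240, 320)
# Training would minimize, e.g., binary cross-entropy against ground-truth
# fixation or salient-object masks: nn.BCELoss()(saliency_map, target)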
The goal of this special issue is to invite original contributions reporting the latest advances in modeling visual saliency, in deep neural networks and their relation to visual saliency, and in emerging vision and multimedia applications that address these challenges. It will highlight the use of advanced machine learning techniques, for example deep neural networks (DNNs), for interpreting visual saliency. The topics of interest include, but are not limited to:
- Deep neural networks (DNNs), such as deep belief networks (DBNs) and convolutional neural networks (CNNs), for visual saliency learning;
- Computational models for eye fixation prediction in image/video;
- Computational models for salient object detection in image/video;
- Extraction of new features or factors that influence visual attention;
- Saliency detection using multimodal data;
- Foundational issues;
- Visual saliency for various applications (e.g., object recognition, video surveillance, mobile robotics, computer games, human-machine interaction, content-based multimedia analysis, and satellite image analysis);
- Novel machine learning techniques for visual saliency;
- Saliency benchmark databases and evaluation metrics.
Important Dates:
Submission of full papers: 30 September 2014
Notification to authors: 30 December 2014
Submission of revised papers: 10 February 2015
Final decision on revised papers: 10 March 2015
Tentative publication date: Fourth quarter 2015
Guest Editors:
Junwei Han, Northwestern Polytechnical University, China, junweihan2010-AT-gmail.com
Laurent Itti, University of Southern California, USA, itti-AT-pollux.usc.edu
Ling Shao, The University of Sheffield, UK, ling.shao-AT-sheffield.ac.uk
Nuno Vasconcelos, University of California, San Diego, USA, nvasconcelos-AT-ucsd.edu
Jungong Han, Civolution Technology, The Netherlands, jungonghan77-AT-gmail.com
Dong Xu, Nanyang Technological University, Singapore, DongXu-AT-ntu.edu.sg
Submission Instructions
1) Read the information for authors at: http://cis.ieee.org/publications.html
2) Submit the manuscript by September 30, 2014 at the TNNLS webpage (http://mc.manuscriptcentral.com/tnnls) (please select “Visual Saliency Computing and Learning” as the submission type). Please send an email to junweihan2010-AT-gmail.com after you submit the paper to the special issue.