
RATNN 2015 - Special Session on Randomized Algorithms for Training Neural Networks

Date: 2015-11-22 - 2015-11-25

Deadline: 2015-06-01

Venue: Bangkok, Thailand

Keywords:

Website: http://www.ies-2015.org

Topics/Call for Papers

Randomized algorithms for the design of learning-based intelligent systems have received considerable attention from academics, researchers, and domain practitioners because of their practical value in producing effective and efficient solutions to problems with large-scale data sets and real-time constraints. In the era of big data, such learning algorithms become even more important and may well provide a pathway for dealing with the learning tasks that big data brings.

Neural networks, as a powerful tool for data regression and classification, can be trained by minimizing a loss function through updates to the weights and biases. Classical training approaches are based on derivatives of the loss function with respect to the model parameters (the weights and biases). Quite commonly they suffer from local minima, very slow convergence, sensitivity to learning-parameter settings, and over-fitting caused by the model architecture and/or over-intensive training. These factors naturally limit the feasibility of gradient-based algorithms for training neural networks and ultimately block the way to learning from large-scale data sets.

Randomized algorithms for matrices and data have been thoroughly explored in computational mathematics, and some remarkable results have been achieved. For training neural networks, such schemes were examined in the early 1990s; the most influential approach is the random vector functional-link (RVFL) network and its variants, in which the hidden-layer parameters are assigned at random and only the output weights are learned. As the literature shows, however, there is still considerable room for further work on randomized algorithms for training neural networks. Indeed, many interesting and challenging problems related to random learner models and randomized algorithms require attention and deeper investigation.
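To make the randomized training idea concrete, below is a minimal RVFL-style sketch in Python/NumPy. It is an illustration only, not code from the session organizers: the function names (rvfl_fit, rvfl_predict), the tanh activation, the uniform weight initialization, and the parameter n_hidden are all assumptions chosen for clarity. The key point it demonstrates is that the hidden weights and biases are drawn at random and frozen, so training reduces to a single linear least-squares solve for the output weights, with no gradient descent over the loss surface.

    import numpy as np

    def rvfl_fit(X, y, n_hidden=100, seed=0):
        # Draw random input weights and biases; these are never updated.
        rng = np.random.default_rng(seed)
        d = X.shape[1]
        W = rng.uniform(-1.0, 1.0, size=(d, n_hidden))   # random input weights
        b = rng.uniform(-1.0, 1.0, size=n_hidden)        # random biases
        H = np.tanh(X @ W + b)                           # random hidden features
        # Direct input-to-output links, a hallmark of the RVFL architecture.
        D = np.hstack([X, H])
        # Only the output weights are learned, via least squares.
        beta, *_ = np.linalg.lstsq(D, y, rcond=None)
        return W, b, beta

    def rvfl_predict(X, W, b, beta):
        D = np.hstack([X, np.tanh(X @ W + b)])
        return D @ beta

    # Toy usage: regress y = sin(x) from noisy samples.
    X = np.linspace(-3, 3, 200).reshape(-1, 1)
    y = np.sin(X).ravel() + 0.05 * np.random.default_rng(1).standard_normal(200)
    W, b, beta = rvfl_fit(X, y, n_hidden=50)
    print("train MSE:", np.mean((rvfl_predict(X, W, b, beta) - y) ** 2))

Because the only trained parameters come from one linear solve, this style of learner sidesteps the local-minima, convergence-speed, and learning-rate issues mentioned above, at the cost of questions about how the random features should be generated, which is one of the open problems the session targets.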

Last modified: 2015-02-26 23:35:11