PCNN 2016 - International Workshop on Parallel Computations for Neural Networks (PCNN 2016)
Date: 2016-07-18 - 2016-07-22
Deadline: 2016-03-07
Venue: Innsbruck, Austria
Keywords:
Website: https://hpcs2016.cisedu.info
Topics/Call for Papers
The track on Parallel Computations for Neural Networks provides an international forum for reporting progress and recent advances in parallel computing techniques, and in the hardware and software tools, for speeding up the operation and training of traditional and bio-inspired neural network models on modern high-performance computing systems.
The PCNN Workshop topics include (but are not limited to) the following:
Specialized computing hardware, transputers, and FPGA implementations for neural networks
Parallel training algorithms for feed-forward, recurrent, RBF, recirculation and other neural networks
Parallel supervised and unsupervised training and reinforcement learning algorithms
Parallelization of neural network algorithms on many-core systems, clusters, grids and clouds
GPU-based implementations of neural networks
Coarse-grain parallelization of neural networks
Parallel training algorithms for deep-belief networks
Parallelization of cognitive neural models
Parallel implementations and training algorithms for spiking neural networks
Grid-based frameworks for neural network execution and parallelization
Modeling of large-scale neural models using parallel computing techniques
Neural simulators in neuroscience using parallel computing
Computational neuroscience using parallel architectures
Neural network simulation tools and libraries using parallel computing
High-performance machine intelligence
Deep Learning and HPC Systems
Other CFPs
- International Workshop on Modeling and Simulation of Parallel and Distributed Systems (MSPDS 2016)
- International Workshop on Architecture-Aware Simulation and Computing (AASC 2016)
- International Workshop on High Performance Dynamic Reconfigurable Systems and Networks (DRSN 2016)
- International Workshop on High Performance Computing Systems for Biomedical, Bioinformatics and Life Sciences (BILIS 2016)
- 3rd International Workshop on High Performance Computing for Weather, Climate, and Solid Earth Sciences (HPC-WCES 2016)
Last modified: 2016-01-16 23:29:42