PCNN 2018 - International Workshop on Parallel Computations for Neural Networks (PCNN 2018)
Date: 2018-07-16 - 2018-07-20
Deadline: 2018-04-01
Venue: Orléans, France
Website: https://hpcs2018.cisedu.info
Topics / Call for Papers
The track on Parallel Computations for Neural Networks provides an international forum for reporting progress and recent advances in parallel neural computing: techniques, hardware, and software tools for speeding up the operation and training of traditional and bio-inspired neural network models on modern high-performance computing systems, reconfigurable computing platforms, data centers, and the Cloud.
The PCNN Workshop topics include (but are not limited to) the following:
Parallel supervised, unsupervised, deep learning, and reinforcement learning algorithms for neural networks (NN) in future Datacenter and HPC systems
Scalable training and inference schemes, including novel parallel Stochastic Gradient Descent (SGD) methods
Scalable spiking NNs and parallel NN ensembles, including Generative Adversarial NNs
Deep Neural Networks (DNNs)
Machine learning in HPC: optimized networks, high-performance transports and protocols for NNs, e.g., InfiniBand, CEE, Omni-Path, NVLink, MPI, RDMA
Approximate computing
Big data / Cloud frameworks applied to scalable NNs, e.g., Hadoop/MapReduce, Spark, Flink
Parallelization & distribution of NN algorithms on many-core systems, HPC clusters, grids, datacenters and cloud
NN accelerators: neuromorphic processors; CPU-, GPU-, TPU-, and FPGA-based implementations of distributed NNs
Computational neuroscience and models using parallel / distributed NN architectures
Neurodynamics, Complex Systems, and Chaos
Modeling of large-scale neural systems using parallel computing techniques
Mixture Models, Graphical Models, Topic Models and Gaussian Processes
Neural network simulation tools and libraries, including neural simulators in neuroscience
Novel Approaches and Applications
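As an illustration of the scalable-training topic above, the following is a minimal sketch of data-parallel synchronous SGD: each simulated "worker" computes a gradient on its own data shard, the gradients are averaged, and a single shared weight is updated. The function names, toy data, and single-weight model are illustrative assumptions, not part of the workshop's call.

```python
def shard_gradient(w, shard):
    """Gradient of the mean squared loss 0.5*(w*x - y)^2 over one data shard."""
    g = 0.0
    for x, y in shard:
        g += (w * x - y) * x
    return g / len(shard)

def parallel_sgd_step(w, shards, lr=0.1):
    """One synchronous step: average per-shard gradients, then update w.

    In a real distributed setting each shard's gradient would be computed
    on a separate worker and combined with an all-reduce (e.g., via MPI).
    """
    avg_g = sum(shard_gradient(w, s) for s in shards) / len(shards)
    return w - lr * avg_g

# Toy data drawn from y = 2*x, split across two "workers".
shards = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0), (4.0, 8.0)]]
w = 0.0
for _ in range(200):
    w = parallel_sgd_step(w, shards)
print(round(w, 3))  # converges toward 2.0
```

Because every worker sees the same averaged gradient, this synchronous scheme is equivalent to SGD on the union of the shards; much of the research solicited here concerns relaxing that synchrony (asynchronous or local-SGD variants) to reduce communication cost.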
Other CFPs
- International Workshop on Architecture-Aware Simulation and Computing (AASC 2018)
- 8th International Workshop on High Performance Dynamic Reconfigurable Systems and Networks (DRSN 2018)
- International Workshop on High Performance Computing Systems for Biomedical, Bioinformatics and Life Sciences (BILIS 2018)
- 4th International Workshop on High Performance Computing for Weather, Climate, and Solid Earth Sciences (HPC-WCES 2018)
- 8th International Workshop on Machine Learning, Pattern Recognition & Applications (MLPRA 2018)
Last modified: 2018-03-04 16:30:01