4th International Workshop on Parallel Computations for Neural Networks (PCNN 2019)
Date: 2019-07-15 - 2019-07-19
Deadline: 2019-04-01
Venue: Dublin, Ireland
Keywords:
Website: https://hpcs2019.cisedu.info
Topics / Call for Papers
The track on Parallel Computations for Neural Networks provides an international forum for reporting progress and recent advances in parallel neural computing techniques, as well as hardware and software tools for speeding up the operation and training of traditional and bio-inspired neural network models on modern high-performance computing systems, reconfigurable computing platforms, data centers, and the Cloud.
The PCNN Workshop topics include (but are not limited to) the following:
Parallel supervised, unsupervised, deep learning, and reinforcement learning algorithms for neural networks (NN) in future Datacenter and HPC systems
Scalable training and inference schemes, including novel parallel Stochastic Gradient Descent (SGD) methods (see the sketch after this list)
Scalable Spiking NNs and Parallel NN ensembles incl. Generative Adversarial NNs
Deep Neural Networks (DNNs)
Machine Learning in HPC: optimized networks, high-performance transports and protocols for NNs, e.g., InfiniBand, CEE, OmniPath, NVLink, MPI, RDMA
Approximate computing
Big data / Cloud frameworks applied to scalable NNs, e.g., Hadoop/MapReduce, Spark, Flink
Parallelization & distribution of NN algorithms on many-core systems, HPC clusters, grids, datacenters and cloud
NN accelerators: neuromorphic processors and CPU-, GPU-, TPU-, and FPGA-based implementations of distributed NNs
Computational neuroscience and models using parallel / distributed NN architectures
Neurodynamics, Complex Systems, and Chaos
Modeling of large-scale neural models using parallel computing techniques
Mixture Models, Graphical Models, Topic Models and Gaussian Processes
Neural network simulation tools and libraries, including neural simulators in neuroscience
Novel Approaches and Applications
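One of the topics listed above, parallel SGD coupled with MPI-class transports, can be made concrete with a short sketch. The code below is a hypothetical illustration only, not material from the workshop or any submission; it assumes mpi4py and NumPy are available and uses a toy least-squares model in place of a real neural network. Each rank computes a gradient on its local data shard, the gradients are averaged with an allreduce, and every rank applies the same synchronous update.

# Minimal sketch of synchronous data-parallel SGD with gradient averaging.
# Hypothetical example: the model, data, and gradient are toy placeholders.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

rng = np.random.default_rng(seed=rank)          # each rank holds its own data shard
X = rng.standard_normal((256, 10))              # toy features for this rank
y = X @ np.arange(10.0) + 0.1 * rng.standard_normal(256)  # toy targets

w = np.zeros(10)                                # replicated model parameters
lr = 0.01

for step in range(100):
    # Local gradient of the mean-squared error on this rank's shard.
    grad_local = 2.0 * X.T @ (X @ w - y) / len(y)

    # Average gradients across all ranks: the communication step that
    # MPI/RDMA/NVLink-class transports are meant to accelerate.
    grad_global = np.empty_like(grad_local)
    comm.Allreduce(grad_local, grad_global, op=MPI.SUM)
    grad_global /= size

    w -= lr * grad_global                       # identical update on every rank

if rank == 0:
    print("final weights:", np.round(w, 3))

Launched with, e.g., mpirun -n 4 python sgd_allreduce.py (the filename is hypothetical), this is plain synchronous data parallelism; the scalable variants solicited above typically replace the dense allreduce with asynchronous, hierarchical, or gradient-compressed communication.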
Other CFPs
- The 5th International Workshop on Modeling and Simulation of Parallel and Distributed Systems (MSPDS 2019)
- International Workshop on Architecture-Aware Simulation and Computing (AASC 2019)
- 9th International Workshop on High Performance and Dynamic Reconfigurable Systems and Networks (DRSN 2019)
- International Workshop on High Performance Computing Systems for Biomedical, Bioinformatics and Life Sciences (BILIS 2019)
- 4th International Workshop on High Performance Computing and Simulations for Weather, Climate, and Solid Earth Sciences (HPC-WCES 2019)