
HPCDL 2017 - First IEEE International Workshop on HPC based Deep Learning

Date: 2017-11-18 - 2017-11-21

Deadline: 2017-08-07

Venue: New Orleans, USA


Website: https://udi.ornl.gov/hpcdl

Topics/Call fo Papers

In recent years, deep learning research has progressed significantly in computer vision, speech recognition, and natural language processing. By exploiting more than a single node with one GPU, scaling up deep learning algorithms has generated new insights and capabilities across science and engineering. Considerable effort has therefore gone into developing deep learning systems that scale to very large models and large datasets, which raises many new research challenges and opportunities. An emerging frontier for accelerating deep learning model training is scaling across a computing cluster, so it is important to develop effective parallel and distributed algorithms that can train models with millions of trainable parameters on large training sets. Optimizing hyperparameter settings is also a daunting task for deep neural network frameworks; methods from evolutionary computing, as well as uniform and random sampling, can be investigated to widen deep learning scalability on high performance computing platforms. Since deep neural networks typically require many compute and file I/O cycles, learning how to continuously scale deep learning by leveraging and coordinating multiple GPUs and high-speed node-to-node communication (at the server or cluster level) is crucial to further advance deep learning.
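The random hyperparameter sampling mentioned above can be sketched as follows. This is a minimal illustration, not a method prescribed by the workshop: the search-space names and ranges are hypothetical, and a real HPC deployment would evaluate each sampled configuration as an independent cluster job.

```python
import random

# Hypothetical search space for a small network; the parameter names
# and ranges are illustrative, not taken from the workshop text.
SPACE = {
    "learning_rate": lambda: 10 ** random.uniform(-4, -1),   # log-uniform draw
    "batch_size":    lambda: random.choice([32, 64, 128, 256]),
    "num_layers":    lambda: random.randint(2, 8),
}

def sample_configs(n, seed=0):
    """Draw n independent random hyperparameter configurations."""
    rng = random.Random(seed)
    random.seed(seed)
    return [{name: draw() for name, draw in SPACE.items()} for _ in range(n)]

# Each configuration would typically be dispatched as a separate training run.
for config in sample_configs(4):
    print(config)
```

Random search of this kind parallelizes trivially, since the trials are independent, which is one reason it pairs well with large HPC allocations.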
Call for Papers:
The aim of this workshop is to bring together interested researchers from academia, government, and industry working in the fields of data mining, machine learning, and high performance computing to address the challenges involved in developing theoretical and applied methods with deep learning frameworks capable of assimilating large-scale data for actionable intelligence and scientific discoveries. The workshop will provide an interactive forum to engage in discussions, shape research directions, and disseminate state-of-the-art solutions. The focus is on techniques that integrate deep learning approaches with in-memory computing, high performance computing, cloud computing, storage technologies, and data management. Examples of topics include but are not limited to:
MPI-based distributed training
HPC-based hyperparameter tuning
evolutionary-computation-based high performance learning
optimizing I/O and data movement for parallel processing
scalable data analysis and understanding
performance evaluation of HPC-based deep learning
energy efficiency issues in deploying deep learning on HPC platforms
hybrid CPU/GPU deep learning frameworks
applications of HPC-based deep learning to real-world problems
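The first topic above, MPI-based distributed training, centers on a synchronous allreduce of gradients across ranks. The sketch below simulates that pattern sequentially in plain Python so it is self-contained; the model, data, and function names are illustrative, and a real implementation would replace `allreduce_average` with an MPI allreduce (e.g. `MPI.COMM_WORLD.allreduce` via mpi4py), with each shard held by a separate rank.

```python
def local_gradient(params, batch):
    """Toy mean-squared-error gradient for y = w*x on one worker's batch.
    The model and loss are illustrative, not from the workshop text."""
    w = params["w"]
    g = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
    return {"w": g}

def allreduce_average(grads):
    """Average per-worker gradients — the role MPI_Allreduce plays
    in a real cluster deployment."""
    n = len(grads)
    return {k: sum(g[k] for g in grads) / n for k in grads[0]}

def train_step(params, shards, lr=0.1):
    """One synchronous data-parallel step: each shard stands in for one rank."""
    grads = [local_gradient(params, shard) for shard in shards]
    avg = allreduce_average(grads)
    return {k: params[k] - lr * avg[k] for k in params}

# Data generated from w = 3, split across two simulated ranks.
shards = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (4.0, 12.0)]]
params = {"w": 0.0}
for _ in range(100):
    params = train_step(params, shards)
print(params["w"])  # converges toward 3.0
```

Because every rank applies the same averaged gradient, all replicas stay in sync without a parameter server; the allreduce is the main communication cost, which is why high-speed node-to-node interconnects matter at scale.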

Last modified: 2017-05-13 11:35:54