HPBDC 2017 - 2017 IEEE International Workshop on High-Performance Big Data Computing
Topics/Call for Papers
Managing and processing large volumes of data, or “Big Data”, and gaining meaningful insights from it is a significant challenge facing the distributed computing community. This challenge affects a wide range of domains, including health care, biomedical research, Internet search, finance and business informatics, and scientific computing. As data-gathering technologies and data sources produce an ever-growing flood of input data, massive quantities of data on the order of hundreds or thousands of petabytes are expected to require processing in the future. It is therefore critical that the data-intensive computing middleware used to process such data (such as Hadoop, HBase, and Spark) be diligently designed for high performance and scalability, in order to meet the growing demands of Big Data applications.
The explosive growth of Big Data has led many industrial firms to adopt High Performance Computing (HPC) technologies to meet the requirements of processing and storing huge amounts of data. Modern HPC systems and their associated middleware (such as MPI and parallel file systems) have exploited advances in HPC technologies (multi-/many-core architectures, accelerators, RDMA-enabled networking, NVRAMs, and SSDs) during the last decade. However, Big Data middleware (such as Hadoop, HBase, and Spark) has not embraced these technologies. These disparities are pushing HPC and Big Data processing onto ‘divergent trajectories’.
The International Workshop on High-Performance Big Data Computing (HPBDC) aims to bring HPC and Big Data processing onto a ‘convergent trajectory’. The workshop provides a forum for scientists and engineers in academia and industry to present their latest research findings on major and emerging topics in this field.
HPBDC 2017 will be held in conjunction with the 31st IEEE International Parallel and Distributed Processing Symposium (IPDPS 2017) in Orlando, Florida, USA, on Monday, May 29th, 2017.
Other CFPs
- 19th Workshop on Advances in Parallel and Distributed Computational Models
- 22ND INTERNATIONAL WORKSHOP ON HIGH-LEVEL PARALLEL PROGRAMMING MODELS AND SUPPORTIVE ENVIRONMENTS
- International Workshop on Graph Algorithms Building Blocks (GABB’2017)
- 7th IEEE Workshop Parallel / Distributed Computing and Optimization (PDCO 2017)
- 6th International Workshop on Parallel and Distributed Computing for Large Scale Machine Learning and Big Data Analytics
Last modified: 2016-11-16 12:01:07