HPC4BD 2016 - International Workshop on High Performance Computing for Big Data (HPC4BD)
Topics/Call for Papers
Processing large datasets to extract information and knowledge has always been a fundamental problem. Today this problem is further exacerbated, as the data a researcher or a company needs to cope with can be immense in volume, distributed in location, and unstructured in format. Recent advances in computer hardware and storage technologies have allowed us to gather, store, and analyze such large-scale data. However, without scalable and cost-effective algorithms that use these resources efficiently, neither the resources nor the data itself can serve science and society at their full potential.
Analyzing Big Data requires a vast amount of storage and computing resources. We need to untangle the big, puzzling information we have, and while doing so, we need to be fast and robust: the information we need may be crucial in a life-or-death situation. We need to be accurate: a single piece of misleading information extracted from the data can cause an avalanche effect. Each problem has its own characteristics and priorities; hence, the best combination of algorithm and architecture differs from one application to another.
This workshop aims to bring together people who work on data-intensive and high performance computing in industry, research labs, and academia to share both the problems posed by Big Data in various application domains and the knowledge required to solve them.
All novel data-intensive computing techniques, data storage and integration schemes, and algorithms for cutting-edge high performance computing architectures that target the use of Big Data are of interest to the workshop. Topics include, but are not limited to:
- parallel algorithms for data-intensive applications,
- scalable data and text mining and information retrieval,
- using Hadoop, MapReduce, Spark, Storm, and Streaming to analyze Big Data,
- energy-efficient data-intensive computing,
- deep learning with massive-scale datasets,
- querying and visualization of large network datasets,
- processing large-scale datasets on clusters of multicore and manycore processors and accelerators,
- heterogeneous computing for Big Data architectures,
- Big Data in the Cloud,
- processing and analyzing high-resolution images using high-performance computing,
- using hybrid infrastructures for Big Data analysis.
Other CFPs
- International Workshop on Heterogeneous and Unconventional Cluster Architectures and Applications (HUCAA)
- International Workshop on Interfaces and Architectures for Scientific Data Storage (IASDS)
- International Workshop on Parallel Programming Models and Systems Software for High-End Computing (P2S2)
- International Workshop on Power-aware Algorithms, Systems, and Architectures (PASA)
- International Workshop on Sustainable HPC Cloud (SHPCloud)