HPC4BD 2015 - 2nd International Workshop on High Performance Computing for Big Data
Topics/Call for Papers
Processing large datasets to extract information and knowledge has always been a fundamental problem. Today this problem is further exacerbated, as the data a researcher or a company needs to cope with can be immense in volume, distributed in location, and unstructured in format. Recent advances in computer hardware and storage technologies have allowed us to gather, store, and analyze such large-scale data. However, without scalable and cost-effective algorithms that utilize these resources efficiently, neither the resources nor the data itself can serve science and society to their full potential.
Analyzing Big Data requires vast storage and computing resources. We need to untangle the immense, puzzling information we have, and while doing so we need to be fast and robust: the information we seek may be crucial in a life-or-death situation. We also need to be accurate: a single piece of misleading information extracted from the data can trigger an avalanche effect. Each problem has its own characteristics and priorities; hence, the best combination of algorithm and architecture differs from one application to another.
This workshop aims to bring together people who work on data-intensive and high-performance computing in industry, research labs, and academia to share the problems posed by Big Data in various application domains and the knowledge required to solve them.
All novel data-intensive computing techniques, data storage and integration schemes, and algorithms for cutting-edge high-performance computing architectures that target the utilization of Big Data are of interest to the workshop. Topics include, but are not limited to:
- parallel algorithms for data-intensive applications,
- scalable data and text mining and information retrieval,
- using Hadoop and MapReduce to analyze Big Data,
- energy-efficient data-intensive computing,
- querying and visualization of large network datasets,
- processing large-scale datasets on clusters of multicore and manycore processors and accelerators,
- heterogeneous computing for Big Data architectures,
- Big Data in the Cloud,
- processing and analyzing high-resolution images using high-performance computing,
- using hybrid infrastructures for Big Data analysis.