BigDataCloud 2015 - 4th Workshop on Big Data Management in Clouds
Topics/Call for Papers
The fourth edition of the Workshop on Big Data Management in Clouds will be held in Vienna, Austria. BigDataCloud 2015 follows its successful previous editions, held in conjunction with EuroPar. Its goal is to bring together the data management and Cloud / Grid / P2P communities in order to complement Big Data handling issues with a comprehensive system and infrastructure perspective.
Workshop Description
As data volumes increase at exponential speed in more and more scientific application fields, the challenges posed by handling Big Data gain increasing importance. Large scientific experiments, such as climate modelling, genome mapping, and high-energy physics simulations, generate data volumes reaching petabytes per year, which are then used for real-time or offline processing. Initially designed for powerful and expensive supercomputers, such applications have seen increasing adoption on clouds, exploiting their elasticity and economic model.
However, running such applications efficiently on clouds is challenging. One open challenge is how to handle this “data deluge”. Sharing, disseminating, and analyzing large data sets remains a critical issue, despite the deployment of petascale computing systems and optical networking speeds reaching up to 100 Gbps. While MapReduce covers a large fraction of the development space, many applications are still better served by other models and systems. In this context, we need to embrace new programming models, scheduling schemes, and hybrid infrastructures, and to scale out from single datacenters to geographically distributed deployments in order to cope with these challenges effectively.
The BigDataCloud workshop provides a platform for the dissemination of recent research efforts that explicitly aim at addressing these challenges. It supports the presentation of advanced solutions for the efficient management of Big Data in the context of Cloud computing, as well as new development and deployment efforts in running data-intensive computing workloads. In particular, we are interested in how Cloud-based technologies can meet the data-intensive scientific challenges of HPC applications that are not well served by current supercomputers or grids and are being ported to Cloud platforms. The goal of the workshop is to assess the current state of the field, introduce future directions, and present architectures and services for future Clouds supporting data-intensive computing.