SCRAMBL 2017 - 3rd International Workshop on Scalable Computing For Real-Time Big Data Applications
Topics/Call for Papers
1. Scientific objective
The continuous growth of our society has led to complex patterns of behavior and to the need to optimize certain aspects of our day-to-day activities. Time-sensitive applications such as real-time power management for smart grids, traffic control, or network monitoring require on-demand large-scale information processing and real-time responses. The data these applications gather on a regular basis from monitoring sensors, social networks, and the web in general exceeds the normal storage and compute capacity of even the largest datacenters. Thus, a crucial aspect of real-time big data analytics is the ability to constantly transform and summarize the data on its path from the source where it is generated to the end-points of the workflows that yield the final result. However, this ability introduces several difficult challenges: data heterogeneity (i.e., variability), data quality (missing or approximate values), data temporality (i.e., high velocity), versioning and historic access requirements (e.g., the time window to analyze), etc. How to deal with these challenges is not well understood, both in terms of what patterns emerge at the application level and in terms of how to leverage the capabilities of state-of-the-art large-scale computing platforms such as clouds and supercomputing infrastructures. Consequently, this workshop aims to bridge the gap between infrastructures and real-time big data analytics patterns by focusing on how to design scalable models, algorithms, and runtime systems that address the aforementioned challenges.
2. Workshop focus
SCRAMBL provides an ideal venue for practitioners, researchers, developers, and industrial/governmental partners to come together to present and discuss leading research results, use cases, innovative ideas, challenges, and opportunities that arise from adopting real-time big data analytics at large scale. System-level researchers and practitioners active in high-performance computing, cloud computing, and distributed systems in general will have an opportunity to understand what patterns, use cases, and challenges arise at the application level in the context of real-time big data analytics, which informs the design of infrastructure and runtime systems. In turn, application-level researchers will have an opportunity to learn about new low-level technologies and runtime systems that facilitate the design of algorithms, workflows, and parallelization. Finally, another important category is use-case providers, whether from industry, government, or academia, who will have the opportunity to learn about the options for and feasibility of implementing their desired real-time big data analytics solutions in practice.
Other CFPs
- Workshop on Clusters, Clouds and Grids for Life Sciences (CCGrid-Life 2017)
- 2nd International Workshop on Distributed Big Data Management (DBDM 2017)
- Workshop on the Integration of Extreme Scale Computing and Big Data Management and Analytics (EBDMA 2017)
- Workshop on Optimisation techniques for Resource Management and Application orchestration in Cloud Computing
- Sixth IEEE International Workshop on Cloud Computing Interclouds, Multiclouds, Federations, and Interoperability
Last modified: 2016-10-29 10:19:33