IASDS 2016 - International Workshop on Interfaces and Architectures for Scientific Data Storage (IASDS)
Topics / Call for Papers
High-performance computing simulations and large scientific experiments generate tens of terabytes of data, and these data sizes grow each year. Existing systems for storing, managing, and analyzing data are being pushed to their limits by these applications, and new techniques are necessary to enable efficient data processing for future simulations and experiments.
The needs of scientific applications have driven a continuous increase in the scale of parallel systems, as demonstrated by the evolution of the Top500 list. However, this growth in computation has not been matched by a corresponding increase in I/O capability. For example, earlier supercomputers were designed with roughly 1 Gbps of parallel I/O bandwidth for every TFLOPS of compute, whereas current systems provide closer to 1 Gbps for every 10 TFLOPS. This widening bottleneck makes it ever more pressing to use the I/O subsystem as efficiently as possible. Scalable I/O has already been identified as critical for PFLOPS-class systems. Future exascale systems, forecast for 2018-2020, will presumably have on the order of a billion cores and will be hierarchical in both platform and algorithms. This hierarchy implies a longer pipeline between cores and storage in both directions, further exposing the I/O latency already experienced by current MPP architectures.
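As a rough illustration of what this ratio means in practice, the short sketch below estimates the time to drain a single application checkpoint to storage under the two bandwidth-per-FLOPS ratios mentioned above. The machine size and checkpoint size in the sketch are hypothetical, chosen only to make the comparison concrete; they are not figures from this call.

    def checkpoint_time_seconds(checkpoint_tb, compute_tflops, gbps_per_tflop):
        """Seconds to drain one full checkpoint at the machine's aggregate I/O bandwidth."""
        aggregate_gbps = compute_tflops * gbps_per_tflop  # total parallel I/O bandwidth (Gbit/s)
        checkpoint_gbit = checkpoint_tb * 8 * 1000        # terabytes -> gigabits
        return checkpoint_gbit / aggregate_gbps

    # Hypothetical figures, for illustration only (not from this call):
    machine_tflops = 10_000   # a 10 PFLOPS machine
    checkpoint_tb = 100       # a 100 TB application checkpoint

    print("1 Gbps per TFLOPS   : %5.0f s" % checkpoint_time_seconds(checkpoint_tb, machine_tflops, 1.0))
    print("1 Gbps per 10 TFLOPS: %5.0f s" % checkpoint_time_seconds(checkpoint_tb, machine_tflops, 0.1))

Under these assumptions the checkpoint drain time grows from about 80 seconds to about 800 seconds, an order-of-magnitude increase that tracks the drop in I/O bandwidth per FLOPS.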
This workshop will provide a forum for engineers and scientists to present and discuss their most recent work related to the storage, management, and analysis of data for scientific workloads. Emphasis will be placed on forward-looking approaches to tackle the challenges of storage at extreme scale or to provide better abstractions for use in scientific workloads.
Topics of interest include, but are not limited to:
parallel file systems
scientific databases
active storage
scientific I/O middleware
extreme scale storage
analysis of large data sets
NoSQL storage solutions and techniques
energy-aware file systems
in-memory storage systems