ESCAPE 2012 - The Second International Workshop on Extreme Scale Computing Application Enablement - Modeling and Tools (ESCAPE-2012)
Topics/Call for Papers
The Second International Workshop on Extreme Scale Computing APplication Enablement - Modeling and Tools (ESCAPE)
to be held in conjunction with
The 14th International Conference on High Performance Computing and Communications (HPCC 2012)
Liverpool, UK. June 25-27, 2012
Silicon-based technology has advanced over the past two decades as Moore's law predicted. A large number of computing cores, complex memory subsystems, and I/O devices are now packed onto modern VLSI chips, so the computational power per unit area has grown tremendously. To deliver petascale performance today, huge amounts of resources are deployed as interconnected computing units, while programming languages and paradigms are pushed beyond their initial designs in pursuit of maximal parallel efficiency. As a consequence, application enablement has become an extremely challenging task. Furthermore, in the foreseeable future, exa- and higher scale systems, referred to here as "extreme scale computing" systems, are expected to incorporate even more massive numbers of multi-core processors, which will pose critical challenges for enabling, optimizing, and tuning applications. There is therefore an increasingly urgent need for efficient performance modeling, tuning, and enablement tools that address these challenges in extreme scale computing.
Various performance tool suites have been developed in the high performance computing community to monitor, control, and scale up the performance of large scale scientific applications. However, as their overhead grows with the number of monitored components and with the complexity of the system itself, the scalability of these performance tools on extreme-scale systems is approaching critical limits. On the other hand, the abundant computational resources of extreme scale computing can themselves be leveraged for performance analysis and tuning, especially since applications often utilize only a subset of the available resources. For example, a memory-bound application can leverage idle computing cores for performance evaluation and tuning. Moreover, extreme scale computing makes it practical to apply asymptotic analysis, which usually requires far less computational intensity. Extreme scale computing thus presents both challenges in application enablement and opportunities to be explored.
The goal of this workshop is to promote community-wide discussion of the methodologies, analyses, and software tools that can enable effective and scalable performance evaluation for extreme scale computing systems. We seek submissions of papers that invent new techniques, introduce new methodologies, propose new research directions, and discuss approaches to unsolved issues.
The topics of this workshop cover methodological as well as practical aspects of application enablement in extreme scale computing. Topics of interest include (but are not limited to):
Performance modeling for all aspects of extreme scale systems, e.g., cores, memory, interconnect, and I/O
Algorithms for self-tuning and -optimization of applications in extreme scale computing
Profiling and debugging tools for applications in extreme scale computing
Performance, tracing and simulation tools with extreme scalability
Methodologies and tools for extreme scale performance evaluation and verification
Empirical and deployment studies of extreme scale performance tools: challenges, techniques and lessons