SoOS 2012 - International Workshop On The Growing Problems with Scalable, Heterogeneous Infrastructures

Date: 2012-07-10

Deadline: 2012-01-31

Venue: Madrid, Spain

Website: https://arcos.inf.uc3m.es/ispa2012

Topics/Call for Papers

With the successful implementation of petaflop computing back in 2008 [ref], manufacturers strive to tackle the next barrier: exaflop computing. However, this figure is generally misleading, as the peak performance of a machine is essentially calculated from the number and performance of its individual processing units; even the sustained performance tests through LINPACK mostly stress the mathematical units rather than the interconnects between nodes. In other words, modern performance measurements essentially reflect the size of the system and not so much how efficiently it solves a specific class of problems. With the introduction of multi- and manycore processors, the scale of modern systems increases drastically, even though their effective frequency remains basically unchanged. It is therefore generally predicted that we will reach exaflop computing by the end of this decade.
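To make that distinction concrete, the sketch below shows how such a theoretical peak figure (Rpeak) is typically derived as a plain product over the processing units, with a sustained (Rmax-style) value as some fraction of it. All numbers in it are illustrative assumptions, not the specification of any real machine.

    #include <stdio.h>

    /* A minimal sketch: theoretical peak is just a product over the
     * processing units, independent of the interconnect. All values
     * below are illustrative assumptions. */
    int main(void) {
        const double nodes           = 10000.0; /* assumed node count       */
        const double cores_per_node  = 16.0;    /* assumed cores per node   */
        const double clock_ghz       = 2.0;     /* assumed clock rate (GHz) */
        const double flops_per_cycle = 8.0;     /* assumed SIMD/FMA width   */

        double rpeak_gflops = nodes * cores_per_node * clock_ghz * flops_per_cycle;
        printf("Rpeak: %.1f TFLOP/s\n", rpeak_gflops / 1000.0);

        /* A sustained LINPACK run (Rmax) reports only some fraction of
         * this, and even that fraction mostly reflects the floating-point
         * units, not the communication between nodes. */
        double assumed_linpack_efficiency = 0.75; /* illustrative only */
        printf("Rmax (assumed): %.1f TFLOP/s\n",
               assumed_linpack_efficiency * rpeak_gflops / 1000.0);
        return 0;
    }

Note that neither number in this sketch depends on the interconnect at all, which is precisely the point made above.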
Given these circumstances, the question arises whether reaching the exaflop mark is worthwhile for anything other than research interest: already, only a few applications can actually exploit the scale of existing infrastructures, let alone handle the scale of an exaflop machine, if that means scale rather than clock rate. We can distinguish in particular between embarrassingly parallel applications, which benefit from the number of resources but place few demands on their interconnects, and tightly coupled applications, which are severely restricted by interconnect limitations; and the number of embarrassingly parallel applications, as well as their resource needs, is itself typically limited. Problems that would really benefit from scale, in order to improve the accuracy and speed of calculation, also frequently exhibit an exponential resource need, or at least growth by some power of n. This means that achieving an incremental gain in efficiency requires a disproportionate number of additional resources: a linear growth in the number of resources does not deliver the required increase in efficiency.
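As a hedged illustration of that resource argument, consider a hypothetical problem whose work grows as n^3 (the exponent is our assumption, chosen only for the example): doubling the problem size at constant runtime then needs eight times the resources, so linear resource growth buys only a modest increase in problem size.

    #include <stdio.h>
    #include <math.h>

    /* Sketch of the scaling argument for an assumed work law of n^k:
     * multiplying the resources by 'factor' at constant runtime only
     * grows the feasible problem size by factor^(1/k). */
    int main(void) {
        const double k = 3.0; /* assumed polynomial order of the work */
        for (int factor = 2; factor <= 16; factor *= 2) {
            double gain = pow((double)factor, 1.0 / k);
            printf("%2dx resources -> %.2fx problem size\n", factor, gain);
        }
        return 0;
    }

Compiled with e.g. cc scale.c -lm, this prints that even 16 times the resources enlarge such a problem by only about a factor of 2.5.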
Yet manufacturers still struggle with essential problems on both the hardware and the software side to increase the efficiency of larger-scale systems at all: in particular, the limited scalability of the interconnect, memory, and so on poses issues that reduce the effective performance at larger scales rather than increase it. Promising approaches employ a mix of different scalability and consistency models, which, however, restricts the usage domain accordingly; multi-level applications in particular exploit such hybrid models, but their development is still difficult and very rarely supported.
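One common reading of such a multi-level hybrid model (an assumption on our part, not something the text specifies) is message passing between nodes combined with shared-memory threading within a node, each level with its own scalability and consistency behaviour; the skeleton below sketches that combination.

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    /* Minimal sketch of one possible hybrid model: MPI across nodes,
     * OpenMP threads within a node. Each level has its own consistency
     * behaviour, which is part of what makes such applications hard to
     * develop. */
    int main(int argc, char **argv) {
        int provided, rank;
        /* Request an MPI library that tolerates threads. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        #pragma omp parallel
        {
            /* Shared-memory level: threads see the node's memory directly. */
            printf("rank %d, thread %d of %d\n",
                   rank, omp_get_thread_num(), omp_get_num_threads());
        }

        /* Distributed level: consistency only via explicit messages. */
        MPI_Barrier(MPI_COMM_WORLD);
        MPI_Finalize();
        return 0;
    }

Such a program would typically be built with an MPI compiler wrapper with OpenMP enabled, e.g. mpicc -fopenmp hybrid.c.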
This workshop focuses on these particular obstacles to increasing scale and examines potential means to address them. It is thereby not restricted to the hardware side of the problem, but addresses the full scope from hardware limitations through algorithm and computing theory to new means of application development and execution. The workshop is aimed at experts from all fields related to high-performance computing, including manufacturers, system designers, compiler and operating system developers, and application programmers and users. Depending on the number of submissions, the workshop will be broken into multiple strands according to topic, such as hardware, theory of computation, and software development.

Last modified: 2011-12-21 10:49:54