AASC 2016 - International Workshop on Architecture-Aware Simulation and Computing (AASC 2016)
Date: 2016-07-18 - 2016-07-22
Deadline: 2016-03-07
Venue: Innsbruck, Austria
Keywords:
Website: https://hpcs2016.cisedu.info
Topics/Call for Papers
To keep pace with the performance increase predicted by Moore's Law, homogeneous/heterogeneous processor aggregates have been extensively adopted in both the High Performance Computing (HPC) and embedded system domains. However, such a steady increase (often attained simply by scaling the number of computing cores within a chip) has been threatened by several architectural and technological constraints: limited parallelization opportunities (at the data, task, or even instruction level); reduced memory throughput and complex cache hierarchies; limited communication bandwidth; and thermal, power and energy constraints.
On the other hand, the prevailing heterogeneous computing architectures, which often integrate different types of coprocessors (e.g., Intel Xeon Phi) and accelerator components (e.g., GPUs, FPGAs, etc.), have introduced complex challenges in efficiently programming and implementing HPC applications.
In the embedded domain, different compromises have been demanded to cope with strict energy-efficiency requirements. Despite the several approaches considered at the processor architecture level (e.g., ARM big.LITTLE clusters) and at the coprocessor/accelerator level (e.g., mobile GPUs, reconfigurable SoCs, etc.), attaining the ever-increasing performance levels demanded in this domain still poses complex challenges.
Therefore, it is widely recognized that next-generation HPC and embedded systems can only benefit from the hardware's full potential if both processor and architecture features are taken into account at all development stages - from the early algorithmic design to the final implementation.
The AASC workshop strives to address all aspects related to these issues, including, but not limited to:
Hardware-aware compute/memory-intensive simulations of real-world problems in computational science and engineering domains (e.g., applications in electrical, mechanical, physics, geological, biological, or medical engineering).
Architecture-aware approaches for large-scale parallel computing, including scheduling, load-balancing and scalability studies.
Architecture-aware parallelization on HPC platforms, including multi-/many-core architectures comprising coprocessor/accelerator components (e.g., Intel Xeon Phi, GPUs, FPGAs, etc.).
Architecture-aware approaches for energy-efficient implementations of HPC or embedded applications (e.g., ARM big.LITTLE, mobile GPUs, reconfigurable SoCs, etc.).
Programming models and tool support for parallel heterogeneous platforms (e.g., CUDA, OpenCL, OpenACC, etc.).
Software engineering, code optimization, and code generation strategies for parallel systems with multi-/many-core processors.
Performance and memory optimization tools and techniques (including cache optimization, data reuse, data streaming, etc.) for parallel systems with multi-core processors (a brief illustrative sketch of such data reuse on a heterogeneous platform follows this list).
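To give a flavour of the programming-model and data-reuse topics listed above, the following is a minimal illustrative CUDA sketch (not part of the original call; tile size, stencil weights, and problem size are arbitrary example choices): a 1D 3-point stencil that stages each block's input tile in on-chip shared memory so that neighbouring threads reuse already-loaded elements instead of re-reading them from global memory.

// Minimal illustrative CUDA sketch: shared-memory tiling for data reuse.
#include <cstdio>
#include <cuda_runtime.h>

#define TILE 256      // threads per block; one output tile per block
#define RADIUS 1      // 3-point stencil needs a one-element halo per side

__global__ void stencil1d(const float *in, float *out, int n) {
    __shared__ float tile[TILE + 2 * RADIUS];
    int gidx = blockIdx.x * blockDim.x + threadIdx.x;  // global element index
    int lidx = threadIdx.x + RADIUS;                   // index within the tile

    // Stage this block's elements plus the halo into shared memory.
    tile[lidx] = (gidx < n) ? in[gidx] : 0.0f;
    if (threadIdx.x < RADIUS) {
        int left = gidx - RADIUS, right = gidx + TILE;
        tile[lidx - RADIUS] = (left >= 0) ? in[left] : 0.0f;
        tile[lidx + TILE]   = (right < n) ? in[right] : 0.0f;
    }
    __syncthreads();

    // Each output value reuses elements loaded by neighbouring threads.
    if (gidx < n)
        out[gidx] = 0.25f * tile[lidx - 1] + 0.5f * tile[lidx] + 0.25f * tile[lidx + 1];
}

int main() {
    const int n = 1 << 20;
    float *in, *out;  // unified memory keeps the host-side code short
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = (float)i;

    stencil1d<<<(n + TILE - 1) / TILE, TILE>>>(in, out, n);
    cudaDeviceSynchronize();

    printf("out[1] = %f\n", out[1]);  // expect 1.0 for this linear input
    cudaFree(in);
    cudaFree(out);
    return 0;
}

The shared-memory staging here stands in for the broader cache-optimization and data-streaming techniques solicited above; equivalent structures can be expressed in OpenCL or OpenACC.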
Other CFPs
- International Workshop on High Performance Dynamic Reconfigurable Systems and Networks (DRSN 2016)
- International Workshop on High Performance Computing Systems for Biomedical, Bioinformatics and Life Sciences (BILIS 2016)
- 3rd International Workshop on High Performance Computing for Weather, Climate, and Solid Earth Sciences (HPC-WCES 2016)
- 7th International Workshop on Machine Learning, Pattern Recognition & Applications (MLPRA 2016)
- International Workshop on Parallel Evolutionary Computation (PEC 2016)
Last modified: 2016-01-16 23:28:41