OMHI 2015 - International Workshop on On-chip memory hierarchies and interconnects: organization, management and implementation
Topics/Call for Papers
Performance of current chip multiprocessors (CMPs), whether consisting only of CPU cores or of heterogeneous CPU/GPGPU processors, is mainly dominated by data access latencies and bandwidth constraints. To alleviate this problem, current multi-core and many-core processors include large amounts of on-chip memory, organized either as caches or as main memory. Cores fetch the requested data by traversing an on-chip interconnect. Latencies are mainly determined by the on-chip memory hierarchy and the interconnect design, and they can grow dramatically with the number of cores. This problem worsens as core counts increase. Thus, new cache hierarchy and interconnect organizations are required to address it.
On-chip cache hierarchies have typically been built using Static Random Access Memory (SRAM), the fastest existing electronic memory technology. However, SRAM presents important design challenges in terms of density and leakage currents, so it is unlikely that future cache hierarchies will be implemented with SRAM alone, especially in the context of multi- and many-core processors. Instead, alternative technologies (e.g., eDRAM or STT-RAM) that address leakage and density are being explored for large CMPs, enabling the design of alternative on-chip hierarchies. Finally, taking advantage of these complex hierarchies requires efficient management, including, among others, thread allocation policies, cache management strategies, and network-on-chip (NoC) design, in both 2D and 3D implementations.
This workshop will provide a forum for engineers and scientists to address these challenges and to present new ideas for on-chip memory hierarchies and interconnects, focusing on organization, management and implementation.
Authors are invited to submit high-quality papers representing original work in (but not limited to) the following topics:
- On-chip memory hierarchy organizations: homogeneous and heterogeneous technologies, including persistent memories.
- On-chip memory management: prefetching, replacement algorithms, data replication and promotion.
- Thread allocation to cores, scheduling, workload balancing and programming.
- Coherence problems in heterogeneous GPGPU-based and tile-based CMPs.
- Cache hierarchy/coherence protocol/network co-design.
- Cache-aware performance studies for real applications and programmability issues.
- Efficient network design with emerging technologies (photonics and wireless).
- Power and energy management.
- Tradeoffs among performance, energy and area.
- Moving data between on-chip and off-chip memories.
- 3D-stacked memory organizations.
Last modified: 2015-03-12 23:45:47