
JSSPP 2020 - 23rd Workshop on Job Scheduling Strategies for Parallel Processing (JSSPP 2020)

Date: 2020-05-18

Deadline: 2020-01-27

Venue: New Orleans, Louisiana, USA

Website: http://www.ipdps.org/ipdps2020

Topics/Call for Papers

The JSSPP workshop addresses all scheduling aspects of parallel processing, including cloud and grid (HPC) systems as well as "mixed/hybrid" or otherwise specialized systems.
Large parallel systems have been in production for 25 years, creating the need to schedule work on such systems. Since 1995, JSSPP has provided a forum for the research and engineering community working in this area. Initially, parallel systems were very static: machines were built in fixed configurations and replaced wholesale every few years. Much of the workload was equally static, consisting of parallel scientific jobs running on a fixed number of nodes. Systems were managed primarily via batch queues, and the user experience was far from interactive; jobs could wait in queues for days or even weeks.
A little over 10 years ago, the emergence of large-scale, interactive web applications, together with massive virtualization, began to drive the development of a new class of (cloud) systems and schedulers. These systems use virtual machines and/or containers to run "services" that, unlike scientific jobs, essentially never terminate. This created systems and schedulers with vastly different properties. Moreover, the enormous demand for computing resources resulted in a commercial market of competing providers. At the same time, increasing demands for power and interactivity have pushed scientific platforms in a similar direction, blurring the lines between these platforms.
Nowadays, parallel processing is much more dynamic and connected. Many workloads are interactive and use varying amounts of resources over time. Complex parallel infrastructures can now be built on the fly, using resources from different sources offered at different prices and quality-of-service levels. Capacity planning has become more proactive, with resources acquired continuously in order to stay ahead of demand. The interaction model between job and resource manager is shifting toward negotiation, in which the two sides agree on resources, price, and quality of service. "Hybrid" systems are also common, where a (virtualized) infrastructure hosts a mix of competing workloads/applications, each with its own resource manager that must somehow be co-scheduled. These are just a few examples of the open issues facing our field.
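To make the negotiation model above concrete, here is a minimal, purely illustrative Python sketch; all names (JobRequest, ResourceOffer, negotiate) are hypothetical and not tied to any real scheduler. It only shows the shape of the interaction: a job states its resource, price, and quality-of-service constraints, and agreement reduces to picking the cheapest provider offer that satisfies all of them.

    # Illustrative sketch only; JobRequest, ResourceOffer, and negotiate are
    # hypothetical names, not the API of any existing resource manager.
    from __future__ import annotations
    from dataclasses import dataclass

    @dataclass
    class JobRequest:
        nodes: int        # number of nodes the job asks for
        hours: float      # expected runtime
        max_price: float  # budget ceiling per node-hour
        min_qos: float    # required availability, e.g. 0.99

    @dataclass
    class ResourceOffer:
        nodes: int
        price_per_node_hour: float
        qos: float        # availability the provider is willing to guarantee

    def negotiate(request: JobRequest, offers: list[ResourceOffer]) -> ResourceOffer | None:
        # Return the cheapest offer meeting the job's resource, price, and
        # QoS constraints, or None if no agreement is possible (the job
        # would then revise its request or wait).
        feasible = [
            o for o in offers
            if o.nodes >= request.nodes
            and o.price_per_node_hour <= request.max_price
            and o.qos >= request.min_qos
        ]
        return min(feasible, key=lambda o: o.price_per_node_hour, default=None)

    if __name__ == "__main__":
        job = JobRequest(nodes=64, hours=12.0, max_price=0.50, min_qos=0.99)
        market = [
            ResourceOffer(nodes=128, price_per_node_hour=0.45, qos=0.995),
            ResourceOffer(nodes=64,  price_per_node_hour=0.30, qos=0.95),   # QoS too low
            ResourceOffer(nodes=256, price_per_node_hour=0.60, qos=0.999),  # too expensive
        ]
        print("agreed offer:", negotiate(job, market))

Real negotiations would of course be iterative and two-sided (counter-offers, preemption, SLA penalties); the point of the sketch is only the agreed-upon triple of resources, price, and quality of service.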
From its very beginning, JSSPP has strived to balance practice and theory in its program. This combination provides a rich environment for technical debate about scheduling approaches among both academic researchers and participants from industry.
Building on this tradition, JSSPP welcomes both regular papers and descriptions of Open Scheduling Problems (OSP) in large-scale scheduling. A lack of real-world data often substantially hampers the research community's ability to engage with scheduling problems in a way that has real-world impact. Our goal with the OSP venue is to build a bridge between the production and research worlds, in order to facilitate direct collaborations and impact.
Call for Regular Papers
JSSPP solicits papers that address any of the challenges in parallel scheduling, including:
Design and evaluation of new scheduling approaches.
Performance evaluation of scheduling approaches, including methodology, benchmarks, and metrics.
Workloads, including characterization, classification, and modeling.
Consideration of additional constraints in scheduling systems, like job priorities, price, accounting, load estimation, and quality of service guarantees.
Impact of scheduling strategies on system utilization, application performance, user friendliness, cost efficiency, and energy efficiency.
Scaling and composition of very large scheduling systems.
Cloud provider issues: capacity planning, service level assurance, reliability.
Interaction between schedulers at different levels, from the processor level up to whole single- or even multi-owner systems.
Interaction between applications/workloads, e.g., efficient co-scheduling of batch jobs and containers/VMs within a single system.
Experience reports from production systems or large scale compute campaigns.
