
PFSW 2014 - 1st Programmable File Systems Workshop

Date: 2014-06-23 - 2014-06-27

Deadline: 2014-02-21

Venue: Vancouver, Canada


Website: http://users.soe.ucsc.edu/~carlosm/PFSW/Home.html

Topics/Call for Papers

A major milestone in the evolution of digital computers was the development of the stored-program concept and the design of Turing-complete machines as opposed to fixed-program computers. Yet we still treat an increasingly important subsystem of computers largely as a fixed-program computer: file and storage systems. Among the key reasons for this history is the justified fear that (1) any interface change in file and storage systems will make legacy data inaccessible and lock the data to a particular system, and (2) programmability will increase the probability of data loss.
Yet with the advent of open source file systems a new usage pattern has emerged: users isolate subsystems of these file systems and put them in contexts not foreseen by their original designers. Examples include: (1) an object-based storage back end gets a new RESTful front end to become an Amazon Web Services S3-compliant key-value store, (2) a data placement function is reused as a placement function for customer accounts, and (3) the HDF5 scientific data access library is embedded into parallel storage systems. This trend shows a desire to take existing file system services and compose them into new services, a desire that is frequently stymied by the difficulty of bringing new services with advanced functionality up to production quality and a sufficiently low probability of data loss. At the same time, government and industry are investing heavily in the development of new, extremely scalable, and highly efficient distributed I/O stacks that largely abandon traditional file and storage system interfaces.
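Purely for illustration, the first example above can be pictured as a thin REST front end composed over an existing object interface. The Python sketch below shows one way such a composition might look; it is not taken from any of the systems mentioned, and the ObjectStore class and its put/get methods are hypothetical stand-ins for a real storage back end.

# Minimal sketch: wrapping a (hypothetical) object-store back end with an
# S3-style RESTful front end, as in example (1) above. Not a real S3
# implementation; just enough to illustrate composing existing services.
from http.server import BaseHTTPRequestHandler, HTTPServer


class ObjectStore:
    """Hypothetical object-store back end: a flat mapping of key -> bytes."""

    def __init__(self):
        self._objects = {}

    def put(self, key, data):
        self._objects[key] = data

    def get(self, key):
        return self._objects.get(key)


store = ObjectStore()


class S3LikeHandler(BaseHTTPRequestHandler):
    """PUT /<key> stores an object; GET /<key> retrieves it."""

    def do_PUT(self):
        length = int(self.headers.get("Content-Length", 0))
        store.put(self.path.lstrip("/"), self.rfile.read(length))
        self.send_response(200)
        self.end_headers()

    def do_GET(self):
        data = store.get(self.path.lstrip("/"))
        if data is None:
            self.send_response(404)
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)


if __name__ == "__main__":
    # Example usage:
    #   curl -X PUT --data-binary @file http://localhost:8000/mykey
    #   curl http://localhost:8000/mykey
    HTTPServer(("localhost", 8000), S3LikeHandler).serve_forever()

The point of the sketch is the composition, not the front end itself: the object interface stays unchanged while a new, independently developed service is layered on top of it.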
Designing programmability into file and storage systems has the following benefits: (1) it achieves a greater separation of storage performance engineering from storage reliability engineering, making it possible to optimize storage systems in a wide variety of ways without risking years of investment in code hardening; (2) it creates an environment that encourages people to build a new stack of storage system abstractions, both domain-specific and across domains, including sophisticated optimizers that rely on machine learning techniques; (3) it informs commercial parallel file system vendors about the design of low-level APIs for their products, so that they can match the versatility of open source storage systems without having to release their entire code base as open source; and (4) it offers a historical opportunity to leverage the tension between the versatility of open source storage systems and the reliability of proprietary systems to lead the community of storage system designers.
Goal
This one-day workshop focuses on frameworks that make file and storage systems programmable while addressing the risks of data interface change. The workshop aims to serve as a venue for leaders in the file system and storage community to exchange ideas outside the tradition of half a century of classic file and storage systems research, which has focused on a small set of unchanging interfaces.
Topics
Addressing programmability of the non-volatile part of the memory hierarchy, the workshop seeks contributions on relevant topics, including but not limited to:
- Programming models
- Data interface change management and isolation
- Interface metadata management and propagation
- Compile-time and runtime storage optimization
- Data and task placement in large-scale storage stacks
- Local and distributed performance management and isolation
- Nonstop storage system evolution
Program Committee
(still waiting for additional confirmations)
John Bent, EMC
Randal Burns, Johns Hopkins University
Yong Chen, Texas Tech University
Maya Gokhale, Lawrence Livermore National Laboratory
Gary Grider, Los Alamos National Laboratory
Dean Hildebrand, IBM Almaden
Dries Kimpe, Argonne National Laboratory
Scott Klasky, Oak Ridge National Laboratory
Jay Lofstead, Sandia National Laboratories
Barney Maccabe, Oak Ridge National Laboratory
Carlos Maltzahn, University of California at Santa Cruz
Adam Manzanares, HGST
Pat McCormick, Los Alamos National Laboratory
Sage Weil, Inktank Storage
