SWEET 2014 - 3rd International Workshop on Scalable Workflow Enactment Engines and Technologies
Topics/Call for Papers
One of the goals of computer system engineering has always been to develop systems that are easy to use and understand while putting great computational power at the fingertips of end users. Developments in Big Data processing that make it accessible to programmers and non-programmers alike bring this goal within reach for data analytics and scientific data processing, by enabling simple access to large pools of data storage and computational resources. More specifically, the emergence of Big Data processing frameworks with declarative workflow languages is facilitating the convergence of data-intensive, workflow-based processing with traditional data management, giving users the best of both worlds. Workflows are used extensively, both for data analytics and in computational science. Common to the broad range of workflow systems currently in use are their relatively simple programming models, which are usually exposed through a visual programming style and backed by a well-defined model of computation. While the flexibility of workflows for rapid prototyping of science pipelines makes them appealing to computational scientists, recent applications of workflow technology to data-intensive science show the need for a robust underlying data management infrastructure. At the same time, on the data management side of science and data analytics, workflow-like models and languages are beginning to emerge, making it possible for users who are close to the data domain but have no application development resources to assemble complex data processing pipelines.
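To make the idea of a declarative, workflow-style pipeline concrete, the following minimal sketch (illustrative only; the stage names, composition helper, and data are hypothetical and not tied to any particular framework) declares a small word-count dataflow as composable stages, in the spirit of the declarative workflow languages discussed above:

    # Minimal illustrative sketch (hypothetical; not part of the call): a small,
    # declarative word-count dataflow built by composing named stages.
    from functools import reduce

    def pipeline(*stages):
        """Compose stages left-to-right into a single callable dataflow."""
        return lambda data: reduce(lambda acc, stage: stage(acc), stages, data)

    # Each stage is declared as a pure function over the data flowing through it.
    def tokenize(lines):
        return (word for line in lines for word in line.split())

    def lowercase(words):
        return (w.lower() for w in words)

    def word_count(words):
        counts = {}
        for w in words:
            counts[w] = counts.get(w, 0) + 1
        return counts

    # The workflow is a declaration of what to run, not how to run it.
    word_count_flow = pipeline(tokenize, lowercase, word_count)

    if __name__ == "__main__":
        sample = ["Workflows scale", "workflows compose"]
        print(word_count_flow(sample))  # {'workflows': 2, 'scale': 1, 'compose': 1}

In a production setting the same declaration could be handed to a distributed engine for scheduling and execution, which is exactly the separation of concerns the workshop is interested in.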
Workshop Focus and Goals
The SWEET 2014 workshop is the third in a series that began with SWEET 2012. The original idea for the workshop comes from the observation that rapid progress in models and patterns for cloud computing is facilitating a new generation of hybrid database/workflow systems that address large data processing problems in a scalable way.
The collection of papers and talks from SWEET has so far confirmed that such hybrids are emerging not only in e-science but also in Web-scale data processing, e.g., at Google, Yahoo, and Twitter. At the same time, the SWEET'13 papers report robust research on core workflow features, including scheduling, distributed engines, and workflows for HPC architectures. With this in mind, the goal of the SWEET'14 workshop is to bring together researchers and practitioners to explore the potential of scalable processing of large data sets with applications organized as workflows. Following our previous calls, we have identified the following specific areas of interest:
Architectures and performance: convergence of data processing pipeline and workflow processing, associated architectural and performance issues;
Best practices: best practices in data-intensive workflow models and programming paradigms for designing efficient, effective and reusable pipelines;
Usability: lowering barriers to entry into programming data-intensive pipelines, for example by offering declarative and/or graphical interfaces that actively assist in the design process.
Topics
The workshop aims to address issues of (i) Architectures and performance, (ii) Models and Languages, and (iii) Applications of cloud-based workflows. The topics of the workshop include, but are not strictly limited to:
Architectures and performance:
architectures for data processing pipelines, data-intensive workflows, DAGs of MapReduce jobs, dataflows, and data-mashups,
cloud-based, scalable workflow enactment,
efficient data storage for data-intensive workflows,
optimizing execution of data-intensive workflows,
workflow scheduling in cloud computing.
Modelling for performance as well as usability:
languages for data processing pipelines, data-intensive workflows, dataflows, and data-mashups,
verification and validation of data-intensive workflows,
programming models for cloud computing,
access control and authorization models, privacy, security, risk and trust issues,
workflow patterns for data-intensive workflows,
interfaces for supporting the design and debugging of complex data-processing pipelines and workflows,
tools for supporting communities for exchanging data-processing pipelines and workflows.
Applications of cloud-based workflow:
big data analytics,
bioinformatics,
data mashups,
semantic web data management,
data-driven journalism.