WBDB 2014 - Fifth Workshop on Big Data Benchmarking
Topics/Call for Papers
The Fifth Workshop on Big Data Benchmarking (5th WBDB) will be held on August 5-6, 2014 in Potsdam, Germany.
The objective of the WBDB workshops is to make progress towards development of industry standard application-level benchmarks for evaluating hardware and software systems for big data applications.
To be successful, a benchmark should be:
Simple to implement and execute;
Cost effective, so that the benefits of executing the benchmark justify its expense;
Timely, with benchmark versions keeping pace with rapid changes in the marketplace; and
Verifiable so that results of the benchmark can be validated via independent means.
Based on discussions at the previous big data benchmarking workshops, two benchmark proposals are currently under consideration. One, called BigBench (ACM SIGMOD Conference 2013), extends the Transaction Processing Performance Council's Decision Support benchmark (TPC-DS) with semi-structured and unstructured data and new queries targeted at those data. The second is based on a Deep Analytics Pipeline for event processing (see http://cc.readytalk.com/play?id=1hws7t).
Topics
To make progress towards a big data benchmarking standard, the workshop will explore a range of issues including:
Data features: New feature sets of data, including high-dimensional data, sparse data, event-based data, and enormous data sizes.
System characteristics: System-level issues, including large-scale and evolving system configurations, shifting loads, and heterogeneous technologies for big data and cloud platforms.
Implementation options: Different implementation approaches such as SQL, NoSQL, the Hadoop software ecosystem, and different implementations of HDFS.
Workload: Representative big data business problems and corresponding benchmark implementations. Specification of benchmark applications that represent the different modalities of big data, including graphs, streams, scientific data, and document collections.
Hardware options: Evaluation of new hardware options, including different types of HDD, SSD, and main memory; large-memory systems; and new platform options such as dedicated commodity clusters and cloud platforms.
Synthetic data generation: Models and procedures for generating large-scale synthetic data with requisite properties (see the illustrative sketch after this list).
Benchmark execution rules: For example, data scale factors, benchmark versioning to account for rapidly evolving workloads and system configurations, and benchmark metrics.
Metrics for efficiency: Measuring the efficiency of the solution, e.g. based on costs of acquisition, ownership, energy, and/or other factors, while encouraging innovation and avoiding benchmark escalations that favor large inefficient configurations over small efficient configurations.
Evaluation frameworks: Tool chains, suites and frameworks for evaluating big data systems.
Early implementations: Early implementations of the Deep Analytics Pipeline or BigBench, and lessons learned in benchmarking big data applications.
Enhancements: Proposals to augment these benchmarks, e.g. by adding more data genres (such as graphs) or incorporating a range of machine learning and other algorithms, are welcome and encouraged.
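As a purely illustrative aid for the synthetic data generation topic above (not part of either benchmark proposal), the following Python sketch generates a scale-factor-driven event log with a skewed key distribution, one of the requisite properties such generators often need to reproduce; all function names, parameters, and file names here are hypothetical.

# Hypothetical sketch: generate a synthetic event log whose size is driven by a
# TPC-style scale factor and whose keys follow a heavy-tailed distribution.
import csv
import random

def generate_events(scale_factor=1, events_per_sf=100_000, num_users=10_000, seed=42):
    """Yield (user_id, timestamp, value) rows; user_ids are drawn from a
    heavy-tailed (Pareto) distribution so that a few keys dominate."""
    rng = random.Random(seed)
    num_events = scale_factor * events_per_sf
    for i in range(num_events):
        # paretovariate returns a heavy-tailed draw >= 1.0; clamp into the user range
        user_id = min(int(rng.paretovariate(1.2)), num_users)
        timestamp = 1_400_000_000 + i          # synthetic, monotonically increasing
        value = round(rng.gauss(100.0, 15.0), 2)
        yield user_id, timestamp, value

if __name__ == "__main__":
    with open("events_sf1.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["user_id", "ts", "value"])
        writer.writerows(generate_events(scale_factor=1))

Raising scale_factor scales the data volume linearly while keeping the statistical properties (skew, value distribution) fixed, which is the usual intent of scale factors in benchmark data generators.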