
BigData 2013 - Fourth Workshop on Big Data Benchmarking

Date: 2013-10-06 - 2013-10-09

Deadline: 2013-07-30

Venue: California, USA


Website: https://www.ischool.drexel.edu/bigdata/bigdata2013

Topics/Call for Papers

The objective of the WBDB workshops is to make progress towards the development of industry-standard, application-level benchmarks for evaluating hardware and software systems for big data applications.
To be successful, a benchmark should be:
Simple to implement and execute;
Cost-effective, so that the benefits of executing the benchmark justify its expense;
Timely, with benchmark versions keeping pace with rapid changes in the marketplace; and
Verifiable so that results of the benchmark can be validated via independent means.
Based on discussions at the previous big data benchmarking workshops, two benchmark proposals are currently under consideration. One, called BigBench (to appear at the ACM SIGMOD Conference 2013), is based on extending the Transaction Processing Performance Council's Decision Support benchmark (TPC-DS) with semi-structured and unstructured data and new queries targeted at those data. The second is based on a Deep Analytics Pipeline for event processing (see http://cc.readytalk.com/play?id=1hws7t).
Topics
To make progress towards a big data benchmarking standard, the workshop will explore a range of issues including:
Data features: New feature sets of data, including high-dimensional data, sparse data, event-based data, and enormous data sizes.
System characteristics: System-level issues, including large-scale and evolving system configurations, shifting loads, and heterogeneous technologies for big data and cloud platforms.
Implementation options: Different implementation options, such as SQL, NoSQL, the Hadoop software ecosystem, and alternative implementations of HDFS.
Workload: Representative big data business problems and corresponding benchmark implementations. Specification of benchmark applications that represent the different modalities of big data, including graphs, streams, scientific data, and document collections.
Hardware options: Evaluation of new hardware options, including different types of HDD, SSD, and main memory, large-memory systems, and new platform options such as dedicated commodity clusters and cloud platforms.
Synthetic data generation: Models and procedures for generating large-scale synthetic data with requisite properties.
Benchmark execution rules: For example, data scale factors, benchmark versioning to account for rapidly evolving workloads and system configurations, and benchmark metrics.
Metrics for efficiency: Measuring the efficiency of the solution, e.g. based on costs of acquisition, ownership, energy, and/or other factors, while encouraging innovation and avoiding benchmark escalations that favor large inefficient configurations over small efficient ones.
Evaluation frameworks: Tool chains, suites and frameworks for evaluating big data systems.
Early implementations: Implementations of the Deep Analytics Pipeline or BigBench, and lessons learned in benchmarking big data applications.
Enhancements: Proposals to augment these benchmarks, e.g. by adding more data genres (such as graphs) or by incorporating a range of machine learning and other algorithms, are welcome and encouraged.
Related Links
First WBDB, May 2012, San Jose, CA.
Second WBDB, December 2012, Pune, India.
Third WBDB, July 2013, Xi'an, China.
