BigGraph 2014 - International Workshop on High Performance Big Graph Data Management, Analysis, and Mining
Topics/Call for Papers
Modern Big Data increasingly appears in the form of complex graphs and networks. Examples include the physical Internet, the World Wide Web, online social networks, phone networks, and biological networks. In addition to their massive sizes, these graphs are dynamic, noisy, and sometimes transient. They also exhibit all five Vs (Volume, Velocity, Variety, Value, and Veracity) that define Big Data. However, many graph-related problems are computationally difficult, so big graph data brings unique challenges, as well as numerous opportunities, for researchers to solve problems that are significant to our communities.
Big graph problems are currently solved using several complementary paradigms. The most popular approach is perhaps to exploit parallelism, through specialized algorithms for supercomputers, shared-memory multicore and manycore systems, and heterogeneous CPU-GPU systems. However, because real-world graphs are sparse and highly irregular, very few parallel implementations actually deliver high performance; the major obstacles to scaling and efficiency are irregular data dependencies, poor locality, and the high synchronization costs of current approaches. In addition to parallelism, researchers are developing approximation algorithms that use sampling to compress and summarize graph data. Streaming algorithms are also being considered for scenarios in which updates arrive too quickly for the entire graph to be processed in a single pass, and out-of-core algorithms are necessary for massive graphs that do not fit in the main memory of a typical system. Graph-based solutions apply to problems from many diverse disciplines, including routing and transportation, social networks, bioinformatics, computational science, health care, security, and intelligence analysis.
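As a concrete illustration of the sampling paradigm mentioned above, the sketch below maintains a uniform random sample of edges from a large or streaming edge list using classic reservoir sampling. The function and parameter names are illustrative only and are not tied to any particular system or paper in this call.

    import random

    def reservoir_sample_edges(edge_stream, k, seed=None):
        # Maintain a uniform random sample of k edges from a (possibly
        # unbounded) stream of (u, v) pairs using classic reservoir sampling.
        rng = random.Random(seed)
        reservoir = []
        for i, edge in enumerate(edge_stream):
            if i < k:
                reservoir.append(edge)
            else:
                # Edge i replaces a reservoir slot with probability k / (i + 1).
                j = rng.randint(0, i)
                if j < k:
                    reservoir[j] = edge
        return reservoir

    # Example: sample 500 edges from a synthetic stream of 100,000 random edges.
    if __name__ == "__main__":
        rng = random.Random(7)
        stream = ((rng.randrange(1000), rng.randrange(1000)) for _ in range(100000))
        sample = reservoir_sample_edges(stream, k=500, seed=42)
        print(len(sample))

The sampled subgraph can then stand in for the full graph when estimating global properties (for example, degree distributions or triangle density) at a fraction of the memory cost.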
This workshop aims to bring together researchers working across these paradigms on a unified platform to share their work and exchange ideas. We solicit novel and original research contributions related to big graph data management, analysis, and mining (algorithms, software systems, applications, best practices, performance). Significant work-in-progress papers are also encouraged. Papers may address any of the following areas, including but not limited to:
● Parallel algorithms for big graph analysis on HPC systems
● Heterogeneous CPU-GPU solutions to solve big graph problems
● Extreme-scale computing for large graph, tensor, and network problems
● Sampling and summarization of large graphs
● Graph algorithms for large-scale scientific computing problems
● Graph clustering, partitioning, and classification methods
● Scalable graph topology measurement: diameter approximation, eigenvalues, triangle and graphlet counting
● Parallel algorithms for computing graph kernels
● Inference on large graph data
● Graph evolution and dynamic graph models
● Graph databases, novel querying and indexing strategies for RDF data
● Novel applications of big graph problems in bioinformatics, health care, security, and social networks
● New software systems and runtime systems for big graph data mining
Papers should be at most 8 pages long, formatted using the style of the Big Data 2014 conference proceedings. Papers in PDF format can be sent to any of the program organizers by email by 11:59 pm PDT (Pacific Daylight Time) on the paper submission deadline.