
ParLearning 2013 - Workshop on Parallel and Distributed Computing for Machine Learning and Inference Problems (ParLearning'2013)

Date: 2013-05-24

Deadline: 2013-01-01

Venue: Boston, USA


Website: https://cloud.hdu.edu.cn/hpdic2013/

Topics/Call for Papers

Authors are invited to submit manuscripts of original, unpublished research that demonstrates a strong interplay between parallel/distributed computing techniques and learning/inference applications, such as algorithm design and library/framework development on multicore/manycore architectures, GPUs, clusters, supercomputers, and cloud computing platforms, targeting applications including but not limited to:
Learning and inference using large scale Bayesian Networks
Scaling up frequent subgraph mining or other graph pattern mining techniques
Scalable implementations of learning algorithms for massive sparse datasets
Scalable clustering of massive graphs or graph streams
Scalable algorithms for topic modeling
HPC enabled approaches for emerging trend detection in social media
Comparison of various HPC infrastructures for learning
GPU-accelerated implementations for topic modeling or other text mining problems
Knowledge discovery from scientific applications with massive datasets (climate, systems biology etc.)
Performance analysis of key machine-learning algorithms from newer parallel and distributed computing frameworks (Apache Mahout, Apache Giraph, IBM Parallel Learning Toolbox, GraphLab, etc.)
Domain-specific languages for Parallel Computation
GPU integration for Java/Python
Submission Details
Submitted manuscripts may not exceed 8 single-spaced, double-column pages using a 10-point font on 8.5x11-inch pages (IEEE conference style), including figures, tables, and references. Additional format requirements will be posted on the IPDPS web page (www.ipdps.org). All papers must be submitted through the EDAS portal (check back later for the URL).
In addition to regular papers, we also encourage shorter 4- or 6-page papers describing work in progress.
