
HighStream'2017 - High-Performance Data Stream Mining Workshop

Date: 2017-11-18 - 2017-11-21

Deadline: 2017-08-07

Venue: New Orleans, USA

Keywords

Website: https://highstream17.github.io

Topics/Call for Papers

Learning from data streams has emerged as one of the most vital topics in contemporary machine learning and data mining. Data streams pose several challenges for modern intelligent systems: a potentially unbounded volume of data, instances arriving at high speed and in varying intervals, a changing and evolving decision space, limited access to ground truth, and the need to manage heterogeneous forms of information. Volume and velocity are difficult to handle on their own, yet they must also be considered from the perspective of non-stationary problems affected by a phenomenon known as concept drift. This problem has been thoroughly studied over the last decade, with a specific focus on classification tasks. However, the research community has started to address it in other contexts, such as data preprocessing, regression, multi-label classification, association rule mining, imbalanced learning, graph and XML mining, social and mobile networks, and novelty detection. It is now recognized that imbalanced domains constitute a broader and important problem, posing relevant challenges for both supervised and unsupervised learning tasks and presenting various embedded difficulties in an increasing number of real-world applications.
Tackling the issues raised by data stream mining is of high importance to both academia and industry. For researchers, these challenges offer an exciting opportunity to develop adaptive, evolving, and efficient learning methods able to handle such difficult cases. For industry, many real-world problems actually arrive in the form of streams, so such methods are vital for tackling these tasks. These tasks require methods that enable preemptive, real-time action in an increasingly fast-paced world and that can constantly update and evolve knowledge and models in accordance with the current state of the data. Additionally, with the ever-increasing scale and complexity of these problems, we need high-performance computing environments (clusters, cloud computing, GPUs) and fast, incremental, ideally single-pass algorithms that offer the highest possible predictive power at the lowest time and computational cost.
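
To make the notions of single-pass learning and concept drift concrete, the sketch below (not part of the call itself) simulates a drifting stream and trains an incremental classifier in prequential, test-then-train fashion. The data, parameters, and the choice of scikit-learn's SGDClassifier are illustrative assumptions, standing in for any incremental stream learner; drift-aware methods of the kind solicited by the workshop would additionally detect and react to the change.

import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()                       # illustrative incremental learner
classes = np.array([0, 1])

correct, seen = 0, 0
for t in range(10000):                        # instances arrive one at a time
    x = rng.normal(size=(1, 2))
    # simulated abrupt concept drift: the labelling rule changes at t = 5000
    label = int(x[0, 0] > 0) if t < 5000 else int(x[0, 1] > 0)
    y = np.array([label])
    if seen > 0:                              # prequential (test-then-train) evaluation
        correct += int(model.predict(x)[0] == label)
    model.partial_fit(x, y, classes=classes)  # single pass: each instance seen once
    seen += 1

print("prequential accuracy:", round(correct / seen, 3))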

Last modified: 2017-05-13 11:36:34