
ARTEMIS 2013 - 4th ACM/IEEE ARTEMIS 2013 International Workshop on Analysis and Retrieval of Tracked Events and Motion in Imagery Streams

Date: 2013-10-21

Deadline: 2013-06-05

Venue: Barcelona, Spain


Website: https://acmmm13.org/program/workshops/

Topics/Call for Papers

Cognitive video supervision and event analysis in video sequences are critical tasks in many multimedia applications. Methods, tools and algorithms that aim to detect and recognize high-level concepts and their respective spatio-temporal and causal relations, in order to identify semantic video activities, actions and procedures, have been a focus of the research community in recent years.
This research area has a strong impact on many real-life multimedia applications based on the semantic characterization and annotation of video streams in various domains (e.g., sports, news, documentaries, movies and surveillance), covering both broadcast and user-generated videos. Although a first critical issue is the estimation of quantitative parameters describing where events are detected, recent trends address the analysis of multimedia footage by applying image and video understanding techniques to the detected/tracked motion.
That is, the challenge is becoming the generation of qualitative descriptions of the meaning of (human) motion, describing not only where, but also why an event is being observed.
Towards this end, multiple key topics should be tackled, such as: object and agent detection; audio-visual tracking of people; scene and region segmentation and categorization; motion-based concept formation; event vs. context interpretation and reasoning; and video browsing, indexing and retrieval. Traditional approaches to event detection in videos assume well-structured environments, and they fail to operate (i) on large-scale databases such as the Internet or (ii) in a largely unsupervised way, under conditions different from those on which cognitive systems have been trained. Another drawback of current methods is that they (iii) focus on narrow domains using specific concept detectors such as "human faces", "cars", or "buildings".
This workshop seeks original, highly innovative research in the area of cognitive systems devoted to image and video understanding across multiple domains. The goals of this workshop are: (i) fundamental research in the area of multimedia, in the scope of detecting/identifying high-level concepts, actions, events and procedures in video streams; (ii) robust solutions to targeted problems of high impact in real-life multimedia applications; and (iii) ongoing research/progress on national and international research projects.
Papers submitted to this workshop are encouraged to address a wide range of topics related to image and video understanding for event detection in video streams; these topics include (but are not limited to):
Ontology-based event and human motion mining, indexing, browsing and retrieval;
Methods for robust detection of semantic concepts in video streams;
Annotation of events and human motion and activity in large-scale multimedia content;
Semantic and event-based summarization, matching and retrieval of monitored video footage;
Identification of spatio-temporal, causal and contextual relations of human events;
Enhancement of event analysis based on attention models or multiscale/multisource data fusion;
Event- and context-oriented relevance feedback algorithms;
Strategies for context learning (background scene and its regions, objects and agents);
Scene, region and object categorization in human-populated scenarios;
Research projects in the respective fields (international standardization activities, national/international research projects).
