
MR2AMC 2018 - Multimodal Representation, Retrieval, and Analysis of Multimedia Content (MR2AMC) 2018

Date: 2018-04-10 - 2018-04-12

Deadline: 2017-12-17

Venue: Miami, Florida, USA


Website: https://www.mr2amc.multimodalinteraction.com

Topics / Call for Papers

Multimodal Representation, Retrieval, and Analysis of Multimedia Content (MR2AMC) is the IEEE Multimedia Information Processing and Retrieval (MIPR) workshop series on the understanding of multimodal multimedia content. MR2AMC aims to provide an international forum for researchers in the field of multimedia data processing, analysis, search, mining, and management leveraging multimodal information. The workshop invites researchers and practitioners from both academia and industry to present original research contributions as well as practical system designs, implementations, and applications of multimodal multimedia information processing, mining, representation, management, and retrieval. The broader context of the workshop encompasses Web mining, AI, the Semantic Web, multimedia information retrieval, event understanding, and natural language processing. For more information, write to mr2amc.group-AT-gmail.com
Rationale
Social media platforms, together with advances in digital devices and affordable network infrastructure, have created an abundance of multimedia content on the web. They enable anyone with an Internet connection to easily create and share ideas, opinions, updates, and preferences through multimedia content with millions of other people around the world. This necessitates novel techniques for the efficient processing, analysis, mining, and management of multimedia data in order to provide multimedia-related services. Such techniques should also be able to search and retrieve information from within multimedia content. Since significant contextual information (such as spatial, temporal, and crowd-sourced information) is available in addition to the multimedia content itself, it is important to leverage multimodal information, because different modalities encode different knowledge structures. However, decoding these structures into useful knowledge from a huge amount of multimedia content is complex for several reasons. To date, most semantic analysis, sentiment analysis, multimedia representation, multimedia search and retrieval, opinion mining, and event understanding engines work in a unimodal setting, and only a limited body of work uses multimodal information for these tasks. In this light, the workshop will focus on the use of multimodal information to analyze, represent, mine, and manage multimedia content in support of semantic- and sentiment-based multimedia analytics problems. It will also focus on multimedia systems that build upon semantic and sentiment information derived from multimedia data.
Accepted papers of MR2AMC 2018 will be published as part of the workshop proceedings in the IEEE Digital Library. Extended versions of the accepted workshop papers will be invited for publication in Springer Cognitive Computation or IEEE Computational Intelligence Magazine (whichever matches the paper more closely).
Topics of Interest
The primary goal of the workshop is to investigate whether multimedia content, when fused with other modalities (e.g., contextual, crowd-sourced, and relationship information), can outperform unimodal multimedia systems (i.e., those using multimedia content alone). The broader context of the workshop encompasses Multimedia Information Processing (e.g., Natural Language Processing, Image Processing, Speech Processing, and Video Processing), Multimedia Embedding (e.g., Word Embedding and Image Embedding), Web Mining, Machine Learning, Deep Neural Networks, and AI. Topics of interest include but are not limited to:
Multimodal Multimedia Search, Retrieval and Recommendation
Multimodal Personalized Multimedia Retrieval and Recommendation
Multimodal Event Detection, Recommendation, and Understanding
Multimodal Multimedia based FAQ and QA Systems
Multimodal-based Diverse Multimedia Search, Retrieval and Recommendation
Multimodal Multimedia Content Analysis
Multimodal Semantic and Sentiment based Multimedia Analysis
Multimodal Semantic and Sentiment based Multimedia Annotation
Multimodal Semantic-based Multimedia Retrieval and Recommendation
Multimodal Sentiment-based Multimedia Retrieval and Recommendation
Multimodal Filtering, Time-Sensitive and Real-time Search of Multimedia
Multimodal Multimedia Annotation Methodologies
Multimodal Sentiment-based Multimedia Retrieval and Annotation
Multimodal Context-based Multimedia Retrieval and Annotation
Multimodal Location-based Multimedia Retrieval and Annotation
Multimodal Relationship-based Multimedia Retrieval and Annotation
Multimodal Mobile-based Retrieval and Annotation of Big Multimedia
Multimodal Multimedia Data Modeling and Visualization
Multimodal Feature Extraction and Learning for Multimedia Data Representation
Multimodal Multimedia Data Embedding
Multimodal Medical Multimedia Information Retrieval
Multimodal Subjectivity Detection and Extraction from Multimedia
Multimodal High-Level Semantic Features from Multimedia
Multimodal Information Fusion
Multimodal Affect Recognition
Multimodal Deep Learning in Multimedia and Multimodal Fusion
Multimodal Spatio-Temporal Multimedia Data Mining
Multimodal Multimedia based Massive Open Online Courses (MOOC)
Multimodal/Multisensor Integration and Analysis
Multimodal Affective and Perceptual Multimedia
Multimedia-based Education

Last modified: 2017-11-04 16:31:46