DTL 2020 - Third IEEE International Workshop on Deep and Transfer Learning (DTL2020)

Date: 2020-10-19 - 2020-10-22

Deadline: 2020-09-15

Venue: Valencia, Spain

Website: https://intelligenttech.org/DTL2020

Topics/Call for Papers

Deep learning approaches have driven tremendous advances in many areas of computer science. Deep learning is a branch of machine learning in which learning is carried out with deep and complex architectures, such as recurrent and convolutional artificial neural networks. It has been applied across computer science, including computer vision, speech recognition, natural language processing, sentiment analysis, social network analysis, and robotics.

The success of deep learning has also enabled learning paradigms such as reinforcement learning, in which learning proceeds purely by trial and error, driven solely by the rewards or punishments that actions receive. Deep reinforcement learning aims to create systems that can learn how to adapt in the real world. Because deep learning relies on deep and complex architectures, training is usually costly in time and effort and requires huge labeled data sets. This has inspired transfer and multi-task learning approaches that better exploit the data available during training and adapt previously learned knowledge to emerging domains, tasks, or applications. Although much research is ongoing in these areas, many challenges remain unsolved.

This workshop will bring together researchers working on deep learning, on the intersection of deep learning and reinforcement learning, and/or on using transfer learning to simplify deep learning, and it will help researchers with expertise in one of these fields to learn about the others. The workshop also aims to bridge the gap between theory and practice by giving researchers and practitioners the opportunity to share ideas and to discuss and critique current theories and results. We invite the submission of original papers on all topics related to deep learning, deep reinforcement learning, and transfer and multi-task learning, with special interest in, but not limited to, the following topics:
Deep learning for innovative applications such as machine translation and computational biology
Deep Learning for Natural Language Processing
Deep Learning for Recommender Systems
Deep learning for computer vision
Deep learning for systems and networks resource management
Optimization for Deep Learning
Deep Reinforcement Learning
Deep transfer learning for robots
Determining rewards for machines
Machine translation
Energy consumption issues in deep reinforcement learning
Deep reinforcement learning for game playing
Stabilizing learning dynamics in deep reinforcement learning
Scaling up prior reinforcement learning solutions
Deep Transfer and multi-task learning:
New perspectives or theories on transfer and multi-task learning
Dataset bias and concept drift
Transfer learning and domain adaptation
Multi-task learning
Feature-based approaches
Instance-based approaches
Deep architectures for transfer and multi-task learning
Transfer across different architectures, e.g. CNN to RNN
Transfer across different modalities, e.g. image to text
Transfer across different tasks, e.g. object recognition and detection
Transfer from weakly labeled or noisy data, e.g. Web data
Datasets, benchmarks, and open-source packages
Resource-efficient deep learning
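
As a concrete illustration of the transfer-learning setting described above, the sketch below fine-tunes a pretrained image classifier on a new task by freezing the pretrained feature extractor and training only a new classification head. It is a minimal example and not part of the call itself: the choice of PyTorch/torchvision, the ResNet-18 backbone, and the 10-class target task are assumptions made purely for illustration.

# Minimal transfer-learning sketch: reuse a backbone pretrained on a large
# source dataset and adapt it to a small target task. All specifics below
# (ResNet-18, 10 target classes, dummy data) are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_TARGET_CLASSES = 10  # hypothetical number of classes in the new domain

# Load a backbone pretrained on a large source dataset (e.g. ImageNet).
backbone = models.resnet18(pretrained=True)

# Freeze the pretrained feature extractor so its weights are not updated.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the classification head to match the target task; the new layer
# is trainable by default.
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_TARGET_CLASSES)

# Optimize only the parameters of the new head.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a placeholder batch.
images = torch.randn(8, 3, 224, 224)                  # dummy input batch
labels = torch.randint(0, NUM_TARGET_CLASSES, (8,))   # dummy labels

logits = backbone(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
optimizer.zero_grad()

In practice the frozen-backbone step above is often followed by unfreezing some or all layers and fine-tuning them at a smaller learning rate, which is one common way of adapting previously learned knowledge to a new domain with limited labeled data.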
