
TiRL 2017 - 1st Workshop on Transfer in Reinforcement Learning (TiRL)

Date: 2017-05-08

Deadline: 2017-02-14

Venue: São Paulo, SP, Brazil

Website: https://www.tirl.info

Topics/Call for Papers

Reinforcement Learning (RL) has achieved many successes over the years in training autonomous agents to perform simple tasks. However, learning a solution typically takes a long time, and the resulting solution can usually be applied only to one specific task in a fixed setting.
Therefore, one of the major challenges in RL is to build intelligent agents that are able to transfer previously acquired knowledge to new tasks, or to transfer knowledge between agents in multiagent RL systems. In this context, Transfer Learning (TL) describes an increasingly popular approach to accelerating learning by reusing and adapting knowledge.
Although there has already been some work on transfer for RL, the topic is gaining new interest through the rise of more sophisticated RL methods, which open up new possibilities. Currently, no general method exists that can learn autonomously what and how to transfer without additional background information about the task or the environment.
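
As a simple illustration of the kind of knowledge reuse described above, the Python sketch below warm-starts tabular Q-learning on a target task with Q-values learned on a related source task. This is only one possible form of transfer, written under assumed interfaces; the environments (SourceEnv, TargetEnv), their reset/step/actions interface, and the map_state_action function are hypothetical and not part of this call.

# Minimal sketch (assumptions noted above): transfer via Q-value initialization.
import random
from collections import defaultdict

def q_learning(env, episodes, q=None, alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Q-learning; `q` may be pre-filled with transferred values."""
    q = defaultdict(float, q or {})
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            # Epsilon-greedy action selection over the (assumed) env.actions list.
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: q[(state, a)])
            next_state, reward, done = env.step(action)
            # Standard Q-learning update; bootstrap only on non-terminal states.
            best_next = 0.0 if done else max(q[(next_state, a)] for a in env.actions)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q

# Hypothetical usage: learn on the source task, map its Q-values onto the
# target task's state-action space, then continue learning from that warm start.
# q_source = q_learning(SourceEnv(), episodes=500)
# q_init = {map_state_action(sa): v for sa, v in q_source.items()}
# q_target = q_learning(TargetEnv(), episodes=500, q=q_init)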
Aims
====
This workshop encourages the discussion of diverse approaches to accelerating and generalizing RL with knowledge transfer. Our goal is to stimulate the investigation of different approaches to TL and thereby move closer to a general, unifying approach to TL in RL. Scaling up RL methods with TL could have major implications for research and practice on complex learning problems and could eventually lead to successful implementations in real-world applications.
We aim to bring together researchers working on different aspects of improving TL for RL, with the goal of solving more complex problems more efficiently. We intend to make this an exciting event for researchers worldwide, not only through the presentation of top-quality papers, but also by sparking discussion of opportunities and challenges for future research directions.
Topics of Interest
===
Papers submitted to the event should address topics related to autonomous agents or multi-agent systems, including (but not limited to):
- Transfer Learning in single-agent Reinforcement Learning
- Transfer Learning in multiagent Reinforcement Learning
- Transfer in Deep Reinforcement Learning
- Skill / Behaviour Transfer in Reinforcement Learning
- Human-guided Transfer in Reinforcement Learning
- Multi-task Reinforcement Learning
- Reinforcement Learning in Lifelong Machine Learning
- Novel benchmarks for Transfer in Reinforcement Learning
- Transfer in Multiobjective Reinforcement Learning
- Transfer from Inverse Reinforcement Learning
- Abstractions for Transfer in Reinforcement Learning
- Real-world applications for Transfer in Reinforcement Learning
Paper Submission
===
We invite you to submit papers that combine transfer learning with reinforcement learning. The deadline for submission is February 14, 2017 (23:59 UTC -12:00), and decisions will be sent out on March 7, 2017.
Authors are encouraged to submit to any of the following categories:
- Research Paper: Same format as for the main AAMAS conference. Papers must be 5-8 pages in length, with any additional pages containing only bibliographic references. The authors are expected to present a contribution to the field.
- Short Paper: Papers must be 2 pages in length, plus 1 page for references. The authors should either provide an extended abstract of ongoing work relevant to the workshop, or a highlight paper summarizing a full paper that has been published or accepted for publication at most 1 year before the workshop deadline.
Submissions should be in the AAMAS-17 format. The review process is double-blind and each submitted paper will be reviewed by at least two reviewers. Papers will be judged according to the chosen category, significance to the workshop, proposal quality, and clarity.
Papers must be submitted through EasyChair. For any questions, please send an email to organization-AT-tirl.info.
Invited Speakers (Partial List)
===
Jivko Sinapov (University of Texas at Austin, USA)
"Curriculum Construction for RL Agents"
Committees
===
Workshop Chairs:
- Anna Helena Reali Costa (University of São Paulo, Brazil)
- Doina Precup (McGill University, Canada)
- Manuela Veloso (Carnegie Mellon University, USA)
- Matthew Taylor (Washington State University, USA)
Local Organizing Committee:
- Felipe Leno da Silva (University of São Paulo, Brazil)
- Ruben Glatt (University of São Paulo, Brazil)
Program Committee:
Bo An (Nanyang Technological University, Singapore)
Haitham Bou Ammar (Princeton University, USA)
Ana Lúcia Bazzan (Federal University of Rio Grande do Sul, Brazil)
Reinaldo Bianchi (Centro Universitário FEI, Brazil)
Jesse Davis (Katholieke Universiteit Leuven, Belgium)
Sam Devlin (University of York, UK)
Eric Eaton (University of Pennsylvania, USA)
Valdinei Freire (University of São Paulo, Brazil)
Ruben Glatt (University of São Paulo, Brazil)
Andrey Kolobov (Microsoft Research, USA)
George Konidaris (Brown University, USA)
Alessandro Lazaric (INRIA Lille-Nord Europe, France)
Matteo Leonetti (University of Texas at Austin, USA)
Francisco Melo (Instituto Superior Técnico, Portugal)
Sanmit Narvekar (University of Texas at Austin, USA)
Ann Nowé (Vrije Universiteit Brussel, Belgium)
Philippe Preux (INRIA, Université de Lille, France)
Ramya Ramakrishnan (Massachusetts Institute of Technology, USA)
Felipe Leno da Silva (University of São Paulo, Brazil)
Bruno Castro da Silva (Federal University of Rio Grande do Sul, Brazil)
Jivko Sinapov (University of Texas at Austin, USA)
Lisa Torrey (St. Lawrence University, USA)
Peter Vrancx (Vrije Universiteit Brussel, Belgium)
