2011 - Workshop: New Developments in Imitation Learning

Date: 2011-07-02
Deadline: 2011-04-29
Venue: Bellevue, Washington, USA
Website: https://icml-2011.org

Topics/Call for Papers

Quick Facts
Organizers: Abdeslam Boularias, Brian Ziebart, Jan Peters
Conference: ICML 2011
Date: Saturday, July 2, 2011
Room: TBA
Location: Bellevue, Washington, USA
Website: http://www.robot-learning.de/Research/ICML2011
Abstract
From a very early age, humans learn many new skills by observing others. This paradigm, known as imitation learning or learning from demonstration, is an important topic for many research fields, including psychology, neuroscience, artificial intelligence, and robotics. From a machine learning point of view, imitation learning is a supervised learning approach to solving control and sequential decision-making problems. This approach is often preferred to fully autonomous learning, as it avoids unnecessary and hazardous exploration. Consequently, most successful applications of machine learning in robotics incorporate some form of imitation learning. Within the imitation learning community, relevant lines of research may be classified into the following sub-fields:
Direct imitation, where the problem of generalizing from the provided examples is typically reduced to a supervised learning problem, without making assumptions about the teacher's intent (a short illustrative sketch follows this list).
Inverse optimal control, where the teacher is assumed to be maximizing a certain reward function, and the goal of the learner is to find the simplest reward function that explains the teacher's behavior.
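As a rough illustration of the first sub-field, the Python sketch below casts direct imitation (behavioral cloning) as an ordinary regression problem from states to the teacher's actions. The demonstration data, feature dimensions, and choice of regressor are hypothetical and chosen only for illustration; they do not correspond to any particular workshop contribution.

    # Minimal behavioral-cloning sketch: direct imitation reduces to
    # supervised learning from demonstrated state-action pairs.
    # All data and model choices below are illustrative assumptions.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)

    # Hypothetical expert demonstrations: each row of `states` is a state
    # observation; `actions` holds the action the teacher took in that state.
    states = rng.normal(size=(500, 4))
    actions = states @ np.array([0.5, -0.2, 0.1, 0.3]) + 0.05 * rng.normal(size=500)

    # Fit a policy mapping states to actions, without modeling the teacher's
    # underlying reward (by contrast, inverse optimal control would recover a
    # reward function that explains the demonstrated behavior).
    policy = Ridge(alpha=1.0).fit(states, actions)

    # The learned policy can then be queried on states encountered at run time.
    new_state = rng.normal(size=(1, 4))
    print(policy.predict(new_state))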
Imitation learning lies at the intersection of many fields of machine learning. It corresponds to a complex, large-scale optimization problem that can be formalized in different ways, for example as a structured output prediction problem or as a semi-supervised learning problem. It is also intimately related to reinforcement learning, online and active learning, multi-agent learning, and feature construction.
The workshop is supported by the PASCAL2 Thematic Programme on Machine Learning for Autonomous Skill Acquisition in Robotics and the IEEE RAS Technical Committee on Robot Learning.
Format
Our goal is to provide an overview of state-of-the-art techniques and applications of imitation learning, while bringing together researchers who have worked on imitation learning with researchers from other areas of statistical machine learning, in order to bring new statistical learning techniques to bear on imitation learning. The workshop will consist of presentations, posters, and panel discussions. Topics to be addressed include, but are not limited to:
Which modern statistical learning techniques can be used for scaling imitation learning to more complex tasks?
How can we use recent advances in graphical models to separate the model learning step from the policy learning step?
How can we use recent advances in deep learning for hierarchical imitation learning?
How do different approaches of imitation learning compare to each other? In particular, what advantages of inverse optimal control over behavioral cloning remain for small data sets?
Can the essential correspondence problem in imitation learning be phrased in terms of statistical similarity?
How can the intention of the expert be recognized and reasoned about by the learner?
Which probabilistic models can be used for representing a theory of mind?
What are the advantages of imitation learning over reinforcement learning?
How can we improve the policies acquired through imitation learning by trial and error?
How can we learn by imitation in a partially observable environment?
How can we learn by imitation given very few examples? Can semi-supervised imitation learning give us the edge?
Which transfer learning techniques can be used for solving the correspondence problem?
Can we learn good policies from a bad expert?
Are there general methods for automatically extracting useful features for imitation learning?
How can we combine demonstrations provided by different experts?
Which probabilistic models provide a compact and representative view of a given task?
What are the biological foundations of imitation learning?
Call for Posters
The field of imitation learning has grown dramatically over the past decade in many different ways: in terms of newly developed algorithms, new successful applications, and new scientific challenges for understanding both the computational and the neuronal aspects of imitation. Moreover, imitation is a complex learning problem related to many fields of machine learning, including supervised and semi-supervised learning, learning with structured data, transfer learning, reinforcement learning, multi-agent learning, and online learning.
The workshop will have an awesome set of invited speakers including Drew Bagnell, Pieter Abbeel, Aude Billard, Emo Todorov, Umar Syed, and Manuel Lopes.
For this workshop, we are seeking researchers who want to present high-quality recent or ongoing work on all aspects of imitation learning. Both theoretical and applied work is solicited. An extended abstract suffices for a poster submission. Additionally, we welcome position papers, as well as papers discussing potential future research directions.
Submissions and Publication
Both extended abstracts and position/future research papers will be reviewed by program committee members on the basis of relevance, significance, and clarity. Accepted contributions will be presented as posters but particularly exciting work may be considered for talks. Submissions should be formatted according to the conference templates and submitted via email to imitation.learning.icml2011-AT-googlemail.com.
Important Dates
April 29 - Submission deadline
May 20 - Notification of acceptance
July 2 - Workshop
Organizing Committee
Abdeslam Boularias, Max Planck Institute for Biological Cybernetics (Primary contact)
Brian Ziebart, Robotics Institute of Carnegie Mellon University
Jan Peters, Max Planck Institute for Biological Cybernetics
Program
TBA
Participants
This workshop will bring together researchers from different areas of machine learning in order to explore how to approach new topics in imitation learning. Attendees are encouraged to participate actively by responding with questions and comments about the talks.
Confirmed Invited Speakers
Drew Bagnell (Carnegie Mellon University)
Pieter Abbeel (University of California, Berkeley)
Aude Billard (EPFL Lausanne)
Umar Syed (University of Pennsylvania)
Chadwicke Jenkins (Brown University)
Sethu Vijayakumar (University of Edinburgh)
Emo Todorov (University of Washington)
Rajesh Rao (University of Washington)
Marc Toussaint (Free University of Berlin)
Manuel Lopes (INRIA)
Organizers
The workshop is organized by Abdeslam Boularias and Jan Peters from the Max Planck Institute for Biological Cybernetics as well as by Brian Ziebart from the Robotics Institute of Carnegie Mellon University, PA, USA.
Location and More Information
The most up-to-date information about ICML 2011 workshops can be found on the ICML website.
