NMLO 2013 - WORKSHOP ON NOVEL METHODS FOR LEARNING AND OPTIMIZATION OF CONTROL POLICIES AND TRAJECTORIES FOR ROBOTICS
Topics/Call for Papers
Current challenges require robots to automatically generate and control a wide range of motions in order to be flexible and adaptive in uncertain and changing environments. However, anthropomorphic robots with many degrees of freedom are complex dynamical systems, and generating and controlling motions for such systems are very demanding tasks. Cost functions appear to be the most succinct way of describing desired behavior without over-specification, and they appear to underlie human movement generation in pointing/reaching movements as well as locomotion. Common cost functions in robotics include goal achievement, minimization of energy consumption, and minimization of time (a generic formulation is sketched after the topic list below). A myriad of approaches have been suggested to obtain control policies and trajectories that are optimal with respect to such cost functions. However, to date, it remains an open question which algorithms are best suited for designing or learning optimal control policies and trajectories in robotics. The goal of this workshop is to gather researchers working in robot learning with researchers working in optimal control, in order to give an overview of the state of the art and to discuss how both fields could learn from each other and potentially join forces to work on improved motion generation and control methods for the robotics community. Some of the core topics are:
- State-of-the-art methods in model-based optimal control and model predictive control for robotics, as well as inverse optimal control
- State-of-the-art methods in robot learning: model learning, imitation learning, reinforcement learning, inverse reinforcement learning, etc.
- Shared open questions in both reinforcement learning and optimal control approaches
- How could methods from optimal control and machine learning be combined?
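As a point of reference for the cost functions mentioned above, the following is a minimal sketch of a generic optimal control problem; the notation is illustrative and not prescribed by the workshop. A running cost \ell and a terminal cost \Phi are minimized subject to the system dynamics f:

\begin{align*}
  \min_{u(\cdot)} \quad & \int_{0}^{T} \ell\bigl(x(t), u(t)\bigr)\, dt \;+\; \Phi\bigl(x(T)\bigr) \\
  \text{subject to} \quad & \dot{x}(t) = f\bigl(x(t), u(t)\bigr), \qquad x(0) = x_0.
\end{align*}

The common cases above arise as special instances: minimum energy corresponds to \ell = \|u\|^2, minimum time to \ell = 1 with a free horizon T, and goal achievement to a terminal cost \Phi penalizing the distance to the goal state.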
FORMAT
The workshop will consist of presentations, posters, and panel discussions. Topics to be addressed include, but are not limited to:
- How far can optimal control approaches based on analytical models go?
- When using learned models, will optimization biases be increased or reduced?
- Can a mix of analytical and learned models help?
- Can a full Bayesian treatment of model errors ensure high performance in general?
- What are the advantages and disadvantages of model-free and model-based approaches?
- How does real-time optimization / model predictive control relate to learning?
- Is it easier to optimize a trajectory or a control policy? (A minimal trajectory-optimization sketch follows this list.)
- Which can be represented with fewer parameters?
- Is it easier to optimize a trajectory/control policy directly in parameter space, or to first compute a value function and then extract the policy in subsequent backward steps?
- Does learning a model (to be used in optimal control or model-based reinforcement learning) require less data than directly learning an optimal control policy?
- What applications in robotics are better suited for model-based, model-learning and model-free approaches?
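As a concrete illustration of the trajectory side of the question above, the following minimal sketch optimizes a control trajectory directly in parameter space by single shooting; the 1-D point-mass dynamics, the Euler integration, the cost weights, and the use of scipy.optimize are all illustrative assumptions, not methods endorsed by the workshop.

# A minimal single-shooting trajectory optimization sketch (illustrative):
# the decision variables are the control inputs themselves, one per time step.
import numpy as np
from scipy.optimize import minimize

dt, T = 0.05, 40            # integration step and number of control steps
x_goal = 1.0                # desired final position (illustrative)

def rollout(u):
    """Simulate a 1-D point mass (x'' = u) under the control sequence u."""
    pos, vel = 0.0, 0.0
    for u_t in u:
        pos += dt * vel
        vel += dt * u_t
    return pos, vel

def cost(u):
    """Running control effort plus a terminal cost for reaching the goal at rest."""
    pos, vel = rollout(u)
    effort = dt * np.sum(u ** 2)
    terminal = 100.0 * ((pos - x_goal) ** 2 + vel ** 2)
    return effort + terminal

result = minimize(cost, np.zeros(T), method="L-BFGS-B")
print("final (position, velocity):", rollout(result.x))

A policy-based alternative would instead parameterize a feedback law u = pi(x; theta) and optimize theta; the questions above ask which of the two representations needs fewer parameters and less data.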
All of these questions are of crucial importance for furthering the state of the art in both optimal control and robot reinforcement learning. By addressing them, the workshop aims to help both fields learn from each other and potentially join forces toward improved motion generation and control methods for the robotics community.
IMPORTANT DATES
March 15 - Deadline for poster submissions
March 20 - Notification of poster acceptance
SUBMISSIONS
Extended abstracts (1 page) will be reviewed by the program committee members on the basis of relevance, significance, and clarity. Accepted contributions will be presented as posters, but particularly exciting work may be considered for talks. Submissions should be formatted according to the conference templates and submitted via email to neumann-AT-ias.tu-darmstadt.de.
ORGANIZERS
Katja Mombaur, Universitaet Heidelberg
Gerhard Neumann, Technische Universitaet Darmstadt
Martin Felis, Universitaet Heidelberg
Jan Peters, Technische Universitaet Darmstadt and Max Planck Institute for Intelligent Systems
LOCATION AND MORE INFORMATION
The most up-to-date information about the workshop can be found on the ICRA 2013 webpage.