HSLR 2013 - Workshop: Hierarchical and Structured Learning for Robotics
Topics/Call for Papers
Learning robot control policies in complex real-world environments is a major challenge for machine learning due to the inherent high dimensionality, partial observability and high costs of data generation. Treating robot learning as a monolithic machine learning problem and employing off-the-shelf approaches is unrealistic at best. However, the physical world can yield important insights into the inherent structure of control policies, state or action spaces and reward functions. Many robot motor tasks are also hierarchically structured decision tasks: a tennis-playing robot, for example, has to combine different striking movements sequentially, and during locomotion at least three behaviors are simultaneously active, as the robot has to combine gait generation with foot placement and balance control. First domain-driven skill learning approaches have already yielded impressive successes by incorporating such structural insights into the learning process. Hence, a promising route to more scalable policy learning is the automatic exploitation of the environment's structure, resulting in new structured learning approaches for robot control.
Structured and hierarchical learning has been an important trend in machine learning in recent years. In robotics, researchers have often naturally arrived at well-structured hierarchical policies based on discrete-continuous partitions (e.g., defining local movement generators as well as a prioritized operational space controller for combining them), with nested control loops running at several different speeds (i.e., fast control loops for smooth and accurate movement execution, slower loops for model-predictive planning). Furthermore, evidence from the cognitive sciences indicates that humans also heavily exploit such structures and hierarchies. Although such structures have been found in human motor control, are favored in robot control and exist in machine learning, the connections between these fields have not been well explored. Transferring insights from structured prediction methods, which exploit the inherent correlations in the data, to hierarchical robot skill learning may be a crucial step. General approaches for bringing structured policies, states, actions and rewards into robot reinforcement learning may well be the key to tackling many challenges of real-world robot environments, and an important step towards the vision of intelligent autonomous robots that can learn rich and versatile sets of motor skills. This workshop aims to reveal how complex motor skills typically exhibit structures that can be exploited for learning reward functions and for finding structure in the state or action space.
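As a purely illustrative sketch of such a discrete-continuous partition (all class names and parameter values below are hypothetical, not taken from any particular system), the following Python snippet shows a two-level policy: a slow, discrete gating policy selects among continuous movement generators, each of which runs a fast inner control loop:

    import numpy as np

    # Hypothetical two-level policy: a slow, discrete gating policy chooses a
    # movement generator; the chosen generator produces continuous commands
    # in a fast inner loop. All names and values are illustrative.

    class LinearGenerator:
        """A continuous low-level behavior: proportional control to a target."""
        def __init__(self, gain, target):
            self.gain, self.target = gain, target

        def action(self, state):
            return self.gain * (self.target - state)

    class GatingPolicy:
        """A discrete high-level policy: softmax over per-generator scores."""
        def __init__(self, weights):
            self.weights = weights  # one score vector per generator

        def select(self, state, rng):
            scores = self.weights @ state
            probs = np.exp(scores - scores.max())
            probs /= probs.sum()
            return rng.choice(len(probs), p=probs)

    rng = np.random.default_rng(0)
    generators = [LinearGenerator(2.0, np.array([1.0, 0.0])),
                  LinearGenerator(2.0, np.array([0.0, 1.0]))]
    gate = GatingPolicy(rng.normal(size=(2, 2)))

    state = np.zeros(2)
    for t in range(20):
        if t % 5 == 0:                   # slow loop: re-select a behavior
            k = gate.select(state, rng)
        u = generators[k].action(state)  # fast loop: continuous command
        state = state + 0.1 * u          # toy single-integrator dynamics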
In order to make progress towards the goal of structured learning for robot control, this workshop is aimed at researchers from different machine learning areas (such as reinforcement learning and structured prediction), robotics, and related disciplines (e.g., control engineering and the cognitive sciences).
We particularly want to focus on the following important topics for structured robot learning, which overlap substantially across several of these fields:
Efficient representations and learning methods for hierarchical policies
Learning in several layers of hierarchy
Structured representations for motor control and planning
Skill extraction and skill transfer
Sequencing and composition of behaviors
Hierarchical Bayesian Models for decision making and efficient transfer learning
Low-dimensional manifolds as structured representations for decision making
Exploiting correlations in the decision making process
Prioritized control policies in a multi-task reinforcement learning setup
These challenges are important steps towards building intelligent autonomous robots and may motivate new research topics in the related fields.
Format
The aim of this workshop is to bring together researchers who are interested in structured representations, reinforcement learning, hierarchical learning methods and control architectures. Among these general topics, we will focus on the following questions:
Structured representations:
How to efficiently use graphical models such as Markov random fields to exploit correlations in the decision making process?
How to extract the relevant structure (e.g., low-dimensional manifolds, factorizations) from the state and action space? (A toy sketch follows this list.)
Can we efficiently model structure in the reward function or the system dynamics?
How to learn good features for the policy or the value function?
What can we learn from structured prediction?
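As a toy illustration of the manifold-extraction question above, the following sketch recovers a low-dimensional linear structure from synthetic state samples with PCA; it is a minimal linear example, and nonlinear manifold learners would be the natural extension:

    import numpy as np

    # Toy linear manifold extraction: high-dimensional states that actually
    # lie near a 2-D subspace are recovered with PCA. Data are synthetic.

    rng = np.random.default_rng(3)
    latent = rng.normal(size=(500, 2))              # 2-D "true" structure
    mixing = rng.normal(size=(2, 10))
    states = latent @ mixing + 0.05 * rng.normal(size=(500, 10))

    centered = states - states.mean(axis=0)
    _, sing_vals, vt = np.linalg.svd(centered, full_matrices=False)
    explained = sing_vals**2 / np.sum(sing_vals**2)
    print("variance explained by two components:", explained[:2].sum())
    low_dim_states = centered @ vt[:2].T            # structured features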
Representations of behavior:
What are good representations for motor skills? (One common choice is sketched after this list.)
How can we efficiently reuse skills in new situations?
How can we extract movement skills and elemental movements from demonstrations?
How can we compose skills to solve a combination of tasks?
How can we represent versatile motor skills?
How can we represent and exploit the correlations over time in the decision process?
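One widely used answer to the first question above is the dynamic movement primitive (DMP). The sketch below is a minimal one-dimensional discrete DMP in the common Ijspeert-style formulation; all parameter values are illustrative assumptions, not taken from the workshop:

    import numpy as np

    # Minimal one-dimensional discrete dynamic movement primitive (DMP):
    # a stable attractor toward goal g, shaped by a learnable forcing term.
    # Parameter values are illustrative, not from any specific paper.

    def run_dmp(w, centers, widths, y0=0.0, g=1.0, tau=1.0,
                alpha=25.0, beta=6.25, alpha_s=3.0, dt=0.01, T=1.0):
        y, v, s = y0, 0.0, 1.0           # position, velocity, phase
        traj = []
        for _ in range(int(T / dt)):
            psi = np.exp(-widths * (s - centers) ** 2)  # RBF basis on phase
            f = s * (g - y0) * (psi @ w) / (psi.sum() + 1e-10)
            v += dt * (alpha * (beta * (g - y) - v) + f) / tau
            y += dt * v / tau
            s += dt * (-alpha_s * s / tau)              # canonical system
            traj.append(y)
        return np.array(traj)

    # Zero weights give a plain goal-directed attractor; in practice the
    # weights w would be fit to a demonstration by regression on f.
    centers = np.linspace(1.0, 0.01, 10)
    widths = np.full(10, 50.0)
    trajectory = run_dmp(np.zeros(10), centers, widths)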
Structured Control:
How to efficiently use structured representations for planning and control?
Can we learn task priorities and use policies similar to those in task-prioritized control? (A minimal example follows this list.)
How to decompose optimal control laws into elemental movements?
How to use low-dimensional manifolds to control high-dimensional, redundant systems?
Can we use chain or tree-like structures as policy representation to mimic the kinematic structure of the robot?
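As a concrete reference point for the task-priority question above, the following sketch implements the classical two-task null-space projection from redundancy resolution; the Jacobians and task-space velocities are placeholder values:

    import numpy as np

    # Classical two-task prioritized control via null-space projection: the
    # secondary task acts only in the null space of the primary task.
    # Jacobians and task-space velocities are placeholder values.

    def prioritized_velocities(J1, dx1, J2, dx2):
        J1_pinv = np.linalg.pinv(J1)
        N1 = np.eye(J1.shape[1]) - J1_pinv @ J1     # null-space projector
        dq1 = J1_pinv @ dx1                         # primary task solution
        # Secondary task resolved in the remaining redundancy:
        return dq1 + np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ dq1)

    J1 = np.array([[1.0, 0.5, 0.2]])                # primary task Jacobian
    J2 = np.array([[0.0, 1.0, 0.0]])                # secondary task Jacobian
    dq = prioritized_velocities(J1, np.array([0.1]), J2, np.array([0.05]))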
Hierarchical Learning Methods:
How can we efficiently apply abstractions to the control problem? (A minimal options-style sketch follows this list.)
How to efficiently learn at several layers of hierarchy?
Which policy search algorithms are appropriate for which hierarchical representation?
Can we use hierarchical inverse reinforcement learning to acquire skill reward functions, and priors over selecting those skills?
How can we decide when to create new skills or re-use known ones?
How can we discover and generalize important sub-goals in our movement plan?
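To make learning with temporal abstractions concrete, here is a minimal execution loop in the spirit of the options framework for hierarchical reinforcement learning; the environment and skills are toy stand-ins, not a proposed method:

    import numpy as np

    # Minimal options-style execution loop: a high-level policy picks a
    # temporally extended skill (option), which runs its own low-level
    # policy until its termination condition fires. Everything is a toy.

    class Option:
        def __init__(self, target):
            self.target = target

        def policy(self, state):
            return np.sign(self.target - state)     # unit step toward target

        def terminates(self, state):
            return abs(state - self.target) < 0.5

    def high_level_policy(state, options, rng):
        return rng.integers(len(options))           # random selection (toy)

    rng = np.random.default_rng(1)
    options = [Option(-5.0), Option(5.0)]
    state = 0.0
    for _ in range(10):                             # slow, option-level loop
        opt = options[high_level_policy(state, options, rng)]
        while not opt.terminates(state):            # fast, primitive loop
            state += opt.policy(state)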
Skill Transfer:
How can we efficiently transfer skills to new situations?
Can we use hierarchical Bayesian models to learn at several layers of abstraction in decision making as well? (A toy sketch follows this list.)
How to transfer learned models or even value functions to new tasks?
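As a toy illustration of the hierarchical Bayesian question above, the sketch below shrinks per-task skill parameters toward a shared mean learned across tasks (simple Gaussian-Gaussian shrinkage); all numbers are synthetic:

    import numpy as np

    # Toy Gaussian-Gaussian shrinkage across tasks: per-task skill parameters
    # are pulled toward a shared mean estimated from all tasks, so a new task
    # with few samples borrows strength from related ones. Data are synthetic.

    rng = np.random.default_rng(2)
    task_params = 2.0 + rng.normal(0.0, 0.5, size=5)            # 5 tasks
    samples = [p + rng.normal(0.0, 1.0, size=n)                 # noisy rollouts
               for p, n in zip(task_params, [50, 50, 50, 50, 3])]

    shared_mean = np.mean([s.mean() for s in samples])  # empirical prior mean
    tau2, sigma2 = 0.25, 1.0                            # assumed variances

    for i, s in enumerate(samples):
        n = len(s)
        # Posterior mean: precision-weighted blend of prior and task data.
        post = (shared_mean / tau2 + s.sum() / sigma2) / (1 / tau2 + n / sigma2)
        print(f"task {i}: n={n:2d}, raw={s.mean():+.2f}, shrunk={post:+.2f}")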
Important Dates
June 1st - Deadline for poster submissions
June 4th - Notification of poster acceptance
Submissions
Extended abstracts (1 page) will be reviewed by the program committee on the basis of relevance, significance, and clarity. Accepted contributions will be presented as posters, but particularly exciting work may be considered for a talk. Submissions should be formatted according to the conference templates and submitted via email to neumann-AT-ias.tu-darmstadt.de.
Organizers
Gerhard Neumann, Technische Universitaet Darmstadt
George Konidaris, MIT Computer Science and Artificial Intelligence Laboratory
Freek Stulp, ENSTA - ParisTech
Jan Peters, Technische Universitaet Darmstadt and Max Planck Institute for Intelligent Systems
Location and more information
The most up-to-date information about the workshop can be found on the RSS 2013 webpage.