
SSBRL 2014 - In Search of Synergies Between Reinforcement Learning and Evolutionary Computation

Date: 2014-09-13 - 2014-09-17

Deadline: 2014-03-17

Venue: Ljubljana, Slovenia


Website: http://ai.vub.ac.be/news/ppsn-2014

Topics/Call for Papers

A recent trend in machine learning is the transfer of knowledge from one area to another. In this workshop we focus on potential synergies between reinforcement learning (RL) and evolutionary computation (EC): RL addresses sequential decision problems in an initially unknown stochastic environment but typically requires substantial computational resources, while the main strengths of EC are its general applicability and computational efficiency. Although at first sight the two seem very different, they address essentially the same problem: maximizing an agent's reward in a potentially unknown environment that is not always completely observable. These methods may therefore benefit from an exchange of ideas, resulting in better theoretical understanding and/or empirical efficiency.
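To make the shared problem formulation concrete, the following minimal Python sketch shows the propose/observe/adapt loop that both an RL agent and an evolutionary optimizer can be seen as instances of. The Environment class and the random-perturbation agent are illustrative assumptions, not part of this call:

import random

class Environment:
    """A toy stochastic environment: hidden optimum, noisy scalar reward."""
    def __init__(self):
        self._optimum = 3.0  # unknown to the agent

    def step(self, action):
        # Reward is higher the closer the action is to the hidden optimum.
        return -(action - self._optimum) ** 2 + random.gauss(0.0, 0.1)

def run(agent_propose, episodes=100):
    """The loop both RL and EC instantiate: propose, observe reward, adapt."""
    env = Environment()
    best_action, best_reward = None, float("-inf")
    for _ in range(episodes):
        action = agent_propose(best_action)
        reward = env.step(action)
        if reward > best_reward:
            best_action, best_reward = action, reward
    return best_action, best_reward

# A trivial "agent": random perturbation of the best action seen so far.
print(run(lambda best: random.gauss(best if best is not None else 0.0, 1.0)))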
Reinforcement learning is considered the most general online/offline learning technique that has to trade off long-term against short-term rewards. RL has been successfully applied in disciplines such as game theory, robot control, control theory, and operations research. Online reinforcement learning involves finding a balance between exploration of unknown regions of the environment and exploitation of current knowledge. The exploration vs. exploitation tradeoff has been most thoroughly studied in the multi-armed bandit problem (MAB) and in Markov decision processes (MDPs). The MAB problem is a simplified theoretical framework for reinforcement learning with a single state; MDPs are a mathematical framework for modeling decision making in situations where outcomes are partly under the control of the decision maker and partly random. Techniques that speed up exploration in reinforcement learning while preserving its convergence properties still need to be designed.
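As an illustration of the exploration vs. exploitation tradeoff on the MAB problem, here is a minimal epsilon-greedy sketch in Python. The Gaussian arm rewards, the value of epsilon, and the horizon are illustrative assumptions, not taken from the call:

import random

def epsilon_greedy_bandit(true_means, epsilon=0.1, steps=1000):
    """Play a k-armed bandit: explore with prob. epsilon, else exploit."""
    k = len(true_means)
    counts = [0] * k        # pulls per arm
    estimates = [0.0] * k   # running mean reward per arm
    total = 0.0
    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(k)                          # explore
        else:
            arm = max(range(k), key=lambda a: estimates[a])    # exploit
        reward = random.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        # Incremental update of the sample mean for the pulled arm.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return estimates, total

estimates, total = epsilon_greedy_bandit([0.1, 0.5, 0.9])
print(estimates, total)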
Evolutionary computation is an efficient but essentially offline technique for global optimization, inspired by nature, that explores and exploits the search space. Nowadays, however, EC algorithms for dynamic and uncertain environments are emerging. Their settings differ from RL's, even though the environments in evolutionary algorithms often are MDPs or resemble them. Dynamic EC algorithms often update their parameters, e.g. the genetic operators, at certain moments in time in order to track the optimum or to find a robust solution that performs well in the presence of uncertainty. This adaptation step is computationally challenging, only quasi-dynamic, and may lead to inaccurate results, posing serious challenges to conventional EC algorithms, which are not conceptually designed to handle environmental changes.
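A minimal sketch of such a dynamic EC algorithm, assuming a one-dimensional sphere function whose optimum drifts over time and a 1/5th-success-rule-style step-size adaptation; all constants here are illustrative choices, not prescribed by the workshop:

import random

def one_plus_one_es(steps=2000, drift=0.01):
    """(1+1)-ES minimising f(x) = (x - target)^2 while the target drifts."""
    x, sigma = 0.0, 1.0  # current solution and mutation step size
    target = 5.0         # moving optimum, unknown to the algorithm
    for _ in range(steps):
        target += drift  # the environment changes: the optimum moves
        child = x + random.gauss(0.0, sigma)
        if (child - target) ** 2 <= (x - target) ** 2:
            x = child
            sigma *= 1.5       # success: widen the search
        else:
            sigma *= 0.9       # failure: narrow the search
        sigma = max(sigma, 1e-3)  # keep enough step size to track the optimum
    return x, target

x, target = one_plus_one_es()
print("solution:", x, "current optimum:", target)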
There are already a few examples that exploit the potential synergy between EC and RL. One is multi-objective reinforcement learning, a variant of RL that uses multiple reward signals instead of a single one; techniques from multi-objective EC are used there to improve the exploration-exploitation tradeoff. An example in the other direction is adaptive operator selection: choosing the best genetic operator is similar to the problem of an RL agent that has to choose among alternatives while maximizing its cumulative expected reward.
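To make this second direction concrete, operator selection can be framed as a MAB and solved, for instance, with UCB1. The sketch below is an assumption-laden illustration (the reward is taken here to be the fitness improvement produced by the chosen operator), not a method prescribed by the workshop:

import math

class UCBOperatorSelector:
    """Treat each genetic operator as a bandit arm; reward = fitness gain."""
    def __init__(self, n_ops):
        self.counts = [0] * n_ops    # times each operator was applied
        self.values = [0.0] * n_ops  # mean observed fitness improvement
        self.t = 0                   # total number of selections

    def select(self):
        self.t += 1
        for op, c in enumerate(self.counts):
            if c == 0:
                return op  # try every operator at least once
        # UCB1: mean value plus an exploration bonus for rarely used operators.
        return max(range(len(self.counts)),
                   key=lambda i: self.values[i]
                   + math.sqrt(2 * math.log(self.t) / self.counts[i]))

    def update(self, op, reward):
        self.counts[op] += 1
        # Incremental mean update for the chosen operator's reward.
        self.values[op] += (reward - self.values[op]) / self.counts[op]

Inside an evolutionary algorithm, select() would pick the operator used to generate offspring and update() would feed back the resulting fitness improvement.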
Aim and scope
The main goal of this workshop is to solicit research and to start a discussion on potential synergies between RL and EC. We want to bring together researchers from machine learning, optimization, and artificial intelligence who are interested in searching difficult environments that may moreover be dynamic, uncertain, and partially observable. We also encourage submissions describing applications of EC and RL to games, neural networks, and other real-world problems.
Ideally, this workshop will help researchers with a background in either RL or EC to find synergies between their work as well as new challenges and ideas.
Topics of interest
Topics of interest include but are not limited to:
Reinforcement learning using evolutionary algorithms or techniques,
Optimization algorithms including meta-heuristics, evolutionary algorithms, etc. for dynamic and uncertain environments,
Theoretical results on the learnability in dynamic and uncertain environments,
Novel evolutionary computation frameworks for dynamical environments,
Online self-adapting systems,
Online automatic configuration systems,
Games using optimization techniques,
Decision making in dynamic and uncertain environments,
Real-world applications in engineering, business, computer science, biological sciences, scientific computation, etc. in dynamic and uncertain environments,
Dynamic/reactive scheduling and planning
Organizers
Dr. Ing. Madalina M. Drugan,
Computational Modeling group, Vrije Universiteit Brussel, Belgium
e-mail: Madalina.Drugan-AT-vub.ac.be
Prof. dr. Bernard Manderick,
Computational Modeling group, Vrije Universiteit Brussel, Belgium
e-mail: Bernard.Manderick-AT-vub.ac.be
