Relative Entropy Inverse Reinforcement Learning

Abdeslam Boularias, Jens Kober, Jan Peters

Research output: Contribution to journal › Conference article › peer-review

96 Scopus citations

Abstract

We consider the problem of imitation learning where the examples, demonstrated by an expert, cover only a small part of a large state space. Inverse Reinforcement Learning (IRL) provides an efficient tool for generalizing the demonstration, based on the assumption that the expert is acting optimally in a Markov Decision Process (MDP). Most of the past work on IRL requires that a (near-)optimal policy can be computed for different reward functions. However, this requirement can hardly be satisfied in systems with a large, or continuous, state space. In this paper, we propose a model-free IRL algorithm, where the relative entropy between the empirical distribution of the state-action trajectories under a baseline policy and their distribution under the learned policy is minimized by stochastic gradient descent. We compare this new approach to well-known IRL algorithms using learned MDP models. Empirical results on simulated car racing, gridworld, and ball-in-a-cup problems show that our approach is able to learn good policies from a small number of demonstrations.
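The sketch below illustrates the kind of model-free update the abstract describes: with a linear reward r(s, a) = θᵀφ(s, a), trajectories sampled under the baseline policy are reweighted by importance weights exp(θᵀ f(τ)), and θ is adjusted by the gap between expert feature counts and the reweighted baseline feature counts. This is a minimal illustration under those assumptions, not the authors' reference implementation; the function and variable names are hypothetical, and the paper's regularization term is omitted.

```python
import numpy as np

def trajectory_features(traj, feature_fn):
    """Sum of per-step features phi(s, a) along one trajectory."""
    return sum(feature_fn(s, a) for s, a in traj)

def relative_entropy_irl(expert_trajs, baseline_trajs, feature_fn,
                         n_iters=500, lr=0.05):
    """Sketch of a relative-entropy IRL gradient loop (assumes linear reward)."""
    f_expert = np.mean([trajectory_features(t, feature_fn)
                        for t in expert_trajs], axis=0)
    f_base = np.array([trajectory_features(t, feature_fn)
                       for t in baseline_trajs])
    theta = np.zeros_like(f_expert)
    for _ in range(n_iters):
        # Importance weights of baseline trajectories under the
        # exponential-family trajectory distribution induced by theta.
        logw = f_base @ theta
        w = np.exp(logw - logw.max())      # stabilize the exponent
        w /= w.sum()
        # Gradient of the dual objective: expert feature expectations
        # minus the reweighted baseline feature expectations.
        grad = f_expert - w @ f_base
        theta += lr * grad                 # gradient step on the dual
    return theta
```

In this simplified view, only trajectories from the baseline policy are needed (no MDP model and no repeated planning), which is the property the abstract emphasizes for large or continuous state spaces.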

Original language: American English
Pages (from-to): 182-189
Number of pages: 8
Journal: Journal of Machine Learning Research
Volume: 15
State: Published - 2011
Externally published: Yes
Event: 14th International Conference on Artificial Intelligence and Statistics, AISTATS 2011 - Fort Lauderdale, FL, United States
Duration: Apr 11, 2011 - Apr 13, 2011

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering
  • Statistics and Probability
  • Artificial Intelligence
