Reward-Guided Synthesis of Intelligent Agents with Control Structures

Guofeng Cui, Yuning Wang, Wenjie Qiu, He Zhu

Research output: Contribution to journal › Article › peer-review

Abstract

Deep reinforcement learning (RL) has led to encouraging successes in numerous challenging robotics applications. However, the lack of inductive biases to support logic deduction and generalization in the representation of a deep RL model makes it less effective at exploring complex long-horizon robot-control tasks with sparse reward signals. Existing program synthesis algorithms for RL problems inherit the same limitation, as they either adapt conventional RL algorithms to guide program search or synthesize robot-control programs to imitate an RL model. We propose ReGuS, a reward-guided synthesis paradigm, to unlock the potential of program synthesis to overcome these exploration challenges. We develop a novel hierarchical synthesis algorithm with a decomposed search space for loops, on-demand synthesis of conditional statements, and curriculum synthesis for procedure calls, to effectively compress the exploration space for long-horizon, multi-stage, and procedural robot-control tasks that are difficult to address by conventional RL techniques. Experiment results demonstrate that ReGuS significantly outperforms state-of-the-art RL algorithms and standard program synthesis baselines on challenging robot tasks including autonomous driving, locomotion control, and object manipulation.

Original language: American English
Article number: 217
Journal: Proceedings of the ACM on Programming Languages
Volume: 8
DOIs
State: Published - Jun 20 2024
Externally published: Yes

ASJC Scopus subject areas

  • Software
  • Safety, Risk, Reliability and Quality

Keywords

  • Program Synthesis
  • Sequential Decision Making
