Paper Publications
  • Indexed by: Conference paper
  • Date of Publication: 2019-01-01
  • Included Journals: EI, CPCI-S
  • Page Number: 1922-1928
  • Key Words: reinforcement learning; memetic algorithm; evolution strategy; Q-learning
  • Abstract: Neuroevolution (i.e., training a neural network with evolutionary computation) has successfully tackled a range of challenging reinforcement learning (RL) tasks. However, existing neuroevolution methods suffer from high sample complexity, as the black-box evaluations (i.e., accumulated rewards of complete Markov Decision Processes (MDPs)) discard large numbers of temporal frames (i.e., time-step data instances in the MDP). These temporal frames retain the Markov property of the problem, which can also benefit the training of the neural network through temporal difference (TD) learning. In this paper, we propose a memetic reinforcement learning (MRL) framework that optimizes the RL agent by leveraging both black-box evaluations and temporal frames. To this end, an evolution strategy (ES) is combined with Q-learning, where the ES provides diversified frames to globally train the agent, and Q-learning locally exploits the Markov property within those frames to refine the agent. MRL thus constitutes a novel memetic framework that enables evaluation-free local search via Q-learning. Experiments on classical control problems verify the efficiency of the proposed MRL, which achieves significantly faster convergence than canonical ES.
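The memetic loop described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual method or benchmarks: it assumes a toy deterministic chain MDP and a tabular Q agent in place of a neural network, with made-up hyper-parameters. An ES step perturbs the agent and selects by black-box episode return; a Q-learning step then reuses the temporal frames from that same evaluation for evaluation-free TD refinement.

```python
import numpy as np

# Hypothetical toy chain MDP: states 0..N-1, state N-1 is terminal.
# This environment is an illustrative assumption, not the paper's benchmark.
N = 4

def step(s, a):
    """Deterministic dynamics: action 1 moves right, action 0 moves left."""
    s2 = min(s + 1, N - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if s2 == N - 1 else 0.0
    return s2, r, s2 == N - 1

def rollout(Q, max_steps=20):
    """Black-box evaluation: accumulated reward of one greedy episode,
    plus the temporal frames (s, a, r, s', done) it visited."""
    s, total, frames = 0, 0.0, []
    for _ in range(max_steps):
        a = int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        frames.append((s, a, r, s2, done))
        total += r
        s = s2
        if done:
            break
    return total, frames

def mrl(generations=40, pop=16, sigma=0.5, alpha=0.5, gamma=0.9, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.zeros((N, 2))  # tabular agent stands in for the neural network
    for _ in range(generations):
        # ES step: diversified Gaussian perturbations (plus the unperturbed
        # elite), selected by black-box episode return.
        cands = [Q] + [Q + sigma * rng.standard_normal(Q.shape)
                       for _ in range(pop)]
        scored = [(rollout(c), c) for c in cands]
        (ret, frames), Q = max(scored, key=lambda t: t[0][0])
        # Q-learning step: evaluation-free local search that exploits the
        # Markov property of the frames already collected above.
        for s, a, r, s2, done in frames:
            target = r + (0.0 if done else gamma * Q[s2].max())
            Q[s, a] += alpha * (target - Q[s, a])
    return Q

Q = mrl()
ret, _ = rollout(Q)
print(ret)
```

Note the design point the abstract emphasizes: the TD step consumes frames that the black-box evaluation already produced, so the local search costs no extra environment interaction.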
