
Research on actor-critic reinforcement learning in RoboCup

Release Time: 2019-03-12

Indexed by: Conference Paper

Date of Publication: 2006-06-21

Included Journals: Scopus, CPCI-S, EI

Volume: 2

Page Number: 205-205

Key Words: reinforcement learning; MAS; actor-critic; RoboCup; function approximation

Abstract: The Actor-Critic method combines the fast convergence of value-based learning (the Critic) with the directed policy search of policy-gradient methods (the Actor), making it well suited to problems with large state spaces. In this paper, the Actor-Critic method with tile-coding linear function approximation is analysed and applied to a RoboCup simulation subtask named "Soccer Keepaway". Experiments on Soccer Keepaway show that the policy learned by the Actor-Critic method outperforms the policies learned by value-based Sarsa(lambda) and the benchmark policies.
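For readers unfamiliar with the combination described in the abstract, the sketch below illustrates one-step Actor-Critic learning with tile-coded linear function approximation. It is a minimal illustration under assumptions made here, not the paper's implementation: the tile coder, learning rates, softmax-linear actor, and the toy task at the bottom are all illustrative choices; the paper itself applies the method to the Soccer Keepaway subtask of the RoboCup soccer simulator.

```python
import numpy as np

# Hypothetical sketch of one-step Actor-Critic with tile-coded linear
# function approximation. Parameter names and the toy 1-D task are
# illustrative assumptions, not the setup used in the paper.


def tile_code(x, n_tilings=8, n_tiles=8, low=0.0, high=1.0):
    """Return the active (binary) feature indices for a scalar state x."""
    scaled = (x - low) / (high - low) * n_tiles
    indices = []
    for t in range(n_tilings):
        offset = t / n_tilings                       # each tiling is shifted slightly
        tile = int(np.clip(scaled + offset, 0, n_tiles))
        indices.append(t * (n_tiles + 1) + tile)     # unique index range per tiling
    return indices


class LinearActorCritic:
    def __init__(self, n_features, n_actions, alpha_v=0.1, alpha_p=0.01, gamma=0.99):
        self.n_actions = n_actions
        self.alpha_v, self.alpha_p, self.gamma = alpha_v, alpha_p, gamma
        self.v = np.zeros(n_features)                 # critic: linear state-value weights
        self.theta = np.zeros((n_actions, n_features))  # actor: linear action preferences

    def policy(self, active):
        """Softmax policy over linear preferences of the active features."""
        prefs = self.theta[:, active].sum(axis=1)
        prefs -= prefs.max()                          # numerical stability
        probs = np.exp(prefs)
        return probs / probs.sum()

    def step(self, active, action, reward, next_active, done):
        """One-step Actor-Critic update after observing a transition."""
        v_s = self.v[active].sum()
        v_next = 0.0 if done else self.v[next_active].sum()
        delta = reward + self.gamma * v_next - v_s    # TD error computed by the critic
        self.v[active] += self.alpha_v * delta        # critic update (value weights)
        probs = self.policy(active)
        for a in range(self.n_actions):               # actor update (policy gradient
            grad = (1.0 if a == action else 0.0) - probs[a]  # of log softmax policy)
            self.theta[a, active] += self.alpha_p * delta * grad
        return delta


if __name__ == "__main__":
    # Toy task (assumed for illustration): walk along [0, 1]; reward 1 for
    # reaching the right end, 0 otherwise.
    n_tilings, n_tiles = 8, 8
    n_features = n_tilings * (n_tiles + 1)
    agent = LinearActorCritic(n_features, n_actions=2)
    rng = np.random.default_rng(0)
    for episode in range(200):
        x = 0.5
        for _ in range(100):
            active = tile_code(x, n_tilings, n_tiles)
            action = rng.choice(2, p=agent.policy(active))
            x_next = float(np.clip(x + (0.05 if action == 1 else -0.05), 0.0, 1.0))
            done = x_next >= 1.0
            reward = 1.0 if done else 0.0
            agent.step(active, action, reward, tile_code(x_next, n_tilings, n_tiles), done)
            x = x_next
            if done:
                break
```

The tile coder turns a continuous state into a small set of active binary features, so both the critic's value estimate and the actor's action preferences reduce to sums of a few weights, which is what makes the linear approximation tractable for large state spaces such as Keepaway.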
