Indexed by: Journal Papers
Date of Publication: 2015-12-01
Journal: IEEE Transactions on Cybernetics
Included Journals: SCIE, EI, Scopus
Volume: 45
Issue: 12
Pages: 2853-2867
ISSN: 2168-2267
Keywords: Agent independence; coordination; multiagent learning (MAL); reinforcement learning (RL); sparse interactions
Abstract: Multiagent learning (MAL) is a promising technique for agents to learn efficient coordinated behaviors in multiagent systems (MASs). In MAL, multiple concurrent distributed learning processes can make the learning environment nonstationary for each individual learner. Developing an efficient learning approach that coordinates agents' behaviors in this dynamic environment is difficult, especially when agents do not know the domain structure and have only local observability of the environment. In this paper, a coordinated MAL approach is proposed that enables agents to learn efficient coordinated behaviors by exploiting agent independence in loosely coupled MASs. The main feature of the proposed approach is to explicitly quantify and dynamically adapt agent independence during learning, so that agents can trade off between a single-agent learning process and a coordinated learning process for efficient decision making. The proposed approach is employed to solve two-robot navigation problems in domains of different scales. Experimental results show that agents using the proposed approach learn to act in concert or independently in different areas of the environment, which yields substantial computational savings and near-optimal performance.
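The following is a minimal illustrative sketch of the general idea described in the abstract, not the paper's algorithm: an agent maintains both an independent Q-learner over its local state and a coordinated Q-learner over the joint state, and uses a per-state independence estimate to choose between them. The class name, the influence-based independence signal, and the threshold are hypothetical placeholders introduced here for illustration.

```python
# Sketch only: trade-off between single-agent and coordinated Q-learning,
# gated by a hypothetical per-state "influence" (independence) estimate.
import random
from collections import defaultdict

class IndependenceAwareAgent:
    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1,
                 independence_threshold=0.05):
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.threshold = independence_threshold
        self.q_local = defaultdict(float)    # keyed by (local_state, action)
        self.q_joint = defaultdict(float)    # keyed by (joint_state, action)
        self.influence = defaultdict(float)  # per local_state: running estimate of how
                                             # much the joint view changes value estimates

    def _greedy(self, table, state):
        return max(self.actions, key=lambda a: table[(state, a)])

    def act(self, local_state, joint_state):
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        # Low measured influence: treat this state as "independent" and use
        # the cheaper single-agent learner; otherwise coordinate.
        if self.influence[local_state] < self.threshold:
            return self._greedy(self.q_local, local_state)
        return self._greedy(self.q_joint, joint_state)

    def update(self, local_state, joint_state, action, reward,
               next_local_state, next_joint_state):
        # Standard Q-learning updates for both learners.
        best_local = max(self.q_local[(next_local_state, a)] for a in self.actions)
        best_joint = max(self.q_joint[(next_joint_state, a)] for a in self.actions)
        td_local = reward + self.gamma * best_local - self.q_local[(local_state, action)]
        td_joint = reward + self.gamma * best_joint - self.q_joint[(joint_state, action)]
        self.q_local[(local_state, action)] += self.alpha * td_local
        self.q_joint[(joint_state, action)] += self.alpha * td_joint
        # Hypothetical independence signal: exponential average of the gap
        # between the coordinated and independent value estimates here.
        gap = abs(self.q_joint[(joint_state, action)] - self.q_local[(local_state, action)])
        self.influence[local_state] += 0.1 * (gap - self.influence[local_state])
```

In such a setup, states where the other agent rarely matters converge to low influence and are handled by the small local Q-table, while interaction-heavy areas (e.g., shared corridors in a two-robot navigation task) keep using the joint learner; how independence is actually quantified and adapted is specific to the paper.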