Introducing decision entrustment mechanism into repeated bilateral agent interactions to achieve social optimality

Author(s) / Contributors:
[Jianye Hao, Ho-fung Leung]
Place, Publisher, Year:
2015
Contained in:
Autonomous Agents and Multi-Agent Systems, 29/4(2015-07-01), 658-682
Format:
Article (online)
ID: 605514763
LEADER caa a22 4500
001 605514763
003 CHVBK
005 20210128100706.0
007 cr unu---uuuuu
008 210128e20150701xx s 000 0 eng
024 7 0 |a 10.1007/s10458-014-9265-1  |2 doi 
035 |a (NATIONALLICENCE)springer-10.1007/s10458-014-9265-1 
245 0 0 |a Introducing decision entrustment mechanism into repeated bilateral agent interactions to achieve social optimality  |h [Electronic data]  |c [Jianye Hao, Ho-fung Leung] 
520 3 |a During multiagent interactions, robust strategies are needed to help agents coordinate their actions on efficient outcomes. A large body of previous work focuses on designing strategies towards the goal of Nash equilibrium under self-play, which can be extremely inefficient in many situations, such as the prisoner's dilemma game. To this end, we propose an alternative solution concept, socially optimal outcome sustained by Nash equilibrium (SOSNE), which refers to those outcomes that maximize the sum of all agents' payoffs among all possible outcomes corresponding to a Nash equilibrium payoff profile in infinitely repeated games. Adopting the solution concept of SOSNE guarantees that system-level performance is maximized provided that no agent sacrifices its individual profits. On the other hand, apart from performing well under self-play, a good strategy should also respond well against opponents adopting different strategies. To this end, we consider a particular class of rational opponents and aim to influence those opponents to coordinate on SOSNE outcomes. We propose a novel learning strategy, TaFSO, which combines the characteristics of both teacher and follower strategies to effectively steer the opponent's behavior towards SOSNE outcomes by exploiting their limitations. Extensive simulations show that our strategy TaFSO achieves better performance, in terms of average payoffs obtained, than previous work both under self-play and against the same class of rational opponents. 
540 |a The Author(s), 2014 
690 7 |a Multiagent learning  |2 nationallicence 
690 7 |a Repeated games  |2 nationallicence 
690 7 |a Socially optimal outcomes sustained by Nash equilibrium  |2 nationallicence 
700 1 |a Hao  |D Jianye  |u Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China  |4 aut 
700 1 |a Leung  |D Ho-fung  |u Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China  |4 aut 
773 0 |t Autonomous Agents and Multi-Agent Systems  |d Springer US; http://www.springer-ny.com  |g 29/4(2015-07-01), 658-682  |x 1387-2532  |q 29:4<658  |1 2015  |2 29  |o 10458 
856 4 0 |u https://doi.org/10.1007/s10458-014-9265-1  |q text/html  |z Online access via DOI 
898 |a BK010053  |b XK010053  |c XK010000 
900 7 |a Metadata rights reserved  |b Springer special CC-BY-NC licence  |2 nationallicence 
908 |D 1  |a research-article  |2 jats 
949 |B NATIONALLICENCE  |F NATIONALLICENCE  |b NL-springer 
950 |B NATIONALLICENCE  |P 856  |E 40  |u https://doi.org/10.1007/s10458-014-9265-1  |q text/html  |z Online access via DOI 
950 |B NATIONALLICENCE  |P 700  |E 1-  |a Hao  |D Jianye  |u Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China  |4 aut 
950 |B NATIONALLICENCE  |P 700  |E 1-  |a Leung  |D Ho-fung  |u Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China  |4 aut 
950 |B NATIONALLICENCE  |P 773  |E 0-  |t Autonomous Agents and Multi-Agent Systems  |d Springer US; http://www.springer-ny.com  |g 29/4(2015-07-01), 658-682  |x 1387-2532  |q 29:4<658  |1 2015  |2 29  |o 10458