Correlation Through Bounded Recall Strategies

Abstract:

Two agents independently choose mixed m-recall strategies that take actions in finite action spaces A_1 and A_2. The strategies induce a random play, a_1, a_2, ..., where a_t takes values in A_1 x A_2. An M-recall observer observes the play. The goal of the agents is to make the observer believe that the play is similar to a sequence of i.i.d. random actions whose distribution is Q in Delta(A_1 x A_2). For nearly every t, the following event should occur with probability close to one: "the distribution of a_{t+M} given a_t, ..., a_{t+M-1} is close to Q." We provide a necessary and sufficient condition on m, M, and Q under which this goal can be achieved (for large m). This work is a step toward establishing a folk theorem for repeated games with bounded recall. It addresses the difficulty of computing the individually rational levels (IRL) in the bounded-recall setting. Our result implies, for example, that in some games the IRL in the bounded-recall game lies below, and is bounded away from, the IRL of the stage game, even when all the players have the same recall capacity.
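
To make the approximation requirement concrete, one possible formalization is sketched below. The choice of the total-variation norm, the single tolerance epsilon, and the "all but an epsilon-fraction of times t" quantifier are illustrative assumptions, not the paper's exact statement.

% Sketch of the observer's indistinguishability requirement (assumptions noted above).
% For every epsilon > 0 and all sufficiently large m, the agents can choose mixed
% m-recall strategies such that, for all but an epsilon-fraction of times t,
\[
  \Pr\Bigl[\, \bigl\| \Pr\bigl(a_{t+M} \in \cdot \;\big|\; a_t,\dots,a_{t+M-1}\bigr) - Q \bigr\|_{\mathrm{TV}} < \varepsilon \Bigr] \;\ge\; 1 - \varepsilon .
\]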
