Perfect Bayesian equilibrium

In game theory, a perfect Bayesian equilibrium (PBE) is a solution concept, based on Bayesian probability, for turn-based games with incomplete information. More specifically, it is an equilibrium concept that uses Bayesian updating to describe player behavior in dynamic games with incomplete information. Perfect Bayesian equilibria are used to solve for the outcome of games in which players take turns but are unsure of the "type" of their opponent, which occurs when players do not know their opponent's preferences over individual moves. A classic example of a dynamic game with types is a war game in which a player is unsure whether their opponent is a risk-taking "hawk" type or a pacifistic "dove" type. Perfect Bayesian equilibrium is a refinement of Bayesian Nash equilibrium (BNE), the corresponding solution concept with Bayesian probability for static (simultaneous-move) games.

Perfect Bayesian Equilibrium
A solution concept in game theory

  • Subset of: Bayesian Nash equilibrium
  • Proposed by: Cho and Kreps[citation needed]
  • Used for: dynamic Bayesian games
  • Example: signaling game

Any perfect Bayesian equilibrium has two components, strategies and beliefs:

  • The strategy of a player in a given information set specifies his choice of action in that information set, which may depend on the history (on actions taken previously in the game). This is similar to a sequential game.
  • The belief of a player in a given information set determines what node in that information set he believes the game has reached. The belief may be a probability distribution over the nodes in the information set, and is typically a probability distribution over the possible types of the other players. Formally, a belief system is an assignment of probabilities to every node in the game such that the sum of probabilities in any information set is 1.

The strategies and beliefs also must satisfy the following conditions:

  • Sequential rationality: each strategy should be optimal in expectation, given the beliefs.
  • Consistency: each belief should be updated according to the equilibrium strategies, the observed actions, and Bayes' rule on every path reached in equilibrium with positive probability. On paths of zero probability, known as off-equilibrium paths, the beliefs must be specified but can be arbitrary.
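
The consistency condition can be illustrated with a short computation. The sketch below is a minimal illustration (the function name and the example numbers are assumptions for exposition, not part of the formal definition): it applies Bayes' rule at an information set reached with positive probability, and returns no restriction when the set is off the equilibrium path.

```python
# A minimal sketch of the "consistency" condition: on-path beliefs must come
# from the type priors and the equilibrium strategies via Bayes' rule.
# Names and numbers here are illustrative assumptions.

def bayes_posterior(priors, strategy, action):
    """Posterior over types at the information set reached by `action`.

    priors:    {type: prior probability}
    strategy:  {type: {action: probability}}
    Returns None when the action has zero probability on the equilibrium
    path, in which case Bayes' rule places no restriction on beliefs.
    """
    joint = {t: priors[t] * strategy[t].get(action, 0.0) for t in priors}
    total = sum(joint.values())
    if total == 0.0:          # off-equilibrium path: belief is unrestricted
        return None
    return {t: joint[t] / total for t in joint}

# Example: a sender who is a friend w.p. 0.3, and both types play "not give".
priors = {"friend": 0.3, "enemy": 0.7}
strategy = {"friend": {"not give": 1.0}, "enemy": {"not give": 1.0}}
print(bayes_posterior(priors, strategy, "not give"))  # {'friend': 0.3, 'enemy': 0.7}
print(bayes_posterior(priors, strategy, "give"))      # None -> arbitrary belief allowed
```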

A perfect Bayesian equilibrium is always a Nash equilibrium.

Examples of perfect Bayesian equilibria

Gift game 1

Consider the following game:

  • The sender has two possible types: either a "friend" (with probability p) or an "enemy" (with probability 1 - p). Each type has two strategies: either give a gift, or not give.
  • The receiver has only one type, and two strategies: either accept the gift, or reject it.
  • The sender's utility is 1 if his gift is accepted, -1 if his gift is rejected, and 0 if he does not give any gift.
  • The receiver's utility depends on who gives the gift:
    • If the sender is a friend, then the receiver's utility is 1 (if he accepts) or 0 (if he rejects).
    • If the sender is an enemy, then the receiver's utility is -1 (if he accepts) or 0 (if he rejects).

For any value of p, Equilibrium 1 exists, a pooling equilibrium in which both types of sender choose the same action:

Equilibrium 1. Sender: Not give, whether they are the friend type or the enemy type. Receiver: Do not accept, with the beliefs that Prob(Friend|Not Give) = p and Prob(Friend|Give) = x, choosing a value x ≤ 1/2.

The sender prefers the payoff of 0 from not giving to the payoff of -1 from giving a gift that is rejected. Thus, Give has zero probability in equilibrium and Bayes's Rule does not restrict the belief Prob(Friend|Give) at all. That belief must be pessimistic enough that the receiver prefers the payoff of 0 from rejecting a gift to the expected payoff of 2x - 1 from accepting, so the requirement that the receiver's strategy maximize his expected payoff given his beliefs necessitates that Prob(Friend|Give) ≤ 1/2. On the other hand, Prob(Friend|Not give) = p is required by Bayes's Rule, since both types take that action and it is uninformative about the sender's type.
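
To see the receiver's calculation concretely: with belief x = Prob(Friend|Give), accepting yields an expected payoff of 2x - 1, versus 0 from rejecting. A minimal sketch (the payoff numbers are those of the game above; the helper function is illustrative):

```python
# With belief x = Prob(Friend | Give), the receiver's expected payoff from
# accepting is x*1 + (1-x)*(-1) = 2x - 1, versus 0 from rejecting.

def accept_payoff(x):
    return x * 1 + (1 - x) * (-1)   # = 2x - 1

for x in [0.0, 0.25, 0.5, 0.75, 1.0]:
    best = "Reject" if accept_payoff(x) < 0 else "Accept (weakly)"
    print(f"x = {x:.2f}: accept -> {accept_payoff(x):+.2f}, best response: {best}")
# Rejecting is optimal exactly when x <= 1/2, which is why Equilibrium 1
# needs the off-path belief Prob(Friend|Give) <= 1/2.
```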

If p ≥ 1/2, a second pooling equilibrium exists as well as Equilibrium 1, based on different beliefs:

Equilibrium 2. Sender: Give, whether they are the friend type or the enemy type. Receiver: Accept, with the beliefs that Prob(Friend|Give) = p and Prob(Friend|Not give) = x, choosing any value for x in [0, 1].

The sender prefers the payoff of 1 from giving to the payoff of 0 from not giving, expecting that his gift will be accepted. In equilibrium, Bayes's Rule requires the receiver to have the belief Prob(Friend|Give) = p, since both types take that action and it is uninformative about the sender's type in this equilibrium. The out-of-equilibrium belief does not matter, since the sender would not want to deviate to Not give no matter what response the receiver would have.
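
The two conditions behind Equilibrium 2 can be checked mechanically. A small sketch using the payoffs above (the function is illustrative, not a general solver):

```python
# On the equilibrium path both types Give, so Bayes' rule forces
# Prob(Friend | Give) = p, and the receiver accepts iff 2p - 1 >= 0.
# Given acceptance, each sender type compares Give (payoff 1) with the
# deviation Not give (payoff 0), so no sender type wants to deviate.

def equilibrium2_holds(p):
    receiver_accepts = (2 * p - 1) >= 0   # sequential rationality for receiver
    sender_stays = 1 > 0                  # Give (1) beats Not give (0) when accepted
    return receiver_accepts and sender_stays

for p in [0.3, 0.5, 0.8]:
    print(f"p = {p}: Equilibrium 2 exists -> {equilibrium2_holds(p)}")
```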

Equilibrium 1 is perverse if p > 1/2. The game could have p = 0.9, so the sender is very likely a friend, but the receiver still would refuse any gift because he thinks enemies are much more likely than friends to give gifts. This shows how pessimistic beliefs can result in an equilibrium bad for both players, one that is not Pareto efficient. These beliefs seem unrealistic, though, and game theorists are often willing to reject some perfect Bayesian equilibria as implausible.

Equilibria 1 and 2 are the only equilibria that can exist, but we can also check the two potential separating equilibria, in which the two types of sender choose different actions, and see why they do not exist as perfect Bayesian equilibria:

  1. Suppose the sender's strategy is: Give if a friend, Do not give if an enemy. The receiver's beliefs are updated accordingly: if he receives a gift, he believes the sender is a friend; otherwise, he believes the sender is an enemy. Thus, the receiver will respond with Accept. If the receiver chooses Accept, though, the enemy sender will deviate to Give, to increase his payoff from 0 to 1, so this cannot be an equilibrium.
  2. Suppose the sender's strategy is: Do not give if a friend, Give if an enemy. The receiver's beliefs are updated accordingly: if he receives a gift, he believes the sender is an enemy; otherwise, he believes the sender is a friend. The receiver's best-response strategy is Reject. If the receiver chooses Reject, though, the enemy sender will deviate to Do not give, to increase his payoff from -1 to 0, so this cannot be an equilibrium.

We conclude that in this game, there is no separating equilibrium.
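
This conclusion can also be verified by enumeration: for each candidate separating profile, compute the receiver's best response to the induced belief and look for a profitable sender deviation. A sketch with the payoffs defined above (function names are illustrative):

```python
# Enumerate the two candidate separating profiles of Gift game 1 and show
# that each admits a profitable deviation (payoffs as defined above).

def receiver_best_response(belief_friend_given_give):
    return "accept" if 2 * belief_friend_given_give - 1 >= 0 else "reject"

def sender_payoff(action, response):
    if action == "not give":
        return 0
    return 1 if response == "accept" else -1

# Each profile maps sender type -> action; the belief Prob(Friend | Give)
# induced by full separation is 1.0 or 0.0 respectively.
for profile, belief in [({"friend": "give", "enemy": "not give"}, 1.0),
                        ({"friend": "not give", "enemy": "give"}, 0.0)]:
    response = receiver_best_response(belief)
    for sender_type, action in profile.items():
        stay = sender_payoff(action, response)
        deviate = sender_payoff("give" if action == "not give" else "not give", response)
        if deviate > stay:
            print(f"{profile}: {sender_type} deviates ({stay} -> {deviate}); not an equilibrium")
```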

Gift game 2

In the following example,[1] the set of PBEs is strictly smaller than the set of subgame perfect equilibria (SPEs) and Bayesian Nash equilibria (BNEs). It is a variant of the above gift game, with the following change to the receiver's utility:

  • If the sender is a friend, then the receiver's utility is 1 (if they accept) or 0 (if they reject).
  • If the sender is an enemy, then the receiver's utility is 0 (if they accept) or -1 (if they reject).

Note that in this variant, accepting is a weakly dominant strategy for the receiver.

Similarly to example 1, there is no separating equilibrium. Consider the following potential pooling equilibria:

  1. The sender's strategy is: always give. The receiver's beliefs are not updated: they still believe in the a-priori probability that the sender is a friend with probability p and an enemy with probability 1 - p. Their payoff from accepting is always higher than from rejecting, so they accept (regardless of the value of p). This is a PBE: it is a best response for both sender and receiver.
  2. The sender's strategy is: never give. Suppose the receiver's belief upon receiving a gift is that the sender is a friend with probability x, where x is any number in [0, 1]. Regardless of x, the receiver's optimal strategy is: accept. This is NOT a PBE, since the sender can improve their payoff from 0 to 1 by giving a gift.
  3. The sender's strategy is: never give, and the receiver's strategy is: reject. This is NOT a PBE, since for any belief of the receiver, rejecting is not a best response.

Note that option 3 is a Nash equilibrium. If we ignore beliefs, then rejecting can be considered a best response for the receiver, since it does not affect their payoff (there is no gift anyway). Moreover, option 3 is even an SPE, since the only subgame here is the entire game. Such implausible equilibria can arise also in games with complete information, where they may be eliminated by applying subgame perfect Nash equilibrium. However, Bayesian games often contain non-singleton information sets, and since subgames must contain complete information sets, sometimes there is only one subgame (the entire game), so every Nash equilibrium is trivially subgame perfect. Even if a game does have more than one subgame, the inability of subgame perfection to cut through information sets can result in implausible equilibria not being eliminated.

To summarize: in this variant of the gift game, there are two SPEs: either the sender always gives and the receiver always accepts, or the sender never gives and the receiver always rejects. Of these, only the first is a PBE; the second is not a PBE since it cannot be supported by any belief system.
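
The failure of option 3 can also be checked directly: with belief x = Prob(Friend|Give), accepting yields x and rejecting yields x - 1, so accepting is strictly better for every belief. A one-loop sketch using the payoffs of this variant:

```python
# In this variant the receiver gets 1 (friend) or 0 (enemy) from accepting,
# and 0 (friend) or -1 (enemy) from rejecting. With belief x, the expected
# payoffs are x from accepting and x - 1 from rejecting.

for x in [i / 10 for i in range(11)]:
    accept = x * 1 + (1 - x) * 0       # = x
    reject = x * 0 + (1 - x) * (-1)    # = x - 1
    assert accept > reject             # rejecting is never a best response
print("Accepting beats rejecting for every belief x; no belief supports Reject.")
```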

More examples

For further examples, see signaling game#Examples; see also [2] for more examples. For an application of this concept to poker, see Loriente and Diez (2023).[3]

PBE in multi-stage games

A multi-stage game is a sequence of simultaneous games played one after the other. These games may be identical (as in repeated games) or different.

Repeated public-good game

              Build             Don't
  Build   1 - C1, 1 - C2    1 - C1, 1
  Don't   1, 1 - C2         0, 0

Public good game (rows: player 1; columns: player 2)

The following game[4]: section 6.2  is a simple representation of the free-rider problem. There are two players, each of whom can either build a public good or not build. Each player gains 1 if the public good is built and 0 if not; in addition, if player i builds the public good, they have to pay a cost of Ci. The costs are private information: each player knows their own cost but not the other's cost. It is only known that each cost is drawn independently at random from some probability distribution. This makes this game a Bayesian game.

In the one-stage game, each player builds if-and-only-if their cost is smaller than their expected gain from building. The expected gain from building is exactly 1 times the probability that the other player does NOT build. In equilibrium, for every player i, there is a threshold cost Ci*, such that player i contributes if-and-only-if their cost is less than Ci*. This threshold cost can be calculated based on the probability distribution of the players' costs. For example, if the costs are distributed uniformly on [0, 2], then there is a symmetric equilibrium in which the threshold cost of both players is 2/3. This means that a player whose cost is between 2/3 and 1 will not contribute, even though their cost is below the benefit, because of the possibility that the other player will contribute.
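
Under the uniform-[0, 2] example above, the symmetric threshold is the fixed point of c* = 1 - F(c*), where F is the cost distribution's CDF. A minimal numerical sketch (the bisection solver and its names are illustrative choices, not from the source):

```python
# A player builds iff cost <= c*, where c* equals the probability that the
# OTHER player does not build: c* = 1 - F(c*). For Uniform[0, 2] this is
# c* = 1 - c*/2, whose solution is 2/3.

def solve_threshold(F, lo=0.0, hi=2.0, iters=60):
    """Bisection on g(c) = c - (1 - F(c)), which is increasing in c."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if mid - (1 - F(mid)) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

F_uniform = lambda c: c / 2.0           # CDF of Uniform[0, 2]
print(solve_threshold(F_uniform))       # ~ 0.6667 = 2/3
```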

Now, suppose that this game is repeated two times.[4]: section 8.2.3  The two plays are independent: each day, the players simultaneously decide whether to build a public good that day, get a payoff of 1 if the good is built that day, and pay their cost if they built. The only connection between the games is that, by playing on the first day, the players may reveal some information about their costs, and this information might affect play on the second day.

We are looking for a symmetric PBE. Denote by c* the threshold cost of both players in day 1 (so in day 1, each player builds if-and-only-if their cost is at most c*). To calculate c*, we work backwards and analyze the players' actions in day 2. Their actions depend on the history (the two actions taken in day 1), and there are three options:

  1. In day 1, no player built. So now both players know that their opponent's cost is above c*. They update their beliefs accordingly and conclude that there is a smaller chance that their opponent will build in day 2. Therefore, they increase their threshold cost, and the threshold cost in day 2 is higher than c*.
  2. In day 1, both players built. So now both players know that their opponent's cost is below c*. They update their beliefs accordingly and conclude that there is a larger chance that their opponent will build in day 2. Therefore, they decrease their threshold cost, and the threshold cost in day 2 is lower than c*.
  3. In day 1, exactly one player built; suppose it is player 1. So now it is known that the cost of player 1 is below c* and the cost of player 2 is above c*. There is an equilibrium in which the actions in day 2 are identical to the actions in day 1: player 1 builds and player 2 does not build.
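
For illustration, under the uniform-[0, 2] cost assumption of the one-stage example, the day-2 thresholds in cases 1 and 2 have simple closed forms. The expressions below are derived here from the threshold condition, not quoted from the source:

```python
# In each case the day-2 threshold c2 solves
#   c2 = Prob(other player does NOT build in day 2),
# with the probability taken under the updated (truncated) uniform belief.

def day2_thresholds(c1):
    # Neither built: costs now believed uniform on [c1, 2], so
    # c2 = 1 - (c2 - c1)/(2 - c1)  =>  c2 = 2 / (3 - c1)  (> c1).
    c_none = 2.0 / (3.0 - c1)
    # Both built: costs now believed uniform on [0, c1], so
    # c2 = 1 - c2/c1  =>  c2 = c1 / (1 + c1)  (< c1).
    c_both = c1 / (1.0 + c1)
    return c_none, c_both

c_none, c_both = day2_thresholds(0.6)
print(f"day-1 threshold 0.6 -> after no builds: {c_none:.3f}, after two builds: {c_both:.3f}")
# Thresholds rise to 0.833 after no builds and fall to 0.375 after two builds.
```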

It is possible to calculate the expected payoff of the "threshold player" (a player with cost exactly c*) in each of these situations. Since the threshold player should be indifferent between contributing and not contributing, it is possible to calculate the day-1 threshold cost c*. It turns out that this threshold is lower than the threshold in the one-stage game. This means that, in a two-stage game, the players are less willing to build than in the one-stage game. Intuitively, the reason is that, when a player does not contribute in the first day, they make the other player believe their cost is high, and this makes the other player more willing to contribute in the second day.
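
Continuing the uniform-[0, 2] illustration, the day-1 threshold can be computed numerically from the indifference condition. The continuation payoffs below follow the three day-2 cases above; they are a reconstruction under that distributional assumption, not the source's general derivation:

```python
def indifference_gap(c):
    """Payoff(build in day 1) minus Payoff(don't build), for a type with cost
    exactly c, when both players use day-1 threshold c and then play the
    day-2 continuation described above (uniform-[0, 2] costs)."""
    q = c / 2.0                      # Prob(opponent's cost <= c, i.e. builds day 1)
    c_both = c / (1.0 + c)           # day-2 threshold after two builds (case 2)
    # Build: 1 - c today. If the opponent also built, tomorrow the opponent
    # builds w.p. Prob(cost <= c_both | cost <= c) = 1/(1+c) while the
    # threshold type (cost c > c_both) does not; otherwise case 3 repeats
    # and the type builds again for 1 - c.
    build = (1 - c) + q * (1.0 / (1.0 + c)) + (1 - q) * (1 - c)
    # Don't build: q today. If the opponent built, case 3 gives 1 tomorrow;
    # if nobody built, the type's cost c is below the raised day-2 threshold
    # 2/(3-c), so the type builds tomorrow for 1 - c.
    dont = q + q * 1.0 + (1 - q) * (1 - c)
    return build - dont

lo, hi = 0.0, 1.0                    # gap is positive at c=0 and negative at c=1
for _ in range(60):                  # bisection on the decreasing gap
    mid = (lo + hi) / 2.0
    lo, hi = (mid, hi) if indifference_gap(mid) > 0 else (lo, mid)
print(round((lo + hi) / 2.0, 4))     # ~ 0.5931
```

In this sketch the day-1 threshold comes out near 0.593, below the one-stage threshold of 2/3, matching the claim above that repetition makes players less willing to build.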

Jump-bidding

In an open-outcry English auction, the bidders can raise the current price in small steps (e.g., $1 at a time). However, often there is jump bidding: some bidders raise the current price much more than the minimal increment. One explanation for this is that it serves as a signal to the other bidders: there is a PBE in which each bidder jumps if-and-only-if their value is above a certain threshold. See Jump bidding#signaling.

References

  1. James Peck. "Perfect Bayesian Equilibrium" (PDF). Ohio State University. Retrieved 6 December 2021.
  2. Zack Grossman. "Perfect Bayesian Equilibrium" (PDF). University of California. Retrieved 2 September 2016.
  3. Loriente, Martín Iñaki; Diez, Juan Cruz (2023). "Perfect Bayesian Equilibrium in Kuhn Poker". Universidad de San Andres.
  4. Fudenberg, Drew; Tirole, Jean (1991). Game Theory. Cambridge, Massachusetts: MIT Press. ISBN 9780262061414.