Wednesday, August 2, 2017

Game Theory

Basic Concepts of Game Theory
Motivating Example
Location Game: setting shop on a beach

On a linear beach there are two vendors who charge the same price. Where should the vendors locate their shops?
- in the center, near each other

what if there are three vendors?
- if all 3 vendors are at the same spot, each gets 1/3 of the customers; one vendor can move and get more profit
- if the 3 vendors are at different positions, one vendor can move toward the center and get more profit
- so there is no equilibrium

Information
Mutual Knowledge vs Common Knowledge
Mutual Knowledge: all players know A
Common Knowledge: everyone knows A, everyone knows that everyone knows A, and so on ad infinitum

Perfect Information vs Imperfect Information
Perfect Information: the player knows the full history of the game so far
Imperfect Information: the player does not know parts of the history of the game, e.g., a sealed-bid auction

Complete Information vs Incomplete Information
Complete Information: the player knows the types of the other players and the rules of the game
Incomplete Information: some player does not know another player's type, e.g., the incumbent does not know the true type of entrants

Perfect but Incomplete Information Game
- price negotiation over a used car at a dealer shop (every offer is observed, but each side does not know the other's true valuation)

Action vs Strategy
An action is a single move; a strategy is a complete plan specifying an action at every decision node a player may reach.
Example: Bill has 5 actions and 6 strategies.

Normal Form Game (Strategic Form Game)
- simultaneous game
- static setting
- represented by game matrix

Prisoner's Dilemma Game

      |      C           D
---------------------------
C   |  -8, -8      -2, -15
D   | -15, -2      -3, -3

conditions
- each player has a dominant strategy
- the dominant strategy equilibrium (-8, -8) is worse for both players than some other outcome (-3, -3), i.e., the dominant strategy equilibrium is Pareto inefficient relative to at least one other outcome
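As a quick illustration (mine, not from the notes), a few lines of Python confirm that C is the strictly dominant strategy for the row player in the matrix above; the game is symmetric, so the same holds for the column player:

pd = {("C", "C"): (-8, -8), ("C", "D"): (-2, -15),
      ("D", "C"): (-15, -2), ("D", "D"): (-3, -3)}

def strictly_dominant_row(payoffs, strategies=("C", "D")):
    """Return the row strategy that is strictly better than every other row
    strategy against every column strategy, or None if there is none."""
    for s in strategies:
        if all(payoffs[(s, c)][0] > payoffs[(t, c)][0]
               for t in strategies if t != s
               for c in strategies):
            return s
    return None

print(strictly_dominant_row(pd))  # 'C' -> dominant strategy equilibrium (C, C) with payoffs (-8, -8)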

how to escape from prisoner's dilemma
- price leadership
- price signaling
- focal points
- info agglomeration: online price agglomeration could either intensify or mitigate price wars; however, if you lower your price, competitors see it immediately and copy it, so lowering the price is not worth it
- commitment

strictly dominant strategy si*:
u(si*, s-i) > u(si, s-i)   for all si ≠ si* and all s-i
weakly dominant strategy si*:
u(si*, s-i) >= u(si, s-i)  for all si and all s-i,   and
u(si*, s-i) > u(si, s-i)   for some si, s-i

Iterated Dominance Equilibrium
- a dominated strategy is a strategy that will never be played
- eliminate strictly dominated strategies (the order of elimination does not matter)
- eliminating weakly dominated strategies is also possible, but iterated weak dominance is not robust (the result can depend on the order of elimination)
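A minimal Python sketch of iterated elimination of strictly dominated strategies; the 2x3 game at the bottom is a made-up example, not from the notes:

def eliminate_strictly_dominated(rows, cols, u1, u2):
    """Iteratively delete strictly dominated strategies.
    u1[(r, c)] and u2[(r, c)] are the row and column player's payoffs."""
    changed = True
    while changed:
        changed = False
        for r in list(rows):
            # r is strictly dominated if some other row is strictly better against every remaining column
            if any(all(u1[(r2, c)] > u1[(r, c)] for c in cols)
                   for r2 in rows if r2 != r):
                rows.remove(r)
                changed = True
        for c in list(cols):
            if any(all(u2[(r, c2)] > u2[(r, c)] for r in rows)
                   for c2 in cols if c2 != c):
                cols.remove(c)
                changed = True
    return rows, cols

# made-up 2x3 example: elimination leaves only (T, M)
u1 = {("T", "L"): 1, ("T", "M"): 1, ("T", "R"): 0,
      ("B", "L"): 0, ("B", "M"): 0, ("B", "R"): 2}
u2 = {("T", "L"): 0, ("T", "M"): 2, ("T", "R"): 1,
      ("B", "L"): 3, ("B", "M"): 1, ("B", "R"): 0}
print(eliminate_strictly_dominated(["T", "B"], ["L", "M", "R"], u1, u2))  # (['T'], ['M'])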

Maximin Strategy Equilibrium
- each player chooses the strategy whose worst-case (minimum) payoff is largest

      |       L          R         row min
-----------------------------------------------
T    |    10, 4      8, 15            8
B    |  -100, 5     20, 10         -100
col min       4         10

- the row player's maximin strategy is T (max of 8 and -100); the column player's is R (max of 4 and 10), so the maximin outcome is (T, R)
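The maximin choices in the table above can be computed mechanically; a small illustrative Python sketch (payoffs copied from the table):

u = {("T", "L"): (10, 4), ("T", "R"): (8, 15),
     ("B", "L"): (-100, 5), ("B", "R"): (20, 10)}
rows, cols = ("T", "B"), ("L", "R")

row_min = {r: min(u[(r, c)][0] for c in cols) for r in rows}  # {'T': 8, 'B': -100}
col_min = {c: min(u[(r, c)][1] for r in rows) for c in cols}  # {'L': 4, 'R': 10}

print(max(row_min, key=row_min.get))  # 'T': row player's maximin strategy
print(max(col_min, key=col_min.get))  # 'R': column player's maximin strategy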

Nash Equilibrium
(x*, y*) is a NE if
- x* is player 1's best response given player 2's choice of y*
- y* is player 2's best response given player 1's choice of x*

coordination game
      |    S         R
---------------------------
S   | 5, 5     0, 1
R   | 1, 0    1, 1

anti-coordination game
      |    S            R
---------------------------
S   |  -5, -5     10, 20
R   |  20, 10     -3, -3
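A small illustrative Python sketch (not from the notes) that finds pure-strategy NEs by checking mutual best responses, applied to the two games above:

def pure_nash(payoffs):
    """payoffs maps (row_strategy, col_strategy) -> (row_payoff, col_payoff)."""
    rows = sorted({r for r, _ in payoffs})
    cols = sorted({c for _, c in payoffs})
    return [(r, c) for r in rows for c in cols
            if all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in rows)    # r is a best response to c
            and all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in cols)]  # c is a best response to r

coordination = {("S", "S"): (5, 5), ("S", "R"): (0, 1),
                ("R", "S"): (1, 0), ("R", "R"): (1, 1)}
anti_coordination = {("S", "S"): (-5, -5), ("S", "R"): (10, 20),
                     ("R", "S"): (20, 10), ("R", "R"): (-3, -3)}
print(pure_nash(coordination))       # [('R', 'R'), ('S', 'S')]
print(pure_nash(anti_coordination))  # [('R', 'S'), ('S', 'R')]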

if multiple NEs:
1. use focal points
- cultural convention
- social convention
- common perception

2. use payoff dominance and risk dominance

      |    S       R
---------------------------
S   |  v, v     0, 1
R   |  1, 0     1, 1

payoff dominance: if v >= 1 for both players, with at least one strict inequality, then (S, S) payoff-dominates (R, R)
risk dominance: if (v - 1) > (1 - 0), then (S, S) risk-dominates (R, R)
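A tiny Python check of both criteria for the game above; the risk-dominance test is written as the Harsanyi-Selten product of deviation losses, which for this symmetric game reduces to the (v - 1) > (1 - 0) condition above:

# game above: (S,S) = (v, v), (S,R) = (0, 1), (R,S) = (1, 0), (R,R) = (1, 1)

def payoff_dominant(v):
    # (S, S) payoff-dominates (R, R): both players weakly prefer it, at least one strictly;
    # with both payoffs equal to v this reduces to v > 1
    return v > 1

def risk_dominant(v):
    # product of deviation losses at (S, S) vs at (R, R):
    # (v - 1) * (v - 1) > (1 - 0) * (1 - 0), i.e. the (v - 1) > (1 - 0) condition, so v > 2
    return (v - 1) * (v - 1) > (1 - 0) * (1 - 0)

print(payoff_dominant(1.5), risk_dominant(1.5))  # True False
print(payoff_dominant(3), risk_dominant(3))      # True True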

Mixed Strategy Nash Equilibrium
- assign probabilities to pure strategies
example: Nadal (N) plays DL with prob p and CC with prob 1-p; Federer (F) plays DL with prob q and CC with prob 1-q

                         F
              |  DL (q)      CC (1-q)
   -------------------------------------
   (p)   DL   |  50, 50      80, 20
N  (1-p) CC   |  90, 10      20, 80

if Nadal chooses DL with prob p and CC with prob 1-p, Federer's expected payoffs from DL and CC are equal when
    50p + 10(1-p) = 20p + 80(1-p)  =>  p = 0.7
    if p > 0.7, q = 1; if p < 0.7, q = 0; if p = 0.7, Federer is indifferent over q

if Federer chooses DL with prob q and CC with prob 1-q, Nadal's expected payoffs from DL and CC are equal when
    50q + 80(1-q) = 90q + 20(1-q)  =>  q = 0.6
    if q < 0.6, p = 1; if q > 0.6, p = 0; if q = 0.6, Nadal is indifferent over p

NE: (p*, 1-p*) = (0.7, 0.3)
      (q*, 1-q*) = (0.6, 0.4)
implications of mixed strategy NE
- each player should mix his pure strategies so that the other player is indifferent among all of his pure strategies:
  Nadal chooses (p, 1-p) so that Federer's U(DL) = U(CC)
  Federer chooses (q, 1-q) so that Nadal's U(DL) = U(CC)
- assign zero probability to dominated pure strategies
- randomise just right, so as to avoid being outguessed by the opponent
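The indifference conditions above can also be solved symbolically; the sketch below uses sympy (an outside tool, not part of the notes) and reproduces p* = 0.7 and q* = 0.6:

from sympy import Eq, solve, symbols

p, q = symbols("p q")

# Federer is indifferent between DL and CC given Nadal's mix (p, 1-p)
p_star = solve(Eq(50*p + 10*(1 - p), 20*p + 80*(1 - p)), p)[0]
# Nadal is indifferent between DL and CC given Federer's mix (q, 1-q)
q_star = solve(Eq(50*q + 80*(1 - q), 90*q + 20*(1 - q)), q)[0]

print(p_star, q_star)  # 7/10 3/5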

Oligopoly Games

                        | compete on quantity  | compete on price
---------------------------------------------------------------------
simultaneous (static)   | Cournot              | Bertrand
sequential              | Cournot-Stackelberg  | Bertrand-Stackelberg

- quantities are strategic substitutes: if q1 goes up, q2 goes down
- under collusion, total output is lower than the NE output
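An illustrative Cournot sketch in Python, assuming linear demand P = a - (q1 + q2) and constant marginal cost c; these functional forms and numbers are my assumptions, not from the notes. It also checks that total collusive output is below total NE output:

a, c = 100.0, 10.0  # assumed demand intercept and marginal cost (illustrative numbers)

def best_response(q_other):
    # firm i maximises (a - q_i - q_other - c) * q_i  =>  q_i = (a - c - q_other) / 2
    return max(0.0, (a - c - q_other) / 2)

# iterating best responses converges to the Cournot NE: each firm produces (a - c) / 3 = 30
q1 = q2 = 0.0
for _ in range(200):
    q1, q2 = best_response(q2), best_response(q1)

total_nash = q1 + q2           # 60
total_collusion = (a - c) / 2  # 45: joint-profit (monopoly) output
print(round(q1, 2), round(q2, 2), round(total_nash, 2), total_collusion)
# collusive total output (45) is below NE total output (60)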

Application of simultaneous games
Tragedy of the commons
(figure: horizontal axis = % of car commuters, vertical axis = payoff for commuters)
NE: (q, 1-q)
- q% commute by car and (1-q)% by bus
- the NE is still not socially efficient
- everyone commuting by bus would still be Pareto efficient

Sequential Games (Extensive Form Game)
- dynamic setting
- backward induction: for finite dynamic games, start from the last stage of the game and work backwards; not applicable to infinite games
- subgame perfect NE: rules out NEs that rely on non-credible threats; works for both finite and infinite games
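A minimal Python sketch of backward induction over a small game tree; the tree and payoffs are made up purely for illustration:

def backward_induction(node):
    """Leaves are payoff tuples (player 0's payoff, player 1's payoff);
    decision nodes are dicts {"player": i, "moves": {action: child}}."""
    if not isinstance(node, dict):  # leaf
        return node, []
    best = None
    for action, child in node["moves"].items():
        payoffs, path = backward_induction(child)
        if best is None or payoffs[node["player"]] > best[0][node["player"]]:
            best = (payoffs, [action] + path)
    return best

# made-up tree: player 0 moves first, then player 1
tree = {"player": 0, "moves": {
    "L": {"player": 1, "moves": {"l": (2, 1), "r": (0, 0)}},
    "R": {"player": 1, "moves": {"l": (3, 0), "r": (1, 2)}},
}}
print(backward_induction(tree))  # ((2, 1), ['L', 'l'])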

Sequential Bargaining
δ > 50%: first mover adv        ; agreement reached in first round of bargaining
δ < 50%: second mover adv
0 < δ < 100%: δ is the time value

Subgame Perfect Equilibrium
Example 1
SPNE outcome: (0, 4)
SPNE reasoning:
If B chose R, A would then choose the (5, -1) branch, so B would choose L instead; therefore A will not choose R. A chooses L, B chooses R, and (0, 4) is the SPNE outcome.
Strategic Moves
To solve the empty-promise problem, make 5 worse than 4 (e.g., cut the (5, -1) branch) so that 4 becomes better than 5.

Example 2
In game theory, having fewer options may be better, because by removing options you can manipulate the other player's choices so that the outcome is better for you.

Strategic Moves
to influence opponents' expectations about your actions
to get around the prisoner's dilemma
introduced by Thomas Schelling
- cross shareholding
- MFC (most-favoured-customer) clause (mutual adoption, 2-period model)
- price matching guarantee policy (mutual adoption)
- entry deterrence:
   -- side payments, mergers, building a reputation, investing in extra capacity

chicken game
- two gangsters race their cars toward each other; the first one to chicken out (avoid) loses

                 Gang B
           | Straight        Avoid
---------------------------------------
Gang A
  Straight | -100, -100      10, -2
  Avoid    |   -2, 10         0, 0

- you don't know how to secure the equilibrium that is favorable to you, because there are two NEs
- so you use strategic moves to gain an advantage
- commitment: play aggressively and scare your opponents

Entry deterrence
- incumbent facing a potential entrant
- entrant moves first, incumbent moves later
- the latter (the incumbent) can behave strategically to deter entry

                Entrant
           | Enter        Stay out
---------------------------------------
Incumbent
    E      | 100, 20      200, 0
    S      |  70, -10     130, 0

the NE is (100, 20): E is the incumbent's dominant strategy, and the potential entrant will enter

a few options:
- side payment (illegal?)
- merger (anti-competitive?)
- build a reputation for being irrational; manipulate the rival's choices to your advantage
  (the incumbent can increase 70 to above 100, or decrease 100 to below 70, so that fighting entry becomes credible and the entrant prefers to stay out; see the sketch below)
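An illustrative Python sketch of the sequential entry game, reading E as 'accommodate' and S as 'fight' (my interpretation of the labels). Backward induction reproduces the (100, 20) outcome, and a commitment that raises the incumbent's payoff from fighting above 100 deters entry:

# payoffs are (incumbent, entrant), from the matrix above
payoffs = {("Enter", "E"): (100, 20), ("Enter", "S"): (70, -10),
           ("Stay out", "E"): (200, 0), ("Stay out", "S"): (130, 0)}

def solve_entry_game(payoffs):
    outcome = {}
    for entry in ("Enter", "Stay out"):
        # the incumbent moves second and picks the response maximising its own payoff
        response = max(("E", "S"), key=lambda r: payoffs[(entry, r)][0])
        outcome[entry] = (response, payoffs[(entry, response)])
    # the entrant moves first, anticipating the incumbent's response
    entry = max(outcome, key=lambda e: outcome[e][1][1])
    return entry, outcome[entry]

print(solve_entry_game(payoffs))
# ('Enter', ('E', (100, 20)))  -> the SPNE outcome (100, 20)

# commitment: raise the incumbent's payoff from fighting entry above 100
# (e.g. by investing in extra capacity), so that fighting becomes credible
committed = dict(payoffs)
committed[("Enter", "S")] = (110, -10)
print(solve_entry_game(committed))
# ('Stay out', ('E', (200, 0)))  -> entry is deterred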
In the sequential version of the entry game (entrant moves first), the SPNE outcome is also (100, 20).

In the sequential chicken game, similarly, use strategic moves to show commitment and gain an advantage.

Fudenberg-Tirole Taxonomy
(how your rival reacts to your firm's commitment)

                         | you commit tough         | you commit soft
--------------------------------------------------------------------------
strategic complements    | your rival is tough too  | your rival is soft too
                         | (puppy dog)              | (fat cat)
strategic substitutes    | your rival is soft       | your rival is tough
                         | (top dog)                | (lean & hungry look)


How to apply
step 1: calculate your profit as a function of what the other players might do
  π_you = f(others' actions)
step 2: guess your competitors' profits as a function of what you might do
  π_others = f(your actions)
step 3: can you legally cooperate?
  if yes, use cooperative game theory
  if no, use non-cooperative game theory
step 4: create the game's payoffs
step 5: pick the game strategies
step 6: consider strategic moves
step 7: make the moves
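A rough Python sketch of steps 1, 2, 4 and 5 for a two-firm pricing game; the prices, demand and profit functions are made-up assumptions, purely for illustration (steps 3, 6 and 7 are judgment calls rather than computation):

prices = (8, 10)  # each firm chooses a low or a high price (made-up numbers)

def my_profit(my_price, their_price):        # step 1: pi_you = f(your action, others' actions)
    demand = 20 - 1.5 * my_price + 0.5 * their_price  # assumed linear demand
    return my_price * demand

def their_profit(their_price, my_price):     # step 2: guess pi_others = f(your actions)
    return my_profit(their_price, my_price)  # assume a symmetric rival

# step 4: create the game's payoffs (row = you, column = rival)
payoff = {(p1, p2): (my_profit(p1, p2), their_profit(p2, p1))
          for p1 in prices for p2 in prices}

# step 5: pick strategies, here by finding the mutual-best-response (NE) prices
nash = [(p1, p2) for p1 in prices for p2 in prices
        if payoff[(p1, p2)][0] == max(payoff[(x, p2)][0] for x in prices)
        and payoff[(p1, p2)][1] == max(payoff[(p1, x)][1] for x in prices)]
print(nash)  # [(8, 8)]: both firms price low, even though (10, 10) would earn each firm more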

(to be continued.)