Common Learning

Martin Cripps, Jeff Ely, George Mailath, Larry Samuelson
UCL, Northwestern, Yale, Wisconsin

November 2006

Outline: Introduction · Common Learning · Independence and Correlation · The Result · Counterexample

A Reputation Problem

Consider a game in which you are trying to establish a reputation, but the other players do not directly observe your actions: there is a private signal correlated with the actions you take. You do not see how the players in the game revise their beliefs ⇒ reputation is private. I know that I eventually lose my reputation (my opponents learn my type). Is there a time at which all players in the game know, and know that everyone else knows, ..., that I have lost my reputation?

A Folk Theorem Problem

Consider a repeated game in which the payoff matrix is unknown. Each player gets private signals about the game from the payoffs he or she receives, so the players learn which payoff matrix they are playing; but they do not see how much the other players have learnt. Is there a time at which all players in the game know, and know that everyone else knows, ..., which game they are playing?

The Research Question (Informally)

Model description: there is an unknown parameter with two possible values, θ or θ′, and two agents. As time passes, the agents observe parameter-dependent signals. Suppose the signals are informative, so that the agents asymptotically learn the parameter, and all of the above is common knowledge. Must there be a time at which the parameter is approximate common knowledge?

The Model (Notation)

Time t = 0, 1, 2, ....
Parameter (or model) θ or θ′, with prior probabilities (p^θ, p^θ′).
Two agents. The stochastic process x^θ ≡ {x_t^θ}_{t=0}^∞ is the signal process conditional on θ.
Set of states of the world: Ω = {θ, θ′} × (I × J)^∞.
Private signals x_t^θ ≡ (i_t, j_t) ∈ I × J (finite sets); agent 1 observes i_t, agent 2 observes j_t.
Let h_ℓt denote agent ℓ's history of signals before period t.
Beliefs: p_ℓt(θ) ≡ E[1_θ | h_ℓt].
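
To fix ideas, here is a minimal Python sketch of the setup. The joint signal distribution π^θ over I × J = {0,1} × {0,1} is hypothetical (the numbers are illustrative, not from the paper); each agent updates p_ℓt(θ) by Bayes' rule on the marginal likelihood of her own signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical joint signal distributions over (i, j); illustrative numbers only.
pi = {
    "theta":  np.array([[0.5, 0.2],
                        [0.1, 0.2]]),
    "theta'": np.array([[0.2, 0.1],
                        [0.2, 0.5]]),
}
prior_theta = 0.5

def simulate_beliefs(true_param, T=200):
    """Draw i.i.d. signal profiles (i_t, j_t) under true_param and track each
    agent's posterior p_lt(theta) = E[1_theta | h_lt] via likelihood ratios."""
    p1 = p2 = prior_theta
    for _ in range(T):
        idx = rng.choice(4, p=pi[true_param].ravel())
        i, j = divmod(idx, 2)
        # Each agent updates on the marginal likelihood of her own signal only.
        l1 = pi["theta"][i, :].sum() / pi["theta'"][i, :].sum()
        l2 = pi["theta"][:, j].sum() / pi["theta'"][:, j].sum()
        p1 = p1 * l1 / (p1 * l1 + 1 - p1)
        p2 = p2 * l2 / (p2 * l2 + 1 - p2)
    return p1, p2

print(simulate_beliefs("theta"))  # both posteriors should be near 1
```

Under these illustrative numbers both agents individually learn θ; the question of the talk is whether this learning is common.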

Objective

Suppose p_ℓt(θ) → 1 P^θ-almost surely; that is, beliefs converge to knowledge of the parameter. What extra is necessary for common learning?

Common Belief of Event F

[Figure sequence: inside the set of states of the world, an event F is drawn. B¹_p(F) is the set of states where agent 1 attaches probability at least p to F, and B²_p(F) the set where agent 2 does. Their intersection is B_p(F), the event that F is p-believed. Iterating, B_p(B_p(F)) is the event that the p-belief of F is itself p-believed, and so on: F is common p-belief where all iterates hold.]

Introduction

Common Learning

Indep and Correlation

The Result

Counterexample

Common p-Belief (Approximate Common Knowledge)

Given any event F ⊂ Ω, E[1_F | h_ℓt] is the probability agent ℓ attaches to the event F given her information at time t. Define the set of states of the world where agent ℓ attaches at least probability q to F:

B^q_ℓt(F) ≡ { ω ∈ Ω : E[1_F | h_ℓt] ≥ q }.

Define the set of states where F is q-believed (by everyone):

B^q_t(F) ≡ ∩_ℓ B^q_ℓt(F).

Common q-Belief

F is common q-belief at date t on the event

C^q_t(F) ≡ B^q_t(F) ∩ B^q_t(B^q_t(F)) ∩ ... ∩ [B^q_t]^k(F) ∩ ...    (1)

On C^q_t(F), the event F is q-believed, this event is itself q-believed, and so on. We write C^q_t(θ) for the event that θ is common q-belief at time t.

Common Learning

Definition. The agents commonly learn parameter θ if for each q < 1 there exists a T such that for all t > T, P^θ(C^q_t(θ)) ≥ q.

Common learning of θ implies individual learning, since C^q_t(θ) ⊂ B^q_ℓt(θ): there is a time at which all levels of belief are above q, not just the probability each player attaches to the parameter.

How to Recognize Common p-Belief Events

F is q-evident if all players attach at least probability q to it whenever it is true; that is, F ⊂ B^q_t(F).

[Figure: the event F contained in B_p(F).]

Monderer and Samet (1989): θ is common q-belief on the event F at time t iff F is q-evident at time t and F ⊂ B^q_t({θ}).
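
To make these operators concrete, here is a minimal Python sketch on a hypothetical six-state example with partitional information (the prior and partitions are illustrative, not from the paper). It computes B^q_ℓ(F) cell by cell and finds the common q-belief event by the fixed-point characterisation above: iterate E ↦ E ∩ B^q(E) from E = B^q(F).

```python
import numpy as np

# Hypothetical example: 6 states with a common prior, each agent's information
# given by a partition of the state space. Numbers are illustrative only.
prior = np.full(6, 1 / 6)
partition1 = [{0, 1}, {2, 3}, {4, 5}]    # agent 1's information sets
partition2 = [{0}, {1, 2}, {3, 4}, {5}]  # agent 2's information sets

def q_believe(event, partition, q):
    """B^q_l(F): union of the agent's cells on which Pr(F | cell) >= q."""
    out = set()
    for cell in partition:
        if sum(prior[w] for w in cell & event) >= q * sum(prior[w] for w in cell):
            out |= cell
    return out

def common_q_belief(event, q):
    """C^q(F): iterate E -> E ∩ B^q(E) from E = B^q(F) to a fixed point,
    the largest q-evident event inside B^q(F) (Monderer-Samet)."""
    E = q_believe(event, partition1, q) & q_believe(event, partition2, q)
    while True:
        nxt = E & q_believe(E, partition1, q) & q_believe(E, partition2, q)
        if nxt == E:
            return E
        E = nxt

print(common_q_belief({0, 1, 2}, q=0.5))  # {0, 1, 2} for these numbers
```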

Recipe to Establish Common p-Belief

(1) Find an event F that occurs with high probability under parameter θ.
(2) Show that at every state of the world in F, every player believes F has occurred with probability at least p (so F is p-evident).

Perfectly Correlated Signals

If the agents observe the same information, then p_1t(θ) = p_2t(θ): they always have the same beliefs, and private learning immediately implies common learning.

Independence Conditional on θ

[Figure sequence: individual learning means that, given θ, the event that agent A's posterior on θ exceeds √q eventually has probability above √q, and likewise for agent B. With signals independent conditional on θ, these two events are independent given θ. On their intersection each agent attaches probability at least √q to θ and, knowing only that the other agent learns given θ, at least √q to the other agent's event, hence probability at least √q × √q = q to the joint event, which is therefore q-evident.]

When the Signals Do Not Satisfy These Conditions

In general, beliefs will not move in the same direction: it is possible that player 1 revises her belief in θ upwards while being almost certain that player 2 revised his belief in the opposite direction. There is then a counterexample to common learning.

Counterexample (deferred until later): the structure has i.i.d. signals and can be interpreted as a repeated Rubinstein email game.

The Result

Agent 1 has a finite private signal set I; agent 2 has a finite private signal set J. Under parameter θ, the signal profile (i_t, j_t) in period t has probability π^θ_ij, and signal profiles are drawn i.i.d. from π^θ in each period.

Result. Under these conditions, if the agents privately learn, then they commonly learn.

Individual Learning

Let φ_θ(i) = Σ_j π^θ_ij be agent 1's marginal signal distribution and ψ_θ(j) = Σ_i π^θ_ij agent 2's marginal signal distribution. The following likelihood-ratio condition is necessary and sufficient for individual learning: there exist a signal i and a signal j such that

φ_θ(i)/φ_θ′(i) ≠ 1  and  ψ_θ(j)/ψ_θ′(j) ≠ 1.
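
A small check of this condition in Python, reusing the illustrative joint distributions from the earlier sketch (hypothetical numbers):

```python
import numpy as np

# The same illustrative joint distributions as above (hypothetical numbers).
pi_theta  = np.array([[0.5, 0.2],
                      [0.1, 0.2]])
pi_thetap = np.array([[0.2, 0.1],
                      [0.2, 0.5]])

# Marginals: phi(i) = sum_j pi[i, j] (agent 1), psi(j) = sum_i pi[i, j] (agent 2).
phi, phi_p = pi_theta.sum(axis=1), pi_thetap.sum(axis=1)
psi, psi_p = pi_theta.sum(axis=0), pi_thetap.sum(axis=0)

# Individual learning: for each agent, some signal's likelihood ratio differs from 1.
print(np.any(~np.isclose(phi / phi_p, 1.0)))  # True: agent 1 learns
print(np.any(~np.isclose(psi / psi_p, 1.0)))  # True: agent 2 learns
```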

The Space of Frequencies / Empirical Measures

[Figure: the space of 1's signal frequencies φ, containing the true distribution φ_θ, next to the space of 2's frequencies ψ, containing ψ_θ.]

The Event

[Figure: the event requires that the observed frequencies lie close to the true probabilities, i.e. in small neighbourhoods of φ_θ and ψ_θ respectively.]

Frequencies Are Enough: Formally

φ̂_t is the empirical measure of 1's signals at time t; ψ̂_t is the empirical measure of 2's signals at time t. The event we aim to show is q-evident is

D_δ ≡ {θ} ∩ {‖φ̂_t − φ_θ‖_TV < δ} ∩ {‖ψ̂_t − ψ_θ‖_TV < δ},

where ‖·‖_TV is the total variation norm.

θ is Believed on This Event

The log-likelihood ratio is

Λ_1t ≡ log [ P_t(θ | h_1t) / (1 − P_t(θ | h_1t)) · (1 − p^θ) / p^θ ].

Result. The rate at which Λ_1t → ∞ is uniformly bounded when frequencies are close to the true probabilities:

|Λ_1t − t·H(φ_θ | φ_θ′)| ≤ t ‖φ̂_t − φ_θ‖_TV · b,

where H(φ_θ | φ_θ′) > 0 is the relative entropy of 1's signals under the two parameters:

H(φ_θ | φ_θ′) ≡ E^θ[ log( φ_θ(i_t) / φ_θ′(i_t) ) ] = Σ_i φ_θ(i) log( φ_θ(i) / φ_θ′(i) ).
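
A short numerical check of this result, with hypothetical marginals for agent 1: the log-likelihood ratio computed from empirical counts stays close to t·H(φ_θ | φ_θ′) when the empirical frequencies are close to φ_θ.

```python
import numpy as np

# Hypothetical marginals for agent 1 under theta and theta'.
phi_theta  = np.array([0.7, 0.3])
phi_thetap = np.array([0.3, 0.7])

def relative_entropy(p, q):
    """H(p | q) = sum_i p(i) log(p(i)/q(i)): the expected per-period drift of
    the log-likelihood ratio when signals are drawn from p."""
    return float(np.sum(p * np.log(p / q)))

def llr(counts):
    """Lambda_1t net of prior odds: sum_i counts_i * log(phi_theta(i)/phi_theta'(i))."""
    return float(np.sum(counts * np.log(phi_theta / phi_thetap)))

t = 100
counts = np.array([68, 32])   # hypothetical empirical counts at time t
print(llr(counts))                                   # ~ 30.5
print(t * relative_entropy(phi_theta, phi_thetap))   # ~ 33.9: close, as the bound says
```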

Summary

[Figure: when the empirical frequencies lie in these neighbourhoods of φ_θ and ψ_θ, the players attach high probability to θ; and under θ, the frequencies lie in these neighbourhoods with high probability.]

Final Problem

If 1's frequencies are in this neighbourhood, what does she believe about 2's frequencies? What probability does she attach to both frequencies being close to the truth?

Predicting Your Opponent's Frequencies

Definition. Let M_1 be the (Markov) I × J matrix with elements m^1_ij ≡ Pr(j | i), and M_2 the (Markov) J × I matrix with elements m^2_ji ≡ Pr(i | j). Player 1's best estimate of 2's frequencies is φ̂_tM_1; player 2's best estimate of 1's frequencies is ψ̂_tM_2.
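
A sketch of these prediction matrices, again with the illustrative π^θ from earlier (hypothetical numbers). Note that φ_θM_1 = ψ_θ and ψ_θM_2 = φ_θ, which the code checks.

```python
import numpy as np

# Illustrative joint distribution pi^theta over (i, j); hypothetical numbers.
pi = np.array([[0.5, 0.2],
               [0.1, 0.2]])
phi = pi.sum(axis=1)        # agent 1's marginal over i
psi = pi.sum(axis=0)        # agent 2's marginal over j

M1 = pi / phi[:, None]      # M1[i, j] = Pr(j | i): rows sum to 1
M2 = (pi / psi[None, :]).T  # M2[j, i] = Pr(i | j): rows sum to 1

# Player 1's best estimate of 2's frequencies, given her empirical phi_hat:
phi_hat = np.array([0.72, 0.28])
print(phi_hat @ M1)

# The true marginals map to each other:
assert np.allclose(phi @ M1, psi)
assert np.allclose(psi @ M2, phi)
```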

Prediction

[Figure: if 1's frequencies φ lie in the neighbourhood of φ_θ, she predicts 2's frequencies to be φM_1, a point in the space of 2's frequencies.]

Are Players Predicting Accurately?

Recall that player 1's best estimate of 2's frequencies is φ̂_tM_1, and player 2's best estimate of 1's frequencies is ψ̂_tM_2. The accuracy of these predictions (as t → ∞) is very good indeed. In fact, large deviations arguments show that

Pr^θ( ‖φ̂_tM_1 − ψ̂_t‖ < ε | h_1t ) > 1 − exp(−tK).

Prediction

Do 1's predictions lie in our target set? If not, then 1 cannot attach high probability to common belief.

[Figure: frequencies φ with ‖φ − φ_θ‖ small map to predictions φM_1 in the space of 2's frequencies, with ‖φM_1 − φ_θM_1‖ ≤ ‖φ − φ_θ‖ ‖M_1‖.]

Prediction

Notice that φ_θM_1 = ψ_θ. Hence

‖prediction − ψ_θ‖ = ‖φ̂_tM_1 − φ_θM_1‖ ≤ ‖φ̂_t − φ_θ‖ ‖M_1‖.

If M_1 were a contraction, 1's predictions would be closer to ψ_θ than 1's observations are to φ_θ.

Prediction

‖prediction − ψ_θ‖ = ‖φ̂_tM_1 − φ_θM_1‖ ≤ ‖φ̂_t − φ_θ‖ ‖M_1‖.

Unfortunately, all we know in general is ‖M_1‖ ≤ 1, so this bound need not shrink the error.

Prediction

Why not trim the set containing 1's frequencies, so that its image under M_1 lies strictly inside 2's set?

[Figure: a shrunken neighbourhood of φ_θ whose image φM_1 sits strictly inside the target neighbourhood of ψ_θ.]

Prediction

Then, even when 1 makes a small prediction error, she can still be sure that 2's observations are in the target set.

2's Prediction

There is now a problem: we need to ensure that 2's observations make 2's predictions of 1's frequencies lie in this smaller set. 2's predictions are given by a different linear map: prediction = ψ̂_tM_2.

[Figure: 2's frequencies near ψ_θ map under M_2 into the space of 1's frequencies.]

2's Prediction

Hence we consider the composed map from 1's frequencies to 2's predictions and back again. This map has φ_θ as a fixed point: φ_θM_1M_2 = φ_θ.

[Figure: φ near φ_θ maps to φM_1 near ψ_θ and back to φM_1M_2 near φ_θ.]

2's Prediction

Now restrict the domain even further, so that 2's predictions lie in 1's set.

[Figure: the twice-iterated image φM_1M_2 inside the shrunken neighbourhood of φ_θ.]

Can We Stop Restricting Sets? No!

We do not know that φ̂_tM_1M_2 is strictly inside our target set, so we do not know whether 2 will predict that 1 has beliefs in this set.

Can We Stop Restricting Sets?

Things look bad: it seems we will have to restrict the sets forever, and the limit will be empty.

However, φ ↦ φM_1M_2 maps probabilities to probabilities: it is a Markov transition on the space of φ's. (For this talk only) assume it is an irreducible Markov chain; then there exist a t and a λ < 1 such that

‖φ(M_1M_2)^t − φ_θ(M_1M_2)^t‖ ≤ λ^t ‖φ − φ_θ‖.

After enough iterations the map is a contraction!
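
A numerical illustration with the hypothetical π^θ used earlier: build the round-trip transition M_1M_2 and measure how the total-variation distance to φ_θ contracts under iteration.

```python
import numpy as np

pi = np.array([[0.5, 0.2],      # illustrative joint distribution, as before
               [0.1, 0.2]])
phi = pi.sum(axis=1)
psi = pi.sum(axis=0)
M1 = pi / phi[:, None]          # Pr(j | i)
M2 = (pi / psi[None, :]).T      # Pr(i | j)
P = M1 @ M2                     # Markov transition on the space of phi's; phi @ P == phi

def tv(a, b):
    return 0.5 * float(np.abs(a - b).sum())

phi_hat = np.array([0.9, 0.1])  # a hypothetical empirical frequency
for t in (1, 2, 5, 10):
    Pt = np.linalg.matrix_power(P, t)
    # contraction factor after t round trips:
    print(t, tv(phi_hat @ Pt, phi @ Pt) / tv(phi_hat, phi))
```

For this irreducible two-state example the factor falls geometrically, in line with the λ^t bound.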

The Picture If One Iteration Is Enough

φ̂_tM_1M_2 is strictly inside the target set, so 2 will predict that 1 has beliefs in this set even if there are errors.

[Figure: the image φM_1M_2 strictly inside the neighbourhood of φ_θ.]

If One Iteration Is Enough

φ̂_tM_1M_2 is strictly inside the target set. We can then determine a set E of frequencies such that: if my frequencies are in E, then with high probability my opponent's frequencies are in E, and I attach high probability to θ. Hence E is p-evident and is a subset of the event that θ occurs. We are done!

Counterexample

Countably infinite set of i.i.d. signals (a repeated Rubinstein email game), with θ ∈ {θ′, θ″} and ε ∈ (0, 1). Each period's signal pair is drawn as follows:

Probability            Player-1 signal    Player-2 signal
θ                      0                  0
ε(1 − θ)               1                  0
(1 − ε)ε(1 − θ)        1                  1
(1 − ε)²ε(1 − θ)       2                  1
(1 − ε)³ε(1 − θ)       2                  2
(1 − ε)⁴ε(1 − θ)       3                  2
...                    ...                ...

Counterexample: Signal Structure

With player-1 signals on the rows and player-2 signals on the columns, the joint distribution is

⎡ θ          0              0              0    ... ⎤
⎢ ε(1−θ)     (1−ε)ε(1−θ)    0              0    ... ⎥
⎢ 0          (1−ε)²ε(1−θ)   (1−ε)³ε(1−θ)   0    ... ⎥
⎣ ...        ...            ...            ...  ⋱   ⎦

If we compute M_1M_2, this gives a random walk that can move up, move down, or stay put. It is not a contraction: started at state T, the chain takes arbitrarily long to converge to its steady state as T → ∞.
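
A sketch of this construction, truncated at a maximum signal and renormalised (the truncation is my device for a finite computation; the distribution itself follows the table above). It builds M_1M_2 and shows that the induced random walk started high up is still far from its stationary distribution after many steps.

```python
import numpy as np

def email_joint(theta, eps, n_max=200):
    """Truncated joint signal distribution of the repeated-email structure:
    Pr(0,0) = theta, then weights (1-eps)^k * eps * (1-theta), k = 0, 1, 2, ...,
    alternating down the pairs (s, s-1), (s, s) as in the table above."""
    P = np.zeros((n_max + 1, n_max + 1))
    P[0, 0] = theta
    k = 0
    for s in range(1, n_max + 1):
        P[s, s - 1] = (1 - eps) ** k * eps * (1 - theta)
        k += 1
        P[s, s] = (1 - eps) ** k * eps * (1 - theta)
        k += 1
    return P / P.sum()          # renormalise the (tiny) truncated mass

P = email_joint(theta=0.3, eps=0.2)
phi, psi = P.sum(axis=1), P.sum(axis=0)
M1 = P / phi[:, None]           # Pr(player-2 signal | player-1 signal)
M2 = (P / psi[None, :]).T       # Pr(player-1 signal | player-2 signal)
W = M1 @ M2                     # random walk on 1's signals: up, down, or stay

# Total-variation distance to the stationary distribution phi after 25 steps,
# starting from a low and from a high signal: mixing from high states is slow.
for start in (5, 50):
    e = np.zeros(len(phi))
    e[start] = 1.0
    print(start, 0.5 * np.abs(e @ np.linalg.matrix_power(W, 25) - phi).sum())
```

The walk moves at most one state per round trip, so from state T it needs on the order of T iterations to reach the mass near zero: no uniform contraction, and common learning fails.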

How the Counterexample Works

An iterated belief argument ⇒ agent 1's nth-order belief attaches positive probability to agent 2 seeing a very large signal. Because such a signal is very rare, it is also nth-order belief that 2 has never seen a zero signal. If 2 has never seen a zero signal, he must attach high probability to the state θ′ (taking θ′ < θ″, since the zero signal is more likely the larger the parameter). Contradiction: it is nth-order belief that 2 attaches high probability to state θ′ even in state θ″!

Step 2

If 2 has seen a k − 1, then he attaches positive probability to 1 believing he has seen a k. Iterating: if 2 has seen an m, then he believes that 1 believes ... (k − m times) ... he has seen a k. Second property: if player 2 has seen at least ℓ distinct signals of at least k and at least ℓ distinct signals of at least k′ > k + 2n, then 2 believes that 1 believes (n times) that he has seen at least ℓ distinct signals of at least k + n and at least ℓ distinct signals greater than k′ − n.
