Honest truth-seeking agents with common priors (with common knowledge of each other's beliefs) should not knowingly disagree
Why is this relevant? From one side it seems obvious: if two agents have the same information and follow the same values and logic, they will come to the same conclusions. From the other side, after most debates agreement almost never happens.
A rational person's line of updating should be dynamic and unpredictable. A sign of weak Bayesian updating skills is when a person updates gradually, step by step, as if anchored to some position. A predictable political conversation is a typical example: people shift their political positions only gradually, if at all.
GPT: The statement "Honest truth-seeking agents with common priors should not knowingly disagree" is rooted in Aumann's Agreement Theorem. The theorem, named after Robert Aumann, states that two individuals acting rationally (in a certain precise sense) and with common knowledge of each other's beliefs cannot agree to disagree.
The essence of the theorem is this: If two agents have the same prior beliefs and they update their beliefs using Bayes' rule (a fundamental concept in probability theory and statistics), then after observing the same public evidence, they should have the same posterior beliefs. If they know about each other's evidence and they are honest about their beliefs, then they cannot knowingly hold different views.
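The core of this claim can be sketched in a few lines of Python. The numbers below (a 0.5 prior and the two likelihoods) are illustrative assumptions, not part of the theorem itself; the point is only that the same prior, the same likelihoods, and the same evidence mechanically produce the same posterior.

```python
# Two agents with a common prior update on the same evidence via Bayes' rule.
# The specific probabilities are made-up numbers for illustration.

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior P(H | E) from the prior P(H) and the two likelihoods."""
    numerator = p_evidence_given_h * prior
    marginal = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / marginal

common_prior = 0.5       # both agents start with P(H) = 0.5
likelihood_h = 0.8       # P(E | H), agreed by both agents
likelihood_not_h = 0.3   # P(E | not H), agreed by both agents

posterior_a = bayes_update(common_prior, likelihood_h, likelihood_not_h)
posterior_b = bayes_update(common_prior, likelihood_h, likelihood_not_h)

assert posterior_a == posterior_b  # same inputs -> same posterior, ~0.727
```

Disagreement, then, has to come from somewhere else: different priors, different evidence, or different (i.e., non-Bayesian) updating.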
Let's break it down a little:
- Honest truth-seeking agents: These are rational agents that genuinely want to know the truth. They are not influenced by emotions, biases, or external pressures in forming their beliefs.
- With common priors: This refers to the initial set of beliefs or probabilities that the agents hold about an event or outcome before receiving any new evidence.
- Should not knowingly disagree: If both agents are aware of all the evidence and know each other's beliefs, they should not continue to hold different beliefs.
Aumann's Agreement Theorem has profound implications for epistemology and the theory of rational choice. In practice, however, humans are not always perfect Bayesian updaters and have many cognitive biases that might prevent them from reaching an agreement, even with common priors and evidence.
Me: What's interesting about this theorem? If two people have the same knowledge and the same experiences, isn't it obvious they would agree?
GPT: Indeed, Aumann's Agreement Theorem may initially seem intuitively obvious: if two people have the same knowledge and experiences, why wouldn't they agree? However, the significance and interest of the theorem don't lie in this simplistic scenario. Instead, it's about the process of updating beliefs and the implications of common knowledge.
A few points to consider:
- Common Priors, Different Evidence: The theorem is most intriguing when two agents have the same initial beliefs (priors) but then observe different pieces of evidence. Even in this case, once they share and acknowledge each other's evidence, they should eventually come to the same conclusion. This might not be immediately obvious without the theorem.
- The Concept of Common Knowledge: The theorem involves not just shared knowledge, but common knowledge. It's not enough for A to know X and for B to know X. A must also know that B knows X, B must know that A knows X, A must know that B knows that A knows X, and so on. This recursive depth to common knowledge makes the theorem non-trivial.
- Real-world Implications: Disagreements persist everywhere, from scientific debates to market predictions. This theorem challenges the notion of "agreeing to disagree." If two rational agents have common knowledge of each other's beliefs and still disagree, at least one of them must not be updating their beliefs correctly.
- Human Irrationality: Humans often don't update their beliefs rationally. Cognitive biases, emotions, and other factors play a role. The theorem provides a theoretical benchmark against which real-world behavior can be compared, highlighting our deviations from ideal rationality.
- Broader Implications: The theorem has implications for fields beyond just epistemology. For instance, in economics, the idea that market participants can't "agree to disagree" about the value of an asset has implications for theories about market efficiency.
While the theorem's conclusion might seem straightforward at first glance, its assumptions, implications, and the nuances of its application are what make it a topic of interest and debate among scholars.
Me: On the point #2 "The Concept of Common Knowledge: The theorem involves not just shared knowledge, but common knowledge. It's not enough for A to know X and for B to know X. A must also know that B knows X, B must know that A knows X, A must know that B knows that A knows X, and so on. This recursive depth to common knowledge makes the theorem non-trivial."
Why does it need this depth of common knowledge? Presumably, even if A and B didn't know each other, they would come to the same conclusions given the same knowledge. Right?
GPT: You've touched on a subtlety that's fundamental to understanding Aumann's Agreement Theorem.
At first glance, it might seem that if two agents, A and B, have the same evidence, they should indeed come to the same conclusions, regardless of whether they are aware of each other's beliefs. However, the depth of common knowledge is crucial for the theorem in situations where A and B have access to different pieces of evidence.
Let's work through this:
- Different Evidence: Suppose A and B start with the same prior beliefs, but then each observes a different piece of evidence. They update their beliefs based on their individual evidence. At this point, they might disagree.
- Sharing Evidence Isn't Enough: Now, suppose A tells B about her evidence, and B tells A about his evidence. They now have the same evidence, right? But simply sharing evidence might not be enough. Why? Because while A knows B's evidence, A doesn't necessarily know how much weight B gives to that evidence or how B interprets that evidence in the light of other information he might have.
- Depth of Common Knowledge: The recursive depth of common knowledge ensures that both agents fully appreciate the implications of each other's evidence. It's not just about A knowing what B knows, but also about A knowing what B thinks about what A knows, and vice versa. It ensures that both agents have taken all pieces of evidence and interpretations into account, and thus they can't "agree to disagree."
- An Illustrative Example: Imagine two people looking at a sealed box which might contain a diamond. Both agree beforehand on a 50% probability of the diamond being in the box. Now, A is given a reliable tip that there is no diamond. B, meanwhile, is given a hint there is a diamond. They inform each other of these tips. Just knowing the other's tip isn't enough. A needs to know how B updated his belief based on the hint and vice versa. If B still believes there's a high chance of a diamond despite A's reliable tip, maybe A starts to doubt the reliability of her tip (thinking, "Why does B still believe in the diamond despite my tip? Does he know something I don't?").
In essence, the depth of common knowledge ensures that both agents fully and recursively account for the beliefs, interpretations, and updates of the other. If they both do this rationally and still disagree, then they violate the theorem's conditions. In practice, humans often don't achieve this depth of mutual understanding, which is why disagreements persist.
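The diamond-box example can be made concrete with a toy calculation. The reliabilities below (0.9 for A's tip, 0.7 for B's hint) are made-up numbers, and the two signals are assumed to be conditionally independent given the box contents; this is a sketch, not a full model of the recursive common-knowledge exchange.

```python
# Toy version of the diamond-box example with assumed signal reliabilities.

PRIOR = 0.5  # common prior: P(diamond) = 0.5

# (P(signal | diamond), P(signal | no diamond)) for each private signal
A_TIP = (0.1, 0.9)   # A's reliable "no diamond" tip
B_HINT = (0.7, 0.3)  # B's weaker "diamond" hint

def posterior(prior, signals):
    """Condition the prior on a list of conditionally independent signals."""
    p_d, p_no = prior, 1 - prior
    for like_d, like_no in signals:
        p_d, p_no = p_d * like_d, p_no * like_no
    return p_d / (p_d + p_no)

# After only their own private signal, the agents disagree:
print(posterior(PRIOR, [A_TIP]))   # A believes ~0.10
print(posterior(PRIOR, [B_HINT]))  # B believes ~0.70

# Once both signals (and how each agent weighs them) are common
# knowledge, both condition on both signals and must land on the
# same posterior:
print(posterior(PRIOR, [A_TIP, B_HINT]))  # both believe ~0.21
```

Note that the final answer is not a compromise "somewhere in the middle" chosen by negotiation; it falls out of conditioning on the pooled evidence, and it sits closer to A's view because A's signal was assumed to be the more reliable one.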
Me: I think this mostly applies to the scenario of similar priors (in practice, people will never have exactly the same priors). That's why it is critical to see how people update given new evidence.