TARK 2013 Conference Paper
- Eric Pacuit
- Arthur Paul Pedersen
- Jan-Willem Romeijn
In this extended abstract, we carefully examine a purported counterexample to a postulate of iterated belief revision. We suggest that the example is better seen as a failure to apply the theory of belief revision in sufficient detail. The main contribution is conceptual, aimed at the literature on the philosophical foundations of the AGM theory of belief revision [1]. Our discussion is centered around the observation that it is often unclear whether a specific example is a “genuine” counterexample to an abstract theory or a misapplication of that theory to a concrete case.

1.

The theory of iterated belief revision concerns how a rational agent should modify her beliefs in response to a sequence of input beliefs [8, 9, 5, 15, 16, 18, 21, 6]. Two postulates which have been extensively discussed in the literature are the following constraints:

I1 If ψ ∈ Cn({ϕ}) then (K ∗ ψ) ∗ ϕ = K ∗ ϕ
I2 If ¬ψ ∈ Cn({ϕ}) then (K ∗ ϕ) ∗ ψ = K ∗ ψ

Each of these postulates has some intuitive appeal. Postulate I1 demands that if ϕ → ψ is a theorem (with respect to the background theory), then first learning ψ followed by the more specific information ϕ is equivalent to directly learning the more specific information ϕ. Postulate I2 demands that first learning ϕ followed by learning a piece of information ψ incompatible with ϕ is the same as simply learning ψ outright. So, for example, first learning ϕ and then ¬ϕ should result in the same belief state as directly learning ¬ϕ.

2.

Many recent developments in this area have been offered on the basis of analyses of concrete examples. These range from toy examples—such as the infamous muddy children puzzle, the Monty Hall problem, and the Judy Benjamin problem—to everyday examples of social interaction. Different frameworks are then judged, in part, on how well they conform to the analyst’s intuitions about the perceived relevant set of examples. This raises an important issue: Implicit assumptions about what the agents know and believe about the situation being modeled often guide the analyst’s intuitions.
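As a concrete sanity check (not part of the paper’s formal apparatus), postulates I1 and I2 can be tested in a toy possible-worlds model: a belief state is a plausibility ranking over worlds, the belief set is the set of most plausible worlds, and ∗ is implemented as lexicographic revision—one well-known operator, chosen here purely for illustration. All names in the sketch are ours, and the four uniformly ranked worlds anticipate the two-box coin scenario discussed below.

```python
# Toy model: worlds are strings, a belief state is a plausibility ranking
# (lower rank = more plausible), and propositions are sets of worlds.
# "*" is implemented as lexicographic revision; names are illustrative.

def lex_revise(ranks, prop):
    """Lexicographic revision: every prop-world becomes strictly more
    plausible than every non-prop-world; relative order is preserved."""
    def relevel(worlds, offset):
        new, level, prev = {}, offset, None
        for w in sorted(worlds, key=ranks.get):
            if prev is not None and ranks[w] > prev:
                level += 1
            new[w], prev = level, ranks[w]
        return new
    inside = relevel([w for w in ranks if w in prop], 0)
    top = max(inside.values(), default=-1) + 1
    outside = relevel([w for w in ranks if w not in prop], top)
    return {**inside, **outside}

def beliefs(ranks):
    """The belief set, identified with the set of most plausible worlds."""
    best = min(ranks.values())
    return frozenset(w for w, r in ranks.items() if r == best)

# Four possibilities (coin 1, coin 2), all equally plausible.
K = {"HH": 0, "HT": 0, "TH": 0, "TT": 0}
heads1 = {"HH", "HT"}   # phi: coin 1 lies heads up
tails1 = {"TH", "TT"}   # psi: coin 1 lies tails up (incompatible with phi)

# I2: learning phi, then the incompatible psi, matches learning psi outright.
assert beliefs(lex_revise(lex_revise(K, heads1), tails1)) == \
       beliefs(lex_revise(K, tails1))

# I1: {"HH"} entails heads1, so revising by heads1 and then by {"HH"}
# matches revising by the more specific {"HH"} directly.
assert beliefs(lex_revise(lex_revise(K, heads1), {"HH"})) == \
       beliefs(lex_revise(K, {"HH"}))
```

Both assertions compare belief sets, not whole plausibility orderings; the purported counterexamples in the literature turn precisely on what such an identity is taken to range over.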
In many cases, it is crucial to make these underlying assumptions explicit. The following simple example illustrates the type of implicit assumption that we have in mind. There are two opaque boxes, labeled 1 and 2, each containing a coin. The believer is interested in the status of the coins in each box. Suppose that Ann is an expert on the status (heads up or tails up) of the coin in box 1 and that Bob is an expert on the status (heads up or tails up) of the coin in box 2. Currently the believer under consideration does not have an opinion about whether the coins are lying heads up or tails up in the boxes; more specifically, the believer thinks that all four possibilities are equally plausible. Suppose that both Ann and Bob report that their respective coins are lying tails