Today I understood the wise men puzzle at a conceptual level, well enough that I could explain it and possibly generalize to similar domains. This post is my attempt at explaining it.

The puzzle is described in (Huth & Ryan, 2000) as follows:

There are three wise men. It’s common knowledge—known by everyone and known to be known by everyone, etc.—that there are three red hats and two white hats. The king puts a hat on each of the wise men in such a way that they are not able to see their own hat, and asks each one in turn whether they know the colour of the hat on their head. Suppose the first man says he does not know; then the second says he does not know either. It follows that the third man must be able to say that he knows the colour of his hat. Why is this? What colour has the third man’s hat?

Let’s call the people Alpha, Beta, Gamma, in the order they speak.

One solution is to think about the puzzle in terms of possible worlds. A world in this problem is described by an assignment of hat colours to people, which is equally an ordered triple of colours $(c_A, c_B, c_C)$, with $c_i \in \{R, W\}$. There are only 2 white hats, so in the beginning, the seven possible worlds are

$$RRR, \quad RRW, \quad RWR, \quad RWW, \quad WRR, \quad WRW, \quad WWR.$$
If Beta and Gamma were both wearing white hats, then Alpha would know that his hat is red. Therefore, when Alpha says “no”, Beta and Gamma both learn that they cannot both be white, i.e. at least one of them is red. The remaining possible worlds are

$$RRR, \quad RRW, \quad RWR, \quad WRR, \quad WRW, \quad WWR.$$
Now, *we* know that the world is one of the 6 worlds above, but Beta knows more: he also sees the hats of Alpha and Gamma. What we think as outsiders only matters for whether *we* can tell who’s wearing what.
But back to the observations of Alpha, Beta, and Gamma. When Beta says “no”, that rules out the worlds where Gamma is white (because if Gamma were white, Beta would know from Alpha’s answer that his own hat must be red).

This means that Gamma is red in every remaining world ($RRR$, $RWR$, $WRR$, $WWR$), and since Gamma knows everything we outsiders know, he knows this too.
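This whole elimination can be replayed in a few lines of Python; a sketch of my own (not from the book), with worlds as $R/W$ triples as above:

```python
from itertools import product

# A world is a triple of hat colours for (Alpha, Beta, Gamma).
# Only two white hats exist, so the all-white world is impossible.
worlds = [w for w in product("RW", repeat=3) if w.count("W") <= 2]

def knows_own_colour(world, i, possible_worlds):
    """Agent i knows their colour iff every possible world that matches
    what they see (the other two hats) agrees on hat i."""
    def others(w):
        return w[:i] + w[i + 1:]
    candidates = {w[i] for w in possible_worlds if others(w) == others(world)}
    return len(candidates) == 1

# Alpha, then Beta, says "no": keep only the worlds in which
# the speaker could not have deduced their own colour.
for i in (0, 1):
    worlds = [w for w in worlds if not knows_own_colour(w, i, worlds)]

# In every remaining world, Gamma's hat is red, and Gamma can tell.
assert all(w[2] == "R" for w in worlds)
assert all(knows_own_colour(w, 2, worlds) for w in worlds)
print(sorted("".join(w) for w in worlds))  # ['RRR', 'RWR', 'WRR', 'WWR']
```

The filter makes the informal argument precise: an utterance of “no” deletes exactly those worlds in which the speaker would have known.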

# Another way

Our solution is more procedural than is necessary, and it does not show the essence of omniscient agents interacting with one another. As this problem is small enough, we could list, for every world, every statement any agent could make, which is simply their knowledge base of true statements (i.e. whatever they can deduce from their view and from the common knowledge, CK). (Say, with atoms $P_X$, meaning “person $P$ has colour $X$”, with e.g. $A_R$ abbreviating “Alpha’s hat is red”.) We can only do this because we are not interested in making statements like “X knows that Y knows that Z knows that φ”. Besides, in every world, we implicitly include what is common knowledge, and what any agent can see, i.e. the whole problem statement in the opening paragraph. The common knowledge at the beginning in any of these worlds is $\neg(A_W \wedge B_W \wedge C_W)$: the three men cannot all be white. That’s not very much, but at least it is symmetric, which allows us to write down only three worlds, one for each number of white hats.
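These knowledge bases can be computed mechanically. Here is a small Python sketch of the idea (my own helper, not from the book): an agent’s KB holds the two hats they see, plus their own colour whenever only one choice for it is consistent with their view and the CK clauses.

```python
PEOPLE = "ABC"  # Alpha, Beta, Gamma

def kb(world, i, ck_clauses):
    """Atoms agent i can deduce in `world` (a string like "RWW"):
    the hats of the other two agents (by sight), plus their own colour
    if only one choice for it satisfies every common-knowledge clause."""
    atoms = {PEOPLE[j] + "_" + world[j] for j in range(3) if j != i}
    consistent = [c for c in "RW"
                  if all(clause(world[:i] + c + world[i + 1:])
                         for clause in ck_clauses)]
    if len(consistent) == 1:
        atoms.add(PEOPLE[i] + "_" + consistent[0])
    return atoms

# Initial CK: the three men cannot all be white.
ck = [lambda w: w.count("W") <= 2]

# In world RWW, Alpha sees two white hats and deduces A_R:
print(kb("RWW", 0, ck))  # {'B_W', 'C_W', 'A_R'} (in some order)
```

The same helper reproduces the later stages by appending the clauses that each “no” adds to the CK.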

**World** $RRR$, CK: $\neg(A_W \wedge B_W \wedge C_W)$:

- Alpha: $B_R$, $C_R$,
- Beta: $A_R$, $C_R$,
- Gamma: $A_R$, $B_R$.

**World** $RRW$, CK: $\neg(A_W \wedge B_W \wedge C_W)$:

- Alpha: $B_R$, $C_W$,
- Beta: $A_R$, $C_W$,
- Gamma: $A_R$, $B_R$.

**World** $RWW$, CK: $\neg(A_W \wedge B_W \wedge C_W)$:

- Alpha: $B_W$, $C_W$, $A_R$ (he sees two white hats, so by CK his own must be red),
- Beta: $A_R$, $C_W$,
- Gamma: $A_R$, $B_W$.

When Alpha says “no” in the beginning, that means he is not in a world where from his knowledge base he can conclude his own colour. His statement becomes common knowledge (CK), i.e. CK is extended with $\neg(B_W \wedge C_W)$: had Beta and Gamma both been white, Alpha would have known he is red.

**World** $RRR$, CK: $\neg(A_W \wedge B_W \wedge C_W)$, $\neg(B_W \wedge C_W)$:

- Alpha: $B_R$, $C_R$,
- Beta: $A_R$, $C_R$,
- Gamma: $A_R$, $B_R$.

**World** $RRW$, CK: $\neg(A_W \wedge B_W \wedge C_W)$, $\neg(B_W \wedge C_W)$:

- Alpha: $B_R$, $C_W$,
- Beta: $A_R$, $C_W$, $B_R$,
- Gamma: $A_R$, $B_R$.

**World** $RWR$, CK: $\neg(A_W \wedge B_W \wedge C_W)$, $\neg(B_W \wedge C_W)$:

- Alpha: $B_W$, $C_R$,
- Beta: $A_R$, $C_R$,
- Gamma: $A_R$, $B_W$, $C_R$.

**World** $RWW$, CK: $\neg(A_W \wedge B_W \wedge C_W)$, $\neg(B_W \wedge C_W)$:

- Alpha: $B_W$, $C_W$, $\bot$ (his view contradicts the new CK clause),
- Beta: $A_R$, $C_W$, $B_R$,
- Gamma: $A_R$, $B_W$, $C_R$.

**World** $WRR$, CK: $\neg(A_W \wedge B_W \wedge C_W)$, $\neg(B_W \wedge C_W)$:

- Alpha: $B_R$, $C_R$,
- Beta: $A_W$, $C_R$,
- Gamma: $A_W$, $B_R$.

**World** $WRW$, CK: $\neg(A_W \wedge B_W \wedge C_W)$, $\neg(B_W \wedge C_W)$:

- Alpha: $B_R$, $C_W$,
- Beta: $A_W$, $C_W$, $B_R$,
- Gamma: $A_W$, $B_R$.

**World** $WWR$, CK: $\neg(A_W \wedge B_W \wedge C_W)$, $\neg(B_W \wedge C_W)$:

- Alpha: $B_W$, $C_R$,
- Beta: $A_W$, $C_R$,
- Gamma: $A_W$, $B_W$, $C_R$.

We were able to cross out some worlds! And in the world $RWW$ we were left with zero possible worlds for Alpha, i.e. Alpha’s statement would lead to a contradiction: he would have answered “yes”. In fact, this was how we eliminated possible worlds in the previous solution. Next turn: the king asks Beta, who says “no”. The common knowledge is extended with $\neg(A_W \wedge C_W)$. (Right? At this point I can imagine myself making an incorrect deduction.)

**World** $RRR$, CK: $\neg(A_W \wedge B_W \wedge C_W)$, $\neg(B_W \wedge C_W)$, $\neg(A_W \wedge C_W)$:

- Alpha: $B_R$, $C_R$,
- Beta: $A_R$, $C_R$,
- Gamma: $A_R$, $B_R$.

**World** $RRW$, CK: $\neg(A_W \wedge B_W \wedge C_W)$, $\neg(B_W \wedge C_W)$, $\neg(A_W \wedge C_W)$:

- Alpha: $B_R$, $C_W$, $A_R$,
- Beta: $A_R$, $C_W$, $B_R$,
- Gamma: $A_R$, $B_R$.

**World** $RWR$, CK: $\neg(A_W \wedge B_W \wedge C_W)$, $\neg(B_W \wedge C_W)$, $\neg(A_W \wedge C_W)$:

- Alpha: $B_W$, $C_R$,
- Beta: $A_R$, $C_R$,
- Gamma: $A_R$, $B_W$, $C_R$.

**World** $WRR$, CK: $\neg(A_W \wedge B_W \wedge C_W)$, $\neg(B_W \wedge C_W)$, $\neg(A_W \wedge C_W)$:

- Alpha: $B_R$, $C_R$,
- Beta: $A_W$, $C_R$,
- Gamma: $A_W$, $B_R$, $C_R$.

**World** $WRW$, CK: $\neg(A_W \wedge B_W \wedge C_W)$, $\neg(B_W \wedge C_W)$, $\neg(A_W \wedge C_W)$:

- Alpha: $B_R$, $C_W$, $A_R$,
- Beta: $A_W$, $C_W$, $\bot$ (his view contradicts the new CK clause),
- Gamma: $A_W$, $B_R$.

**World** $WWR$, CK: $\neg(A_W \wedge B_W \wedge C_W)$, $\neg(B_W \wedge C_W)$, $\neg(A_W \wedge C_W)$:

- Alpha: $B_W$, $C_R$,
- Beta: $A_W$, $C_R$,
- Gamma: $A_W$, $B_W$, $C_R$.

Another world disappeared. But what about $RRW$, why is it still there, when last time we argued that it’s not possible for Gamma to be white? In fact, it is not: in that world Beta would have said yes, as he knew what colour he had. Although never explicitly stated, we assumed that if someone’s not red then he’s white, and vice versa. Use $\chi$ to denote this fact:

$$\chi \;=\; (A_W \leftrightarrow \neg A_R) \wedge (B_W \leftrightarrow \neg B_R) \wedge (C_W \leftrightarrow \neg C_R).$$
We also know that common knowledge is true: for every formula $\varphi$, it’s an axiom that $CK\,\varphi \to \varphi$. Then it’s simple to show that in world $RRW$, where Alpha is red and Gamma is white, Beta can deduce $B_R$: from the CK fact $\neg(B_W \wedge C_W)$, his view $C_W$, and $\chi$. So Beta would have answered “yes”, and $RRW$ can be crossed out after all.
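This deduction is small enough to brute-force over all eight colour assignments; a quick Python sketch ($\chi$ is built in, since each hat is exactly one of $R$ or $W$):

```python
from itertools import product

def entails(premises, conclusion):
    """True iff every R/W assignment (Alpha, Beta, Gamma) satisfying
    all premises also satisfies the conclusion."""
    return all(conclusion(w)
               for w in product("RW", repeat=3)
               if all(p(w) for p in premises))

ck_after_alpha = lambda w: not (w[1] == "W" and w[2] == "W")  # ¬(B_W ∧ C_W)
betas_view = lambda w: w[2] == "W"                            # Beta sees C_W

# With ¬(B_W ∧ C_W) common knowledge and C_W in view, Beta must be red:
assert entails([ck_after_alpha, betas_view], lambda w: w[1] == "R")
```

Without Beta’s view of $C_W$ the entailment fails, which is exactly why the mechanical crossing-out above could not see it.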


Now we are left with the following worlds:

**World** $RRR$, CK: $\neg(A_W \wedge B_W \wedge C_W)$, $\neg(B_W \wedge C_W)$, $\neg(A_W \wedge C_W)$, $\chi$:

- Alpha: $B_R$, $C_R$,
- Beta: $A_R$, $C_R$,
- Gamma: $A_R$, $B_R$.

**World** $RWR$, CK: $\neg(A_W \wedge B_W \wedge C_W)$, $\neg(B_W \wedge C_W)$, $\neg(A_W \wedge C_W)$, $\chi$:

- Alpha: $B_W$, $C_R$,
- Beta: $A_R$, $C_R$,
- Gamma: $A_R$, $B_W$, $C_R$.

**World** $WRR$, CK: $\neg(A_W \wedge B_W \wedge C_W)$, $\neg(B_W \wedge C_W)$, $\neg(A_W \wedge C_W)$, $\chi$:

- Alpha: $B_R$, $C_R$,
- Beta: $A_W$, $C_R$,
- Gamma: $A_W$, $B_R$, $C_R$.

**World** $WWR$, CK: $\neg(A_W \wedge B_W \wedge C_W)$, $\neg(B_W \wedge C_W)$, $\neg(A_W \wedge C_W)$, $\chi$:

- Alpha: $B_W$, $C_R$,
- Beta: $A_W$, $C_R$,
- Gamma: $A_W$, $B_W$, $C_R$.

At first sight, Gamma’s knowledge base in one world ($RRR$) is still consistent with $C_W$: seeing $A_R$ and $B_R$, none of the CK clauses force Gamma’s colour. But all four of the worlds above have $C_R$, meaning $C_R$ is deducible from Beta’s “no”, $\chi$, and the CK, making $C_W$ impossible even in world $RRR$. This means $C_R$ is CK. Yay!

Note: there might be some other true statements that could be deduced, so maybe Alpha knows his colour too in some worlds—I haven’t solved the problem in full. For example, when Gamma answers “yes” in the end, it doesn’t tell us anything we didn’t already know, and nothing that Alpha and Beta didn’t know already, as his answer can be deduced from the common knowledge. Maybe someone else knows their colour too?

# Another problem

A slight modification is to map each natural number $n$ to the set of worlds in which some agent $X$ is able to decide their colour after $n$ utterances, if it wasn’t $X$ who spoke last.
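Under the assumptions of this puzzle (three men, speakers cycling Alpha, Beta, Gamma), that map can be computed by simulating the dialogue in every candidate world; a sketch:

```python
from itertools import product

def first_yes(true_world, n_people=3, n_white=2):
    """Simulate the dialogue in `true_world`: each speaker in turn says
    whether they know their colour; a "no" removes every world in which
    that speaker would have known. Returns (utterance number, speaker index)
    of the first "yes" (None if nobody ever knows within 100 turns)."""
    worlds = [w for w in product("RW", repeat=n_people)
              if w.count("W") <= n_white]

    def knows(world, i, possible):
        def others(w):
            return w[:i] + w[i + 1:]
        return len({w[i] for w in possible
                    if others(w) == others(world)}) == 1

    for turn in range(1, 100):
        speaker = (turn - 1) % n_people
        if knows(true_world, speaker, worlds):
            return turn, speaker
        worlds = [w for w in worlds if not knows(w, speaker, worlds)]

print(first_yes(("R", "W", "W")))  # (1, 0): Alpha sees two whites, says yes at once
print(first_yes(("R", "R", "R")))  # (3, 2): as in the puzzle, Gamma answers third
```

In this instance every world resolves by the third utterance; the interesting question above is whether some variant keeps the dialogue going longer.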

Related: it feels like there should be a situation with more people, where two agents can keep on discarding possible worlds just by speaking in turns. If you know of such a problem, please let me know.

# Notes

I hope I didn’t make a mistake in the calculations; I admit I enumerated the possible worlds by hand instead of with Prolog.

# Conclusion

Listen to people when they say “no”.

# References

- Huth, M., & Ryan, M. D. (2000). *Logic in Computer Science: Modelling and Reasoning about Systems*. Cambridge University Press.