Take the sum of the numbers of the lowest black hat on everyone else’s head; call it q. If possible, pick the number x for your own head so that q + x = 2n (this is the most likely value of the sum, over all prisoners, of the number of each prisoner’s lowest black hat). I haven’t found a nice analytic form for the large-n behaviour, but it matches 1/(5 n^(1/2)) very closely when plotted in Mathematica.
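This is easy to probe by simulation. Here is a hedged Python sketch, under one natural reading of the rules (each prisoner has an infinite stack of hats, each independently black with probability 1/2, and must name the number of a black hat on his own head); the stacks are sampled lazily, since only the lowest black hat and one guessed position per head matter:

```python
import random

def simulate(n, trials=200_000, rng=random.Random(0)):
    """Monte-Carlo estimate of the 'q + x = 2n' strategy's success rate."""
    wins = 0
    for _ in range(trials):
        # A_i = number of the lowest black hat on head i (1-based),
        # geometric with parameter 1/2
        lowest = []
        for _ in range(n):
            a = 1
            while rng.random() < 0.5:
                a += 1
            lowest.append(a)
        total = sum(lowest)
        ok = True
        for a in lowest:
            q = total - a          # sum of the lowest black hats this prisoner sees
            x = 2 * n - q          # chosen so that q + x = 2n
            if x < 1:
                x = 1              # arbitrary fallback when no valid x exists
            if x < a:              # hats below the lowest black one are red
                ok = False
            elif x > a:            # hats above it are independent fair coins
                ok = rng.random() < 0.5
            if not ok:
                break
        wins += ok
    return wins / trials
```

For n = 2, a short case analysis under this reading gives exactly 5/16 = 0.3125 (the prisoners win for sure when A_1 + A_2 = 4, and with probability 1/4 when A_1 + A_2 <= 3), which the simulation reproduces.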

Idea 1: We lose no generality by assuming that the strategy is deterministic.

Idea 2: Instead of putting red/blue hats on their heads, we can have him put hats labelled 0,1,2,3 on their heads with equal probability, and each person has to guess 0, 3 or (either 1 or 2).

Idea 3: The set of labellings for which all the chumps guess correctly is a subset of [4]^n with the property that any two of its elements differing in exactly one location are labelled 1 and 2 in that location. Let us call such a labelling special.

Idea 4: Given a special labelling, the chumps can assume they are labelled specially, and they will all be correct whenever they are. Therefore the optimal probability is equal to (1/4)^n times the size of the largest special labelling. Let F(n) denote the size of the largest special labelling.

Idea 5: Given a special labelling X for n chumps, let X_0, X_1, X_2, X_3 denote the labellings possible for chumps 1, 2, …, n-1 given that chump n is labelled 0, 1, 2 or 3. Then X_0, X_1, X_2 and X_3 are special labellings for n-1 chumps, and they are all disjoint, except possibly X_1 and X_2.

Idea 6: If the larger of X_1 and X_2 has k elements, then X_0 and X_3 can have between them at most 4^(n-1)-k elements, so the size of X is at most (4^(n-1)-k)+k+k=4^(n-1)+k. Thus, since k <= F(n-1), the size of X is at most 4^(n-1)+F(n-1). Since this holds for all special labellings X for n chumps, F(n) <= 4^(n-1) + F(n-1).

Idea 7: Thus, since F(1) = 2, F(n) <= 4^(n-1) + … + 4 + 2 = (4^n + 2)/3. So the optimal probability is at most 1/3 + 2/(3 * 4^n).
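For small n the bound can be checked by brute force over all subsets of [4]^n; here is a sketch (feasible only up to n = 2, where there are 2^16 subsets to test):

```python
from itertools import combinations, product

def is_special(S):
    """A set S of elements of {0,1,2,3}^n is 'special' if any two of its
    elements differing in exactly one coordinate carry the labels 1 and 2
    at that coordinate."""
    for u, v in combinations(S, 2):
        diff = [j for j in range(len(u)) if u[j] != v[j]]
        if len(diff) == 1 and {u[diff[0]], v[diff[0]]} != {1, 2}:
            return False
    return True

def F(n):
    """Size of the largest special subset of {0,1,2,3}^n, by brute force."""
    elems = list(product(range(4), repeat=n))
    best = 0
    for mask in range(1 << len(elems)):
        S = [e for i, e in enumerate(elems) if mask >> i & 1]
        if len(S) > best and is_special(S):
            best = len(S)
    return best
```

This confirms F(1) = 2 and F(2) = 6 = (4^2 + 2)/3, so the bound of Idea 7 is tight for n = 2 as well.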

Finally, I show that the strategy of assuming the number of blue hats is congruent to the number of people (mod 3) achieves the upper bound from Idea 7.

Idea 8: Let P(x) = sum_{i=-n}^{i=n}P(n+i blue hats)x^i. Then by Binomial Thm, P(x)=(1+x)^{2n}/(4x)^n.

Idea 9: If v and w are the nontrivial cube roots of unity, then for all integers i, 1^i + v^i + w^i = 3 if i is divisible by 3, and 0 otherwise. Thus P(the number of blue hats is congruent to n mod 3) = (P(1) + P(v) + P(w)) / 3.

Idea 10: P(1) = 1, and for x = v, w we have 1 + x = -x^2, so (1+x)^(2n) = x^(4n) = x^n, and hence P(x) = 1/4^n. Thus the probability is equal to 1/3 + 2/(3 * 4^n), which by Idea 7 is optimal.
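Ideas 8–10 can be double-checked with exact arithmetic: the claim is that a Binomial(2n, 1/2) count of blue hats is congruent to n mod 3 with probability exactly 1/3 + 2/(3 * 4^n). A quick sketch:

```python
from fractions import Fraction
from math import comb

def success_probability(n):
    """Exact P[Binomial(2n, 1/2) is congruent to n (mod 3)]."""
    hits = sum(comb(2 * n, j) for j in range(2 * n + 1) if j % 3 == n % 3)
    return Fraction(hits, 4 ** n)
```

For n = 1 this gives 1/2, and the identity holds for every n I checked.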

1. For n = 2 or n = 3, no choice of c (the modulus) or target sum s (which is n-1 in L Levine’s post above) yields a result better than the basic 1/(n+1) strategy.

2. For n = 4: using c = 3 and s = 1 yields a prob. of 101/512, a little greater than 1/5.

And here’s a surprise: Suppose the n wise men agree on an integer i chosen from 0,1,…,n-1, and agree to choose the first index j so that, among the n-1 hats they see, there are exactly i BLACKs. So the basic strategy has i = n-1. It appears that this strategy succeeds with prob. 1/(n+1), the same as the all-black strategy. I learned this from simulations, and some small cases, but I do not have a proof. There must be a simple proof.
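Lacking a proof, the claim is at least easy to probe by simulation. A sketch under my reading of the rules (each man has an unbounded stack of hats, each independently black with probability 1/2; “the n-1 hats they see” are the other men’s hats at index j; every man must correctly name a black hat on his own head):

```python
import random

def estimate(n, i, trials=50_000, depth=40, rng=random.Random(1)):
    """Estimate the success probability of the 'exactly i blacks' strategy."""
    wins = 0
    for _ in range(trials):
        # hats[m][j] is True if hat j of man m is black
        hats = [[rng.random() < 0.5 for _ in range(depth)] for _ in range(n)]
        ok = True
        for m in range(n):
            # first index at which exactly i of the other n-1 hats are black
            j = next((j for j in range(depth)
                      if sum(hats[k][j] for k in range(n) if k != m) == i),
                     None)
            if j is None or not hats[m][j]:
                ok = False
                break
        wins += ok
    return wins / trials
```

For n = 2, both i = 0 and i = 1 come out near 1/3 = 1/(n+1), consistent with the claim.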

Finally, there is a simple recursion that allows one to compute F(n, c, s), the NUMBER of good hat assignments for which the best-known strategy succeeds, for any n, c, s (the probability is then an exact rational number). One just recurses on n. It is:

F[n, c, s] = Sum[ 2^(c-k) F[n-1, c, Mod[s-k,c]] , {k, 1, c}]

F[1, c, 0] = 1

F[1, c, s] = 2^(c-s) for 0 < s < c

The probability of course is just F / 2^(n c).
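The recursion is easy to port and sanity-check. Below is a Python sketch; I read the Mod[...] in the recursion as reduction mod c (which is what makes the base cases consistent), take A_j to be the position of man j's lowest blue hat, and cross-check the counts against direct enumeration of all 2^(n c) ways to assign the first c hats of each man:

```python
from functools import lru_cache
from itertools import product

@lru_cache(maxsize=None)
def F(n, c, s):
    """Count assignments of the first c hats per man with every
    lowest-blue position at most c and their sum ≡ s (mod c)."""
    if n == 1:
        return 1 if s == 0 else 2 ** (c - s)
    return sum(2 ** (c - k) * F(n - 1, c, (s - k) % c) for k in range(1, c + 1))

def brute(n, c, s):
    """Direct count over all 2^(n*c) truncated hat stacks."""
    good = 0
    for bits in product((0, 1), repeat=n * c):
        stacks = [bits[i * c:(i + 1) * c] for i in range(n)]
        if all(1 in st for st in stacks):          # a blue hat in the first c
            lowest = [st.index(1) + 1 for st in stacks]   # A_j, 1-based
            good += sum(lowest) % c == s
    return good
```

The two counts agree for every small (n, c, s) I tried, which supports reading the modulus in the recursion as c.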

AND: I used the recursion to find the best c and s for n up to 5000. The evidence does not support the view that a target sum s = n-1 is best. I see no pattern in the best value of s.

That way the probability is always > 1/3, and goes to 1/3 as n goes to infinity.

I am with James when there are only 2 wise men. I don’t think they can do any better than for each to guess the number of the other wise man’s lowest blue hat. But as for how to extend this strategy to multiple wise men, I have the following variation.

As before, let A_i be the number of the lowest hat on wise man i’s head which is blue. But rather than numbering the hats starting from 1, number the hats starting from 0. For example, say the 5th wise man has two red hats on the bottom, followed by a blue hat. Then A_5=2, not 3.

Now, rather than letting S_n be the sum of the A_i’s, let S_n be the NIM-SUM of the A_i’s. That is, write all the A_i’s in binary, and add without carrying. All the wise men make their guesses under the assumption that S_n=0.

This is identical to James’s strategy for the case of 2 wise men. But I’m not yet sure how it does for multiple wise men; summing the probabilities seems to be a tough task. I needed Mathematica to get a probability of 0.276205 for 3 wise men, and that’s only marginally better than Bill’s 1/(n+1) strategy. For all I know, my strategy might be worse than Bill’s in the long run.

I’ll keep thinking about this.

Now I’ve told you everything I know.

Namely, let A_i be the number of the LOWEST hat on the head of wise man i which is blue. Let S_n be A_1 + A_2 + … + A_n. Fix some constant c_n.

Each wise man will guess on the basis of the following:

(i) S_n is a multiple of c_n.

(ii) All the A_i are at most c_n.

For man i, who can see all the other A_j but not A_i itself, there is at most one value of A_i which is consistent with (i) and (ii). If there is one, then man i writes down this number. When (i) and (ii) both hold, each man has written down a number corresponding to a blue hat on his head.

The A_i are i.i.d. geometric with parameter 1/2. If c_n - log_2 n goes to infinity as n goes to infinity, but not too fast, then the probability of event (ii) goes to 1 as n goes to infinity, and the probability of event (i) is approximately 1/c_n, and the two are approximately independent. So they succeed with probability around 1/c_n. (All this can be made more precise.)
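The joint probability of (i) and (ii) can be computed exactly with a small dynamic program over the residue of the partial sum mod c_n (written c below); this is just a sketch of that check, not part of the argument:

```python
def joint_probability(n, c):
    """Exact P[(i) and (ii)]: all A_j <= c and their sum ≡ 0 (mod c),
    where the A_j are i.i.d. geometric on {1,2,...} with parameter 1/2."""
    # dist[r] = P(partial sum ≡ r mod c, and every A_j so far is <= c)
    dist = [0.0] * c
    dist[0] = 1.0
    for _ in range(n):
        new = [0.0] * c
        for r in range(c):
            for k in range(1, c + 1):      # P(A = k) = 2^-k, mass A > c dropped
                new[(r + k) % c] += dist[r] * 0.5 ** k
        dist = new
    return dist[0]
```

For c = 12 and n = 50, the result is very close to P(ii) × 1/c = (1 - 2^-12)^50 / 12, illustrating the approximate independence claimed above.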

So, this gives a success probability on the order of 1/(log_2 n). However, it’s actually succeeding at a tougher task (name the LOWEST blue hat on your head, rather than naming ANY blue hat on your head). So far, I haven’t seen how to take advantage of the easier task….

For n=2, the best I can see is that man 1 guesses A_2 and man 2 guesses A_1. Then they succeed whenever A_1=A_2, which has probability (1/2)^2 + (1/4)^2 + (1/8)^2 + … = 1/3. Again, it actually succeeds in the tougher task of finding the lowest blue hat.

Consider the following variation of Tanya’s game. We have n wizards, and each gets one of the numbers {0, 1, 2} pasted on his or her forehead, with equal probability. The goal again is for every wizard to state the number pasted on his or her own head. In this case, it’s easy to see that 1/3 is an upper bound on the probability of success: Wizard 1 can only be right 1/3 of the time, so all wizards can only be right 1/3 of the time at most. And the strategy “guess that the total sum is divisible by 3” achieves this bound exactly. So, in the problem we’re actually given, any ability to beat 1/3 comes entirely from the lack of uniform distribution. (And this makes some intuitive sense: anything that causes some concentration helps our poor wizards.) Similarly, this gives us a hard upper bound of 1/2 for the wizards’ probability of winning under any strategy.
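The uniform variant can be checked exhaustively for small n; a quick sketch:

```python
from itertools import product

def uniform_game(n):
    """Success probability of the 'assume the total sum is divisible by 3'
    strategy when each wizard wears a uniform label from {0, 1, 2}."""
    wins = 0
    for labels in product(range(3), repeat=n):
        total = sum(labels)
        # wizard i sees total - labels[i] and guesses the residue that
        # would make the full sum divisible by 3
        guesses = [(-(total - x)) % 3 for x in labels]
        wins += guesses == list(labels)
    return wins / 3 ** n
```

Either everyone is right (when the total is divisible by 3) or everyone is wrong, so exactly 3^(n-1) of the 3^n assignments win, giving probability 1/3.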

On the other hand, it’s not clear to me whether the fact that we have the hats on each wizard’s head in some particular order matters; I conjecture that the optimal solution in this puzzle is the same as if each wizard has one of the numbers {0, 1, 2} pasted on his or her forehead, where 0 and 2 appear with probability 1/4 and 1 appears with probability 1/2. Is there an easy proof of this?
