The question is about a modification of the following riddle (you can think about it before reading the answer if you like riddles, but that's not the point of my question):

The Riddle: We assume there is an infinite sequence of boxes, numbered $0,1,2,\dots$. Each box contains a real number. No hypothesis is made on how the real numbers are chosen. You are a team of 100 mathematicians, and the challenge is the following: each mathematician can open as many boxes as he wants, even infinitely many, but then he has to guess the content of a box he has not opened. Then all boxes are closed, and the next mathematician can play. There is no communication between mathematicians after the game has started, but they can agree on a strategy beforehand.

You have to devise a strategy such that at most one mathematician fails. The axiom of choice is allowed.

The Answer: If $\vec u=(u_n)_{n\in\mathbb N}$ and $\vec v=(v_n)_{n\in\mathbb N}$ are sequences of real numbers, we say that $\vec u\approx \vec v$ if there is $M$ such that for all $n\geq M$, $u_n=v_n$. Then $\approx$ is an equivalence relation, and we can use the axiom of choice to choose one representative per equivalence class. The strategy is the following: mathematicians are numbered from $0$ to $99$, and the sequence of boxes $(u_n)_{n\in\mathbb N}$ is split into $100$ subsequences of the form $\vec u_i=(u_{100n+i})_{n\in\mathbb N}$ with $0\leq i\leq 99$. Mathematician number $i$ looks at all subsequences $\vec u_j$ with $j\neq i$, and for each of them computes the index $M_j$ from which that subsequence matches the representative of its $\approx$-class. He then takes $M$ to be the maximum of the $M_j+1$ and looks at the subsequence $\vec u_i$ starting at this $M$. He can deduce the $\approx$-class of $\vec u_i$, and guesses that $(\vec u_i)_{M-1}$ matches the corresponding term of its representative. At most one mathematician will be wrong: the one whose index $i$ has $M_i$ maximal.
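To spell out why at most one can fail: mathematician $i$ guesses wrongly only if $(\vec u_i)_{M-1}$ differs from the representative of its class, which requires
$$M_i\ \ge\ M\ =\ \max_{j\neq i}M_j+1,\qquad\text{i.e.}\qquad M_i\ >\ \max_{j\neq i}M_j,$$
and at most one index $i$ can have $M_i$ strictly larger than all the others.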

The Modification: I would find the riddle even more puzzling if instead of 100 mathematicians there were just one, who has to open the boxes he wants and then guess the content of a closed box. He can choose a number $i$ uniformly at random between $0$ and $99$, and play the role of mathematician number $i$. In fact, he can first choose any bound $N$ instead of $100$, and then play the game, with probability only $1/N$ of being wrong. In this context, does it make sense to say "guess the content of a box with arbitrarily high probability"? I think it is ok, because the only probability measure we need is the uniform probability on $\{0,1,\dots,N-1\}$, but other people argue it's not ok, because we would need to define a measure on sequences, and moreover the axiom of choice messes everything up.
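To be precise, the claim is only the following: for each fixed sequence $\vec u$, since at most one of the $N$ roles fails on $\vec u$, drawing $i$ uniformly from $\{0,\dots,N-1\}$ gives
$$P_i\big(\text{role $i$ guesses wrong on }\vec u\,\big)\ \le\ \frac 1N.$$
The dispute is over whether this, by itself, licenses saying that the single mathematician "fails with probability at most $1/N$".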

It is because each subsequence $\vec u_i$ matches its representative $\vec v_i$ starting at $M_i$. Therefore, if a mathematician starts looking at $\vec u_i$ from a position $M>M_i$, he will be able to guess $(\vec u_i)_{M-1}$, since it is equal to $(\vec v_i)_{M-1}$, and since he knows $\vec v_i$. – Denis Dec 9 '13 at 16:35
I really like this version of the riddle! It is already surprising when using only 2, rather than 100. – Joel David Hamkins Dec 9 '13 at 16:36
Yes, I find it's one of the most surprising uses of the axiom of choice :) – Denis Dec 9 '13 at 16:39
    
Wow, this is indeed amazing! Cannot the same be achieved (with a different strategy) for infinitely many mathematicians? – domotorp Dec 9 '13 at 16:48
This is beyond mind-boggling! What is the source of this riddle? Did you come up with it? I'll probably want to write about it at some point. The version with infinitely many people (where all but finitely many guess correctly) is described on Michael O'Connor's blog, xorshammer.com/2008/08/23/set-theory-and-weather-prediction and in C.S. Hardin and A.D. Taylor, "A Peculiar Connection Between the Axiom of Choice and Predicting the Future", Am. Math. Monthly 115, (February 2008), 91-96, where it is attributed to Yuval Gabay and Michael O'Connor. – Johan Wästlund Dec 17 '13 at 12:47

The probabilistic reasoning depends on a conglomerability assumption, namely: if, for each fixed sequence $\vec u$, the probability of guessing correctly is $(n-1)/n$, then for a randomly selected sequence the probability of guessing correctly is also $(n-1)/n$. But we have no reason to think the event of guessing correctly is measurable with respect to the probability measure induced by the random choice of sequence and index $i$, and we have no reason to think that the conglomerability assumption is appropriate.
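Spelled out, the assumption being made is the inference
$$P\big(\text{correct}\mid \vec u=\vec s\,\big)=\tfrac{n-1}{n}\ \text{ for every fixed }\vec s\quad\Longrightarrow\quad P(\text{correct})=\tfrac{n-1}{n},$$
which is harmless for a countable partition (it is just the law of total probability) but need not hold for the uncountable partition into individual sequences, and which in any case presupposes that the event of guessing correctly has a probability at all.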

A quick way to see that the conglomerability assumption is going to be dubious is to consider the analogy of the Brown-Freiling argument against the Continuum Hypothesis (see here for a discussion). Assume CH. Let $\prec$ be a well-order of $[0,1]$. Suppose $X$ and $Y$ are i.i.d. uniformly distributed on $[0,1]$. Consider the question of which variable is bigger with respect to the well-order. Fix a value $y\in [0,1]$ for $Y$. Then $P(X\preceq y)=0$, since there are only countably many points $\prec$-prior to $y$. By a conglomerability assumption, we could then conclude that $P(X\preceq Y)=0$, which would be absurd, as the same reasoning would also show that $P(Y\preceq X)=0$. The argument fallaciously assumes conglomerability. We are neither justified in concluding that $P(X\preceq Y)=0$, nor that $\{ X \preceq Y \}$ is measurable (though for each fixed $y$, $\{ X \preceq y \}$ is measurable). And indeed it's not measurable: for were it measurable, we could use Fubini to conclude that it has null probability. Note that one can repeat the argument without CH by instead using an extension of Lebesgue measure that assigns null probability to every subset of cardinality $<\mathfrak c$, so clearly there is no refutation of CH here.
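To see the non-measurability concretely: if $\{X\preceq Y\}$ were product-measurable, Fubini's theorem would let us compute its probability by iterated integration in either order, but
$$\int_0^1\!\left(\int_0^1 \mathbf 1_{\{x\preceq y\}}\,dx\right)dy=\int_0^1 0\,dy=0, \qquad \int_0^1\!\left(\int_0^1 \mathbf 1_{\{x\preceq y\}}\,dy\right)dx=\int_0^1 1\,dx=1,$$
since for fixed $y$ the set $\{x:x\preceq y\}$ is countable, while under CH for fixed $x$ the set $\{y:x\preceq y\}$ is co-countable. The two iterated integrals disagree, so the set cannot be measurable.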

Here's another analogy: By the Hausdorff Paradox, decompose $S^2-D$ for a countable $D$ into disjoint $A_1,A_2,A_3$, where $A_1,A_2,A_3,A_1\cup A_2$ are all isometrically equivalent. Suppose a point $X$ is uniformly chosen in $S^2$. (I will ignore $D$ from now on, since it has zero measure. If you like, you can assume that the uniform choice is done so $X$ cannot lie in $D$.) Let $i$ be a random index in $\{1,2,3\}$, independent of $X$. How likely is it that $X \in A_i$? It's very tempting to say that it's got to be $1/3$. Any fixed value of $X$ in $S^2$ is in exactly one of the three subsets, after all, and so if we choose a random index $i$, surely we have probability $1/3$ that it'll be in $A_i$. Of course this easily leads to paradox. But we are not in fact entitled to assume that $J = \{ \omega : X(\omega) \in A_{i(\omega)} \}$ is measurable. (In fact, it's easy to see that it's not measurable.)
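One way to spell out the paradox: if the $A_j$ all had well-defined probabilities under the uniform (rotation-invariant) distribution, the isometries in the Hausdorff decomposition would force
$$P(X\in A_1)=P(X\in A_2)=P(X\in A_3)=P(X\in A_1\cup A_2)=P(X\in A_1)+P(X\in A_2),$$
so each $P(X\in A_j)$ would be $0$, while the three of them together must carry probability $1$. So the $A_j$ are not measurable, and "probability $1/3$" is not available.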

Let's go back to the riddle. Suppose $\vec u$ is chosen randomly. The most natural option is that it is a nontrivial i.i.d. sequence $(u_k)$, independent of the random index $i$ which is uniformly distributed over $[100]=\{0,...,99\}$. In general, $M_j$ will be nonmeasurable (one can prove this in at least some cases). We likewise have no reason to think that $M$ is measurable. But without measurability, we can't make sense of talk of the probability that the guess will be correct.

Here's an amusing thing that may help see how measurability enters into these things. Consider a single sequence of infinitely many independent fair coin flips. Our state space is $\Omega=\{0,1\}^{\mathbb N}$, corresponding to an infinite sequence $(X_i)_{i=0}^\infty$ of i.i.d. random variables with $P(X_i=1)=P(X_i=0)=1/2$. Start with $P$ being the completion of the natural product measure on $\Omega$.

Can you guess the first coin flip on the basis of all the others? You might think: "Of course not! No matter what function from the values of flips $X_1,X_2,...$ to $\{0,1\}$ is chosen, the probability that the value of the function equals $X_0$ is going to be $1/2$."

That's a fine argument assuming the function is measurable. But what if it's not? Here is a strategy: Check if $X_1,X_2,...$ fit with the relevant representative. If so, then guess according to the representative. If not, then guess $\pi$. (Yes, I realize that $\pi\notin\{0,1\}$.) Intuitively this seems a really dumb strategy. After all, we're surely unlikely to luck out and get $X_1,X_2,...$ to fit with the representative, and even if they do, the chance that $X_0$ will match it, given the rest of the sequence, seems to be only $1/2$.

But if you choose shift-invariant representatives (i.e., $r([\tau \vec u])=\tau\, r([\vec u])$, where $\tau$ is the left shift $(\tau\vec u)_n = u_{n+1}$; by Zorn's lemma such a choice is possible), then the outer $P$-measure of the set of representatives is equal to $1$. So there is an extension $P'$ of $P$ such that $P'$-almost surely the dumb strategy works. Just let $P'$ be an extension on which the set of representatives has measure $1$ and note that the dumb strategy works on the set of representatives.

ok this helps, but when you say "suppose $\vec u$ is chosen randomly", this is where the problem arises. Isn't there a way to ask the riddle so that the notion of probability of victory makes sense, without encountering this problem (for instance against an opponent who chooses the sequence)? – Denis Dec 12 '13 at 13:56
    
But isn't this going to depend on how the opponent chooses the sequence? As Joel Hamkins notes in another comment, if the opponent always chooses the same sequence, then there is a strategy that gets the right answer each time. :-) – Alexander Pruss Dec 12 '13 at 14:33
    
yes, but the point is that we can win against any strategy of the opponent, even if he chooses the sequence after we choose our (probabilistic) strategy. This way we avoid talking about probabilities on sequences. – Denis Dec 12 '13 at 15:09
    
In the probabilistic variant, I don't see that you can win against any strategy of the opponent. If we are making no probabilistic assumptions whatsoever, then in particular we are not assuming that our choice of index $i$ is independent of the opponent's choice of numbers. – Alexander Pruss Dec 17 '13 at 14:47
    
Our choice of index $i$ is made randomly, but for this we only need the uniform distribution on $\{0,\dots,N-1\}$. It is made independently of the opponent's choice. – Denis Dec 17 '13 at 15:21

I also like this version of the riddle. To answer the actual question though, I would say that it is not possible to guess incorrectly with probability only $\frac{1}{N}$, even for $N=2$. In order for such a question to make sense, it is necessary to put a probability measure on the space of functions $f: \mathbb{N} \to \mathbb{R}$. Note that to execute your proposed strategy, we only need a uniform measure on $\{1, \dots, N\}$, but to make sense of the phrase "it fails with probability at most $\frac{1}{N}$", we need a measure on the space of all outcomes. The answer will, of course, depend on which probability space is chosen.

Here's a concrete choice of a probability space that shows that your proposal will fail. Suppose that for each index $i$ we sample a real number $X_i$ from the normal distribution, so that the $X_i$ are independent random variables. If there is only one person, then no matter which boxes they view, they gain no information about the unopened boxes, due to independence. Thus, their probability of guessing correctly is actually 0, not $\frac{N-1}{N}$, say.

If it were somehow possible to put a 'uniform' measure on the space of all outcomes, then indeed one could guess correctly with arbitrarily high probability, but such a measure doesn't exist.

I'm not sure I agree. I think we can make sense of "fails with probability at most $1/N$" by saying that for every fixed sequence, the probability (which comes from the strategy) of failing is at most $1/N$. Moreover, I don't understand your counter-example, because no matter how you choose the sequence, the strategy still has a $\frac{N-1}{N}$ chance of guessing correctly. – Denis Dec 9 '13 at 17:41
That "for [each] fixed sequence, the probability of failing is at most $1/N$" basically says something like that $P(F|S)=1/N$ for each sequence $S$. But you can't infer that $P(F)=1/N$ unless you've got a probability measure on the whole space conglomerable with respect to the partition induced by the $S$s. (I bet the probabilities are going to be at best finitely additive, and if we have merely finitely additive probabilities, we can have failures of conglomerability.) I am reminded of the Brown-Freiling argument against CH (mdpi.com/2073-8994/3/3/636). – Alexander Pruss Dec 9 '13 at 17:53
    
I don't get why we need a probability measure on the sequences. Why can't we say that "winning with probability at least $\frac{N-1}{N}$" means that no matter the sequence chosen by an adversarial opponent, we will win with probability at least $\frac{N-1}{N}$? – Denis Dec 9 '13 at 19:21
    
I now think the i.i.d. normally distributed counterexample doesn't work. The problem is that you're looking at a "randomly" indexed variable (I guess something like $X_{100M + i}$, where $i$ is randomly distributed over $\{0,...,99\}$ and $M$ is chosen by the algorithm), but the "random" index may not be a measurable function ($i$ is measurable, but $M$ presumably won't be). – Alexander Pruss Dec 9 '13 at 19:24
    
@DK: Brown-Freiling argument: Assume CH. Let $\prec$ be a well-order of $[0,1]$. You and I are assigned i.i.d. uniform $X$ and $Y$ in $[0,1]$. One wins if one has the $\prec$-lower number. I say: No matter what you have, say $y_0$, $\{ x : x \prec y_0 \}$ is a countable set, so it has probability zero, and so almost surely I'll lose. But you conclude that almost surely you'll lose. This seems an analogue to your argument. If you think it disproves CH, work instead in ZFC with a translation-invariant extension of Lebesgue measure that assigns zero probability to all subsets of cardinality $<\mathfrak c$. – Alexander Pruss Dec 9 '13 at 19:35

The counter-intuitiveness of the axiom of choice is clouding the real issue here (I think). If I have a secret natural number and you try to guess a larger one, what are your odds of success? Is there a way to make them better (say, if you have $99$ friends)? You have only finitely many ways to fail and infinitely many ways to succeed, but we can't really assign a probability. That is also what is behind this puzzle.

Going back to the given problem, suppose that each box has a guard. At a certain time each guard sees the contents of every other box, then guesses the contents of their own. Under AoC, with an agreed set of equivalence class representatives and the familiar protocol, we know that all but finitely many will be correct. That is strange, but we are used to this consequence of the axiom of choice. If you have full knowledge of all this and are required to become the guard for a box of your choice (after looking into as many other boxes as you wish), can we say that you are certain to make a correct guess? Not really.

The situation in the question above seems clearer to me if we stipulate ahead of time that all but finitely many boxes contain $0$ (no assumption, other than finiteness, on which boxes are the exceptions). If there are $100$ potential guards who are friends, can they be sure that each can pick a box and guess $0$ (what else can they do?) with at least $99$ correct? Yes: as before, split the boxes into $100$ subsequences; person $k$ looks at all the boxes in every subsequence other than his own, then chooses a box further along than the last non-zero box in every other subsequence.
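This finite-support variant needs no choice function, so the protocol can actually be run. Here is a minimal Python sketch of it; the representation of the boxes as a dict of the non-zero entries, and all the names, are my own, not from the answer.

    def pick_box_and_guess(nonzero, k, N=100):
        """Strategy of guard k: look at every box outside his own subsequence,
        then pick a box in his own subsequence lying strictly beyond the last
        non-zero box seen in any other subsequence, and guess 0."""
        # Largest position (within its subsequence) of a non-zero box belonging
        # to some other subsequence; -1 if every other subsequence is all zeros.
        last_seen = max((n // N for n in nonzero if n % N != k), default=-1)
        chosen_box = N * (last_seen + 1) + k   # a box guard k has not opened
        return chosen_box, 0                   # the guess is always 0

    # Example: only boxes 3, 517 and 1200 are non-zero; every other box holds 0.
    boxes = {3: 2.5, 517: -1.0, 1200: 7.0}
    results = [pick_box_and_guess(boxes, k) for k in range(100)]
    wrong = [k for k, (box, guess) in enumerate(results) if boxes.get(box, 0) != guess]
    assert len(wrong) <= 1   # at most one of the 100 guards guesses incorrectly

A guard fails only if his own subsequence has a non-zero box strictly beyond every non-zero box in all the other subsequences, and two different subsequences cannot each out-reach the other, so at most one guard can fail.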

If you are one member of this set of $100$ friends, are your odds of being correct (when you guess $0$) improved $100$-fold, or at all? Not really.

Essentially, $100$ people are assigned hidden natural numbers and must each name a number greater than their assigned one (so there are only finitely many ways to fail and infinitely many ways to succeed!). If each person can see all the other numbers, then at most one will fail.

In other words, given AoC, we may assume a known set of coset representatives such that every real sequence is (uniquely) the termwise sum of a coset representative and a sequence with only finitely many non-zero terms. Once we accept that (which is weird but familiar), there is no loss in saying that the chosen sequence lies in a particular coset, so why not the coset of the zero sequence?
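In symbols: write $F$ for the subgroup of sequences with finite support and $r$ for the chosen set of coset representatives of $\mathbb R^{\mathbb N}/F$; then every sequence decomposes uniquely as
$$\vec u = r([\vec u]) + \vec f,\qquad \vec f\in F,$$
and granting the family of representatives as known amounts to treating $r([\vec u])$ as background, so we may as well take it to be the zero sequence and assume $\vec u=\vec f$ has finite support, as in the version above.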
