Not as interesting as I thought

As often happens, after writing yesterday’s post, it stuck around in the back of my mind, and I think I might be wrong about it.

Briefly, the background is that a friend was looking at lopsided results from a supposedly-random survey, and wondering whether they were really random. People warned her to be careful about her judgment because unlikely things are likely to happen. My conclusion was that we should estimate whether the survey was biased using Bayes’ formula, and I didn’t see a place there for the “unlikely things are likely to happen” heuristic. I wrote

But “human biases” doesn’t seem to have any obvious spot in Bayes’ formula. The calculation gives a probability that doesn’t have anything to do with your biases except insofar as they affect your priors. Who cares whether the program has been used hundreds or thousands of times before? We’re only interested in this instance of it, and we don’t have any data on those hundreds or thousands of times.

and went on to say

In the end, the “unlikely events are likely to occur” argument doesn’t seem relevant here. If we looked at a large pool of surveys, found one with lopsided results, and said, “Aha! Look how lopsided these are! Must be something wrong with the survey process!” that would be an error, because by picking one special survey out of thousands based on what its data says, we’ve changed P(data|random). That is, it is likely that the most-extreme result of a fair survey process looks unfair. But we didn’t do that here, so why all the admonitions?

But since then, I’m unsure about my statement, “the most-extreme result of a fair survey process looks unfair. But we didn’t do that here”. It’s true that there is only one survey in question, and we aren’t picking the most extreme survey out of many. However, we are picking one particular thing that happened to someone out of many. And this can, I think, come into Bayes’ formula.

A good Bayesian must use every bit of available evidence in their calculations. The survey results are one piece of evidence, but another is the fact that we decided to analyze this survey. If the survey had come up with a fair-looking split, like 80 – 92 – 88, no one would have stopped to think about it. But we did stop to think about it. So Bayes’ formula should not be

P(random|data) = \frac{P(data | random)P(random)}{P(data)}

but

P(random|data + special\_notice) = \frac{P(data + special\_notice|random)P(random)}{P(data+special\_notice)}

If you have a large pile of surveys and you pick one up at random, intending to see whether it looks fair, and then notice that its results look wonky, you are perfectly justified in saying there might be something wrong with the survey process. If you have a large pile of surveys and you root through them until you find one that’s wonky, you aren’t. The key isn’t the number of surveys that exist, since that’s the same in both cases. It’s that in the second case, the fact that you took notice of the survey changes the likelihood of seeing skewed results, and this must be taken into account. So at least preliminarily, until I change my mind again or get some more expert feedback, I eat my words on what I said yesterday.
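The difference between the two cases can be checked with a quick simulation. This is my own sketch, not from the post: the survey model is an assumption (260 respondents split uniformly among three options, loosely echoing the 80 – 92 – 88 example above), and “wonkiness” is just the spread between the most- and least-popular options. Picking a survey at random gives a typical spread; cherry-picking the wonkiest of a thousand gives a much larger one, even though every survey was fair.

```python
# Sketch: how cherry-picking changes what "wonky" results look like,
# even when every survey is generated by a perfectly fair process.
import random

def survey(n=260, options=3):
    """Simulate one fair survey: n respondents choose uniformly among options."""
    counts = [0] * options
    for _ in range(n):
        counts[random.randrange(options)] += 1
    return counts

def wonkiness(counts):
    """Spread between the most and least popular options."""
    return max(counts) - min(counts)

random.seed(0)  # reproducible illustration
pile = [survey() for _ in range(1000)]

# Case 1: pick one survey at random, then look at it.
picked_at_random = wonkiness(random.choice(pile))

# Case 2: root through the pile for the wonkiest-looking survey.
cherry_picked = wonkiness(max(pile, key=wonkiness))

print("spread of a randomly picked survey:", picked_at_random)
print("spread of the wonkiest survey:     ", cherry_picked)
```

The second number comes out far larger than the first, which is the whole point: P(data|random) for “the survey I happened to pick” and for “the wonkiest survey I could find” are very different quantities.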


One Response to “Not as interesting as I thought”

  1. Paul Murray Says:

    So the question becomes: given the amount of surveys I have looked through, what’s the chance of finding one this wonky?
