The Update Project

The goal of the Update Project (UP) is to help decision-makers improve the accuracy of their models, especially as they relate to strategies for having a large positive impact on the world. In practice, this means I assemble small groups of thinkers from fields such as tech, academia, the nonprofit world, and government whose models of some topic differ. (See my open questions page for some of the disagreements that have emerged so far.)

UP is based on a few core values:

Taking ideas seriously. Typically, conversations about ideas feel kind of like recreational diversions: we enjoy batting around interesting thoughts and saying smart things, and then we go back to doing whatever we were already doing in our lives. Which is a fine thing to do — but at least sometimes, I think we should be asking ourselves questions like: “How could I tell if this idea were true? If it is true, what does it imply I should be doing differently in my life? What else does it imply I’m wrong about?” And, zooming out: “Where are my blind spots? Which important questions should I be thinking about that I’m not? Which people should I be talking to?” In other words, taking ideas seriously means treating your worldview as something that affects outcomes in the world you care about — and therefore, wanting to make your worldview as full and accurate as possible.

Disagreements are interesting. When thoughtful people with access to the same information reach very different conclusions from each other, we should be curious about why. I think we tend to be incurious about this simply because it’s so common that we’re used to it. But if, for example, a medical community is divided on whether Treatment A or Treatment B does a better job of curing some disease, they should want to get to the bottom of that disagreement, because the right answer matters — lives are at stake. And I claim the same is true, if less directly, for open questions like these.

Strong opinions, weakly held. One common failure mode I see in smart people is that they abstain from forming opinions because “I’m not an expert” or “It’s hard to know for sure.” Instead, I think we should be bold enough to venture guesses, expressed clearly enough that it’s easy for someone else, or the universe, to prove us wrong. In the long run, I think this policy leads to much better models — and better thinkers — than the policy of trying to minimize our error in the short run.

…Other than those meta-values about how people should approach problems, the Update Project is pretty much ideologically neutral. Of course I personally have opinions about object-level things, but in my capacity running UP I’m not trying to promote any particular object-level views. I’m just banking on the assumption that if smart people follow good processes for evaluating ideas together, the net expected result is more accurate views.

Why the name? In my circles, an “update” refers to a revision to one’s model. (Technically the reference is to a Bayesian update, but the way we use the term colloquially doesn’t usually mean something quite so precise.) For example, you might say, “I’ve updated on the fact that…” or “Okay, I’m updating in favor of the idea that…”
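For the curious, the precise version is Bayes’ rule (included here only as background; as the parenthetical above notes, the colloquial usage is looser). If H is a hypothesis and E is new evidence, your updated credence in H should be:

    P(H | E) = P(E | H) × P(H) / P(E)

That is, you revise your confidence in H in proportion to how strongly H predicted the evidence you actually observed.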

I like the word because of how matter-of-fact it is. “Changing one’s mind” feels so weighty and dramatic, but updating is just this workaday thing we do — or should do — all the time.

My time and expenses for this project are covered by a contract with the Open Philanthropy Project.