[Original thread here: Tegmark’s Mathematical Universe Defeats Most Arguments For God’s Existence.]
1: Comments On Specific Technical Points
2: Comments From Bentham’s Bulldog’s Response
3: Comments On Philosophical Points, And Getting In Fights
Comments On Specific Technical Points
Nevin Climenhaga writes:
Tegmark's Mathematical Universe theory faces similar problems to more standard physical multiverse hypotheses as a response to the fine-tuning argument. First, it predicts that most observers would be "Boltzmann Brains".
It's not right that, as the post suggests, "a conscious observer inevitably finds themselves inside a mathematical object capable of hosting life." Although most mathematically possible universes have parameters that don't allow for complex life to evolve in the way we think it did in our universe, that doesn't mean there are no observers at all in those universes. Even in a universe at a state of thermal equilibrium (maximum entropy), there should be very infrequent chance fluctuations that lead to Boltzmann Brains: particles that have organized themselves into a functioning brain in a sea of chaos surrounding them. And while these fluctuations are very infrequent, since a fine-tuned universe is so unlikely, in the space of all possible universes, there are still vastly more Boltzmann Brain observers, most of whose experiences are a jumbled mess, than there are observers with highly ordered experiences as of a fine-tuned universe.
So if we are random observers in the space of all possible universes, it's vastly more likely that our experiences would be a jumbled mess than that they would be of the ordered kind we actually have. (How much more likely will depend on how we sort out the simplicity weighting, but I don't think any principled weighting will avoid this conclusion.)
On the plausible assumption that it's more likely that our experiences would be ordered if the universe was created by God, our experiences are then evidence for God over all possible universes existing.
Boltzmann brains are a problem for even a single universe - the classical “Boltzmann brain” paradox assumes the universe will have some amount of normal life in the “early years” when stars and galaxies will still form, and then only (spectacularly rare) Boltzmann brains in the later years after all matter has decayed. But since the early years are finite and the later years (potentially) infinite, there will be more Boltzmann brains than normal life.
I think of this as one of many paradoxes of infinity. But I don’t think there’s an additional paradox around fine-tuning or the multiverse. During the “early” phase when universes still have matter and stars, Boltzmann brains are rarer than ordinary observers in universes that got the fine-tuning right.
I’m having trouble finding any “official” calculation of the exact likelihood of Boltzmann brains, but Wikipedia cites an unsourced calculation that our universe should get one every 10^500 years. Since our universe is about 10^10 years old, that means a 1 / 10^490 chance of a Boltzmann brain during our universe’s history so far.
Suppose there are about 10^10 observers per “real” inhabited universe-lifetime (this is probably a vast underestimate - it’s about the number of humans who have ever lived, so it’s ignoring aliens and future generations). This suggests you need 10^500 universe-lifetimes to create enough conscious observers (via Boltzmann brain) to equal one “real” universe.
But the most-cited estimate for the fine-tunedness of the universe is 10^229, so observers in fine-tuned universes should still be centillions of times more likely than Boltzmann brains.
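Putting those two made-up numbers together (a minimal back-of-the-envelope sketch in code; the 10^10-observers-per-universe figure and all the other inputs are just the guesses above, not real data):

```python
# Back-of-the-envelope version of the comparison above, using the made-up numbers from the text.
years_per_boltzmann_brain   = 10**500   # one Boltzmann brain per this many years (the unsourced Wikipedia figure)
universe_lifetime_years     = 10**10    # rough age of our universe so far
observers_per_real_universe = 10**10    # ~number of humans who have ever lived (ignoring aliens, future people)

# Universe-lifetimes of empty universe needed to produce as many Boltzmann-brain observers
# as one "real" inhabited universe produces:
lifetimes_needed = observers_per_real_universe * years_per_boltzmann_brain // universe_lifetime_years
# = 10**500

fine_tuning_penalty = 10**229  # most-cited estimate: one fine-tuned universe per 10^229

# How many times more common observers in fine-tuned universes are than Boltzmann brains:
ratio = lifetimes_needed // fine_tuning_penalty
print(f"ratio ≈ 10^{len(str(ratio)) - 1}")   # prints: ratio ≈ 10^271
```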
Both of these numbers are extremely made up, but this is the calculation you’d have to do if you wanted to argue that Boltzmann brains were counterevidence to the multiverse. In the absence of someone doing this calculation convincingly and showing it comes out against the multiverse, I don’t think the counterargument really stands.
I think people think it’s devastating because they’re confusing it with an older argument, from back before Big Bang theory, when people thought maybe the entire universe arose as a Boltzmann fluctuation. Here people objected that it’s more likely for a single brain to arise as a fluctuation than for the whole universe to do so. But Tegmark’s theory doesn’t claim that universes arise as Boltzmann fluctuations, so it’s possible for universes to be more likely than Boltzmann brains.
Another commenter, Gabriel, links a paper questioning whether Boltzmann brains are possible - though remember that if we’re positing a multiverse then the borders of “possible” have to expand beyond our current laws of physics.
Xpym writes:
I think you're conflating two things - mathematical objects are logically necessary in the abstract game we play within our minds, where initial axioms and rules of inference are accepted by fiat. But MUH posits that math "exists" independently of our minds, which is far from uncontroversial, let alone logically necessary.
I agree this is a strong attack on MUH, but I also think you can sort of just . . . sidestep it?
Tolkien has a prologue where all of the archangels sing of the universe, and then God decides He likes it and gives it the Secret Fire that transforms it from mere possibility into existence.
I think of MUH as claiming that there is no Secret Fire, no difference between possibility and existence. We live in a possible world. How come we have real conscious experiences? Because the schematic of Possible World #13348 says that the beings in it have real conscious experiences. Just as unicorns don’t exist (but we can say with confidence that they have one horn), so humans don’t have any special existence of the sort that requires Secret Fires (but we can say with confidence that they are conscious).
Isn’t this crazy? I think of the Mandelbrot set as a useful intuition pump. A refresher: the Mandelbrot set comes from an extremely simple rule - repeatedly applying z → z^2 + c in the complex plane and watching whether the result stays bounded or diverges. Make some artistic design decisions, and the graph looks like this:
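(If you want to see just how little machinery is involved, here is a minimal escape-time sketch - the standard way the set gets drawn, with all the artistic coloring decisions omitted:)

```python
# Minimal escape-time test for the rule above: iterate z -> z**2 + c from z = 0
# and see whether it stays bounded.
def in_mandelbrot(c, max_iter=100):
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:      # once |z| exceeds 2, divergence is guaranteed
            return False
    return True

# Crude ASCII rendering of the familiar shape:
for im in range(-12, 13):
    print("".join("#" if in_mandelbrot(complex(re / 30, im / 15)) else " "
                  for re in range(-60, 25)))
```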
Where did all of that come from? It was . . . inherent in the concept of z^2 + c, I guess. Somehow lurking latent in the void. Does the Mandelbrot set “exist” in a Platonic way? Did Iluvatar give it the Secret Fire? Can you run into it on your way to the grocery store? None of these seem like very meaningful questions to me, I don’t know.
If some weird four-dimensional Mandelbrot set somehow encoded a working brain in it somewhere, is there something that it would be like to be that brain, looking out from its perch on one of the spirals and gazing into the blue depths beyond?
Lucian Lavoie writes:
I think the biggest flaw with Tegmark's argument is that consciousness just doesn't exist.
That's not a fatal flaw though; it's easy enough to just say any given object must exist within a universe complex enough to allow for its generation and subsistence. No experience necessary, and including it only muddles the conversation.
Lots of people had opinions about consciousness here, but I used it only as a shorthand. I think you can reframe this theory without talking about consciousness at all. Imagine a world where some bizarre process produced intelligent robots without any consciousness. These robots might have been imbued by the random process that created them with some specific goal (like creating even better robots), and in service of that goal, they might exchange messages with each other to communicate their insights about the universe (without “understanding” these messages in a deep way, but they could still integrate them into their future plans). These messages might include things like:
“It seems like our universe is sufficiently fine-tuned that robots can come to exist in it.”
“We find ourselves on planet Robonica VII, rather than as Boltzmann brains floating in the void. It seems like it’s not wildly impossibly uncommon for beings to exist in this way.”
“Consciousness” is a useful shorthand for discussing these insights so that we don’t have to talk about planets full of robots every time we want to have a philosophical discussion, but I don’t think anything in this discussion hinges on it.
dsteffee writes:
Why can't you make a random draw from an infinite set?
I messed up my terminology here, although luckily most people figured out what I meant. The correct terminology (thanks /r/slatestarcodex commenters) is that you can’t make a uniform random draw from a set of infinite measure.
Imagine trying to pick a random number between one and infinity. If you pick any particular number - let’s say 408,170,037,993,105,667,148,717 - then it will be shockingly low - approximately 100% of all possible numbers are higher than it. It would be much crazier than someone trying to pick a number from one to one billion and choosing “one”. Since this will happen no matter what number you pick, the concept itself must be ill-defined. Reddit commenter elliotglazer has an even cuter version of this paradox:
» “The contradiction can be made more apparent with the "two draws" paradox. Suppose one could draw a positive integer uniformly at random, and did so twice. What's the probability the second is greater? No matter what the first draw is, you will then have 100% confidence the second is greater, so by conservation of expected evidence, you should already believe with 100% confidence the second is greater. Of course, I could tell you the second draw first to argue that with 100% probability, the first is greater, contradiction.”
When I said you could do this with some sort of simplicity-weighted measure, I meant something like how 1/2 + 1/4 + 1/8 + … = 1. Here, even though you are adding an infinite number of terms, the sum is a finite number. So if you can put universes in some order, let’s say from simplest to most complex, you could assign the first universe measure 1/2, the second universe measure 1/4, the third universe measure 1/8, and so on, and the sum of their measure would be 1. Then you just draw a random number between 0 and 1 and see which universe it corresponds to (ie if you got 0.641, then since this is between 1/2 and 1/2+1/4, it corresponds to universe #2).
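Here is a minimal sketch of that sampling procedure (the 1/2, 1/4, 1/8 weighting is just the example above, not a claim about what the right measure actually is):

```python
import random

def sample_universe():
    """Pick a universe with measure 1/2 for #1, 1/4 for #2, 1/8 for #3, ..."""
    u = random.random()            # uniform draw between 0 and 1
    n, cumulative = 1, 0.5         # universe #1 owns the interval [0, 1/2)
    while u >= cumulative:         # walk through 1/2, 1/2 + 1/4, 1/2 + 1/4 + 1/8, ...
        n += 1
        cumulative += 1 / 2**n     # universe #n has measure 1/2**n
    return n                       # e.g. u = 0.641 lands between 1/2 and 3/4, so universe #2

print(sample_universe())
```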
EigenCat writes:
But there are objective measures of simplicity! They come from information theory. It's the information content of the rules and initial conditions in bits, or else their Kolmogorov complexity (how many bits you need for a program that generates these rules and initial conditions). Of course there's still the question of which *exact* measure we use, but that's very different from saying we don't have an objective simplicity metric at all. (And yes, God has much more complexity based on this metric, because you'd need to fully specify God's being - basically fully specify a mind, in sufficient detail to be able to predict how that mind would react to *any* situation, and that's way more complex than a few rules on a chalkboard.) Anyway, the bigger question for me is WHY does it need to be weighted specifically by simplicity (of all possible criteria) in the first place : )
I am really out of my depth talking about information theory, but my impression was that this is a useful hack, but not perfectly objectively true, because there is no neutral programming language, no neutral compiler, and no neutral architecture.
Kolmogorov complexity is sometimes regarded as language-independent, because there’s a bound on how much the choice of language can matter: switching languages changes the complexity of any given statement by at most an additive constant. But even this practically-modest bound is philosophically confusing: since the universe actually has to implement the solution we come up with, there can’t be any ambiguity. How can the cosmos make an objective cosmic choice among programming languages? This is weird enough that it takes away from the otherwise-impressive elegance of the theory.
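(For reference, the usual formal statement of that bound - the invariance theorem - runs along these lines:)

```latex
% Invariance theorem (informal): for any two universal description languages U and V,
% there is a constant c_{U,V} depending only on U and V, not on x, such that
\lvert K_U(x) - K_V(x) \rvert \;\le\; c_{U,V} \quad \text{for every string } x.
```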
But also, you can design a perverse programming language where complex concepts are simple, and simple concepts are complex. You can design a compression scheme where the entirety of the Harry Potter universe is represented by the bit ‘1’. Now the Harry Potter universe is the simplest thing in existence and we should expect most observers to live there. This is obviously a ridiculous thing to do, but why? Maybe because now the compiler is complex and unnatural, so we should penalize the complexity of language+compiler scheme? But without knowing what the system architecture is, it’s hard to talk about the size of the compiler - and in this case, we’re trying to pretend that we’re running this whole thing on the void itself, and there is no system architecture!
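(A toy version of that perverse scheme, just to make the trick explicit - the Harry Potter string is obviously a stand-in:)

```python
# A codec under which "the entire Harry Potter universe" encodes to a single bit.
HARRY_POTTER_UNIVERSE = "<full description of the Harry Potter universe>"  # placeholder

def encode(description: str) -> str:
    if description == HARRY_POTTER_UNIVERSE:
        return "1"                   # the "simplest thing in existence"
    return "0" + description         # everything else pays one extra bit

def decode(bits: str) -> str:
    return HARRY_POTTER_UNIVERSE if bits == "1" else bits[1:]

# Of course, the complexity hasn't gone anywhere - it has been smuggled into the decoder,
# which has the whole Harry Potter universe hard-coded inside it.
```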
All of this makes me think that although Kolmogorov complexity gestures at a solution, and makes it seem like there should be a solution, nobody has exactly solved this one yet.
kzhou7 writes:
Though nobody can disprove this hypothesis, there's a reason a lot of physicists dislike it: if it were actually seriously believed, at any previous point in the history of physics, it would have stopped scientific progress.
1650: why does the Earth orbit the Sun the way it does? Of course, because it's a mathematically consistent possibility, ellipses are nice, and we'd be dead if it didn't! What more is there to say? But actually it was Newton's law of gravity.
1875: why has the Sun been able to burn for billions of years, when gravitational energy would only power it for millions? It must be because otherwise, we wouldn't have had time to evolve! But actually it was nuclear energy.
1930: why is the neutron so similar in mass to the proton? Obviously, it is because otherwise complex nuclei wouldn't be stable, so you couldn't have chemistry and we wouldn't exist. But actually it was because they're both made of three light up/down quarks.
1970: why don't protons decay? You dummy, it's because otherwise the Earth would have disintegrated by now! But actually it was because baryon number conservation is enforced by the structure of the Standard Model.
From the physicist's perspective, both "God did it" and "anthropics did it" communicate the same thing: that investigating why the universe is the way it is, is a waste of time.
I think this is false. Tegmark’s version of the anthropic principle says things should be as simple as possible, preferably simple enough to fit on a chalkboard. If you tried to reduce “Earth orbits the Sun in an ellipse” to something that fits on a chalkboard, you’d run into trouble defining “Earth” and “Sun”, and if you tried to do it rigorously you would end up with something like gravity. Or even if you didn’t, explaining orbits and tides with the same thing would be simpler than using a separate equation for each of them.
The anthropic principle weakly suggests that somewhere there might be things that can't be fully explained in terms of other things, but the alternative (everything can be explained in an infinite regress, so that for each level there's always a lower one) is absurd.
Comments From Bentham’s Bulldog’s Response
Bentham’s Bulldog wrote a response, Contra Scott Alexander On Whether Tegmark’s View Defeats Most Theistic Arguments.
He starts by listing some proofs of God that MUH doesn’t even pretend to counter. I agree I was sloppy in saying MUH defeated “most” proofs of God’s existence, since proofs (like universes) are hard to enumerate and weigh precisely. I think it defeats a majority of the mentions of proofs that I hear (that is, each proof weighed by the amount it comes up in regular discourse), but that could be a function of the discourse more than of the state of apologetics.
Bulldog mentions consciousness, psychophysical harmony, and moral knowledge as proofs he especially likes which MUH doesn’t even begin to respond to. I agree consciousness is the primary challenge to any materialist conception of the universe and that I don’t understand it. I find the moral knowledge argument ridiculous, because it posits that morality must have some objective existence beyond the evolutionary history of why humans believe in it, then acts flabbergasted that the version that evolved in humans so closely matches the objectively-existing one. I admit that in rejecting this, I owe an explanation of how morality can be interesting/compelling/real-enough-to-keep-practicing without being objective; I might write this eventually but it will basically be a riff on the one in the Less Wrong sequences.
Psychophysical harmony is in the in-between zone where it’s interesting. The paper Bulldog links uses pain as its primary example - isn’t it convenient that pain both is bad (ie signals bodily damage, and evolutionarily represents things we’re supposed to try to avoid) and also feels bad? While agreeing that qualia are mysterious, I think it’s helpful to try to imagine the incoherence of any other option. Imagine that pain was negatively reinforcing, but felt good. Someone asks “Why did you move your hand away from that fire?” and you have to say something like “I don’t know! Having my hand in that fire felt great, it was the best time of my life, but for some reason I can’t bring myself to do this incredibly fun thing anymore.” And it wouldn’t just be one hand in one fire one time - every single thing you did, forever, would be the exact opposite of what you wanted to do.
It sounds prima facie reasonable to say qualia aren’t necessarily correlated with the material universe. But when you think about this more clearly, it requires a total breakdown of any relationship between the experiencing self, the verbally reporting self, and the decision-making self. This would be an absurd way for an organism to evolve (Robert Trivers’ work on self-deception helps formalize this, but shouldn’t be necessary for it to be obvious). Once you put it like this, I think it makes sense that whatever qualia are, evolution naturally had to connect the “negative reinforcement” wire to the “unpleasant qualia” button.
(why think about this in terms of evolutionarily-controlled wires at all? Consider people with genetic pain asymbolia. “What, did the hand then of the Potter shake?”)
But aside from these, he also had some objections to Tegmark in particular:
One thing that Scott did not mention but could have is that the Tegmark view explains the anthropic data. On the Tegmark view, the number of people that exist would be the biggest number of people there could be! That gives you enough people to explain the fact that you exist (if, as I suggest, you’re likelier to exist if more people exist, and should thus think the number that exists is the most that it could be, the Tegmark view accommodates that). But I think the Tegmark view has various problems and cannot explain most of the evidence favoring theism.
The biggest problem for the view is that it collapses induction (a while ago Scott and I had a lengthy back and forth about this). On the Tegmark view, there are unsetly many people with every property: because there are infinitely many mathematically describable worlds that are just like ours up until one second from now, but that then turn to jello or a pile of beans. But there’s no reason to think we’re not in such a world. There are infinitely many in each case.
Now, the reply given by proponents of the Tegmark view is that the simpler worlds exist in great numbers (I’m about to plagiarize myself FYI—I’m funky like that!). The problem is that it doesn’t make much sense to talk about greater numbers of worlds unless one is a bigger cardinality than the other. The way infinities are measured is by their cardinality—that’s determined by whether you could put the members of the infinite set in one to one correspondence. If you have five apples, and I have five bananas, they’re sets of the same size, because you can pair them 1:1.
Often, infinities can be the same cardinality even if one seems bigger than the other. For instance, the set of all prime numbers is equal in size to the set of all natural numbers, because you can pair them one to one: you can pair 1 with the first prime, 2 with the second prime, 3 with the third prime, and so on.
Crucially, even if deceived people are rarer and non-deceived people are common, the number (measured by cardinality) of deceived people will be the same as the number of non-deceived people. To see this, suppose that there are infinite galaxies. Each galaxy has 10 billion people who are not deceived and just one person who is deceived. Intuitively you’d think that there are more non-deceived people than deceived people.
This is wrong! There are the same number. Suppose the galaxies are arranged from left to right, with a leftmost galaxy but no rightmost galaxy. Imagine having the deceived people from the first 100 trillion galaxies move to the first galaxy (which contains 10 billion non-deceived people). Next, imagine having the deceived people from the next 100 trillion galaxies move to the second galaxy. Assuming you keep doing this for all the people, just by moving the people around, you can make each galaxy have 100 trillion people who are deceived and only 10 billion who aren’t deceived. So long as the number of deceived people is not a function of where the people are located, it’s impossible to hold that there are more deceived people than non-deceived people based on the fact that deceived people are rarer than non-deceived people. How rare deceived people are can be changed just by moving people around.
That is, suppose that there are one billion real people for every Boltzmann brain. If there are infinite universes, then the ratio becomes one-billion-times-infinity to infinity. But one billion times infinity is just infinity. So the ratio is one-to-one. So you should always be pretty suspicious that you’re a Boltzmann brain. The only way you can ever be pretty sure you’re not a Boltzmann brain is if nobody is a Boltzmann brain, presumably because God would not permit such an abomination to exist.
I’ve talked about this with Bulldog before, and we never quite seem to connect, and I worry I’m missing something because this is much more his area of expertise than mine - but I’ll give my argument again here and we can see what happens.
Consider various superlatives like “world’s tallest person”, “world’s ugliest person”, “world’s richest person”, etc. In fact, consider ten categories like these.
If there are a finite number of worlds, and the average world has ten billion people, then your chance of being the world’s richest person is one-in-ten-billion.
But if there are an infinite number of worlds, then your chance is either undefined or one-in-two, as per the argument above.
But we know that it’s one-in-ten-billion and not one-in-two, because in fact you possess zero of the ten superlatives we mentioned earlier, and that would be a 1-in-1000 coincidence (0.5^10 ≈ 1/1024) if you had a 50-50 chance of having each. So it seems like the universe must be finite rather than infinite in this particular way.
But both Bulldog and I think infinite universes make more sense than finite ones. So how can this be?
We saw the answer above: there must be some non-uniform way to put a measure on the set of universes, equivalent to (for example), 1/2 + 1/4 + 1/8 + … Now there’s a finite total amount of measure and you can do probability with it again.
This isn’t just necessary for Tegmark’s theory. Any theory that posits an infinite number of universes, or an infinite number of observers, needs to do something like this, or else we get paradoxical results like that you should expect 50-50 chance of being the tallest person in the world.
So when Bentham says:
The simplest version of the Tegmark view would hold simply that all mathematical structures exist. But this implies that you’d probably be in a complex universe, because there are more of them than simple universes. To get around this, Tegmark has to add that the simpler universes exist in greater numbers. I’ll explain why this doesn’t work in section 3, but it’s clearly an epicycle! It’s an extra ad hoc assumption that adds to the cost of the theory.
… I disagree! Not only is it not an epicycle artificially added to the Tegmark theory, but Bulldog’s own theory of infinite universes falls apart if he refuses to do this! The fact that everything with Tegmark works out beautifully as soon as you do this thing (which you’re already required to do for other reasons) is a point in its favor.
But I would also add that we should be used to dealing with infinity in this particular way - it’s what we do for hypotheses. There are an infinite number of hypotheses explaining any given observation. Why is there a pen on my desk right now? Could be because I put it there. Could be because the Devil put it there. Could be because it formed out of spontaneous vacuum fluctuations a moment ago. Could be there is no pen and I’m hallucinating because I took drugs and then took another anti-memory drug to forget about the first drugs. Luckily, this infinite number of hypotheses is manageable because most of the probability mass is naturally in the simplest ones (Occam’s Razor). When we do the same thing to the infinity of possible universes, we should think of it as calling upon an old friend, rather than as some exotic last-ditch solution.
Finally, I admit an aesthetic revulsion to the particular way Bentham is using “God” - which is something like “let’s imagine a guy with magic that can do anything, and who really hates loose ends in philosophy, so if we encounter a loose end, we can just assume He solved it, so now there are no loose ends, yay!” It’s bad enough when every open problem goes from an opportunity to match wits against the complexity of the universe, to just another proof of this guy’s existence and greatness. But it’s even worse when you start hallucinating loose ends that don’t really exist so that you can bring Him in to solve even more things (eg psychophysical harmony, moral knowledge). If there is a God, I would like to think He has handled things more elegantly than this, so that we only need to bring Him in to solve one or two humongous problems, rather than whining for His help every time there’s a new paradox on a shelf too high to reach unassisted.
Comments On Philosophical Points, And Getting In Fights
Adrian writes:
I don't get it. What's the point of this? Is any of that even remotely falsifiable? Does this hypothesis make any predictions that can ever be observed? If not, it's not a theory, merely intellectual navel-gazing, and it cannot tell us anything about the nature of our reality.
Joshua Greene writes:
Are there any falsifiable predictions from this approach? I'm not talking about meta-level ("no theist will be convinced.")
People need to stop using Popper as a crutch, and genuinely think about how knowledge works.
Falsifiability doesn’t just break down in weird situations outside the observable universe. It breaks down in every real world problem! It’s true that “there’s no such thing as dinosaurs, the Devil just planted fake fossils” isn’t falsifiable. But “dinosaurs really existed, it wasn’t just the Devil planting fake fossils” is exactly equally unfalsifiable. It’s a double-edged sword! The reason you believe in dinosaurs and not devils is because you have lots of great tools other than falsifiability, and in fact you never really use the falsifiability tool at all. I write a bunch more about this here and here.
Every observation has an infinite number of possible explanatory hypotheses. Some of these could be falsifiable - but in practice you’re not going to falsify all infinity of them. Others aren’t falsifiable even in principle - for example, you may be dealing with a historical event where archaeologists have already dug up all the relevant pottery shards and all other evidence has been lost to time.
What we really do when debating hypotheses isn’t wait to see which ones will be falsified, it’s comparing simplicity - Occam’s Razor. Which is more likely - that OJ killed his wife? Or that some other killer developed a deep hatred for OJ’s wife, faked OJ’s appearance, faked his DNA, then vanished into thin air? Does this depend on the police having some piece of evidence left in reserve which they haven’t told the theory-crafters, that they can bring out at a dramatic moment to “falsify” the latter theory? No. Perhaps OJ’s defense team formulated the second-killer theory so that none of the evidence presented at the trial could falsify it. Rejecting it requires us to determine that it deserves a complexity penalty relative to the simple theory that OJ was the killer and everything is straightforwardly as it seems.
Falsifiability can sometimes be a useful hack for cutting through debates about simplicity. If the police had held some evidence in reserve, then asking OJ’s defense team to predict it using the second-killer theory might strain their resources (or it might not - see the garage dragon parable). But when we can’t use the hack, we can just hold the debate normally.
Tup99 writes:
There's one very important point of clarification that is missing, which has thrown me off from understanding the point of this post.
The title suggests that Tegmark has defeated most proofs of God. But AFAICT, it's actually more like: "If Tegmark's hypothesis is true, then it defeats most proofs of God." And doesn't mention any evidence for this hypothesis (that existing in possibility-space is enough for a being to in fact be experiencing consciousness) being true.
You can defeat a proof with a possibility claim. For example, if you claim to have proven that all triangles are greeblic, and I point out that you only demonstrated this for equilateral triangles but forgot to demonstrate it for isosceles triangles, then your proof fails. I don’t have to prove that isosceles triangles aren’t greeblic for your proof to stop working.
People bring up the fine-tuning argument as a proof of God. If I show that other things can create fine-tuning, then God is no longer proven. This doesn’t mean God definitely doesn’t exist. It just means that we’re still uncertain.
(and your exact probability should depend on which solution to the fine-tuning problem etc you find more plausible)
Ross Douthat writes:
Okay, but earlier this month, Ross published an article, My Favorite Argument For The Existence Of God, where he talked about how the multiverse objection to the fine-tuning argument failed because it didn’t explain why physical law was so comprehensible. But Tegmark’s mathematical universe hypothesis does explain why physical law is comprehensible. In the original post, I described this as:
Argument from comprehensibility: why is the universe so simple that we can understand it? Because in order for the set of all mathematical objects to be well-defined, we need a prior that favors simpler ones; therefore, the average conscious being exists in a universe close to the simplest one possible that can host conscious beings.
I don’t understand how someone writes an article saying that the multiverse can’t answer the comprehensibility objection, reads someone else explain how a version of the multiverse answers the comprehensibility objection, and then gets salty because they’ve already heard of the multiverse theory. If you already understood Tegmark’s theory, why did you write an article saying you didn’t know of good answers to the question which it was designed to answer?
I’m not even claiming to be novel! I don’t even know if Max Tegmark claims to be novel! Mock us all you want for being boring and stale and unfashionable, just actually respond to our boring/stale/unfashionable points instead of continuing to act like they don’t exist!
Shankar Sivarajan writes:
Yeah, this is basically Plato.
Michael L Roe writes:
2010? I’ve recently been asking DeepSeek about René Descartes and Gottfried Leibniz. Someone could have said most of that in 1710… “Why is there something rather than nothing?” is straight out of Leibniz’s Principles of Nature and Grace, which we can now read as being about Artificial Intelligence.
Oliver writes:
Should we refuse to eat Beans?
I find this kind of thing annoying too, sorry. “Oh, this new idea is basically just reinventing Plato. And also Descartes and Leibniz. And Pythagoras. All of whom were just reinventing each other, or whatever.”
If anything to do with the Ideal reminds you of Plato, and anything to do with the Real reminds you of Aristotle, then you can dismiss any idea as either “just reinventing Plato” or “just reinventing Aristotle”. This is the intellectual equivalent of those journalists who would write articles on Uber saying “These Silicon Valley geniuses don’t realize that they’ve just reinvented the taxi!”
Kenny Easwaran writes:
It’s a lot like David Lewis’s modal realism (from his 1986 book On the Plurality of Worlds) and has something in common with Mark Balaguer’s plenitudinous platonism (from his 1998 book Platonism and Anti-Platonism in Mathematics), but it’s a bit different from either. I suspect some of the medievals and ancients had some related idea. But until the development of 20th century logic there wasn’t a clear conception of what “every consistent mathematical theory” means, and it would likely take an analytic philosopher to endorse such a blunt view that this is everything that exists.
Whatever, I give this one a pass, at least he picked someone other than Plato and Aristotle.
Rob writes:
Love your blog, love the content, only superficially considered the arguments, but I agree with commenters saying there are pretty odd assertions in here.
My goodness! Odd assertions? In an ACX post? What a disaster! Somebody must go tell the Queen!
Jumping in before I've read the full post, but Bentham's Bulldog's comments about cardinality are incorrect -- the way you described things is closer to correct (although I do wish you would learn some actual math and stop mangling things :P ).
It is *not*, in fact, the case that in mathematics we measure the size of an infinite set solely by its cardinality. Rather, cardinality is *one* way of measuring the size of a set, that can be used as appropriate. For a subset of the plane, one might use area. For a subset of the whole numbers, one might use natural density. For sets equipped with a well-ordering you'd use ordinals, not cardinals. Etc, etc.
Usually cardinality is not a very helpful measure when dealing with infinite sets, in fact, because it's so crude and lossy. (A rectangle of area 2 and a rectangle of area 1 have the same cardinality, but they still have different areas!) I'd say one advantage of cardinality is that it can be applied to literally any set, regardless of context, whereas other measures will have a more limited domain of application; but as advantages go that's generally not a very relevant one. Most mathematicians aren't set theorists!
If someone says to you that in math the size of an infinite set is measured solely by cardinality, you can tell they haven't actually done much math involving infinite sets!
So is BB right or not when he claims that if every world has one billion real people and one Boltzmann brain, and there are an infinite number of worlds, the chance of being a Boltzmann brain isn't one-in-a-billion, it's 50-50?
I'm saying he's wrong, yes, or at least not necessarily right. Cardinality is usually not the right way to think about things outside of finite cases.
He's wrong.
The number of natural numbers that are divisible by one billion is aleph_0 (that's a cardinality). The number of natural numbers that aren't divisible by one billion is also aleph_0. It emphatically does not follow that the probability that a natural number is divisible by a billion is equal to the probability that a natural number is not divisible by a billion!
Comparing cardinalities to get probabilities doesn't make any sense, and isn't something a mathematician would do. (The fact that the cardinalities are equal has other consequences, notably that you can make a 1-1 pairing between the numbers that aren't divisible by a billion and the numbers that are, that doesn't leave any number out. Or equivalently, a 1-1 pairing between the regular brains and the Boltzmann brains. In fact, this 1-1 pairing is pretty much the definition of "cardinality".)
Saying the probability is one-in-a-billion is intuitive, but I don't know of a mathematically rigorous way to obtain it. It's tricky because the "probability" that you get changes if you group the numbers/brains differently.
An example where trying to compare infinities breaks particularly badly: https://www.philosophyetc.net/2006/03/infinite-spheres-of-utility.html
I disagree that he is wrong. He's "not even wrong": there is no question here to which the notion of probability can usefully apply.
You can have a sigma algebra and a probability measure on any set (e.g. the trivial sigma algebra); how well that models reality is another question, though.
It seems odd that you can use a random draw from an uncountably infinite set to simulate a random draw from the countably infinite set of integers. Very interesting though. Maybe the entire conversation is just suffering from lack of crazy advanced math courses.
Also as long as I'm jumping in with early comments, I was going to link to Sarah's old Twitter thread about how a lot of the claims about pain asymbolia are likely wrong and the whole thing is probably misdescribed, but she appears to have deleted it. Well -- go ask Sarah about pain asymbolia. :P
Ross Douthat has gotten "refuting proofs of the existence of God" and "arguing for the nonexistence of God" confused, huh? :-/
I am not so sure it has been his singular or greatest confusion
Upgrading Scott's conversion date to 2030. Probabilities remain 70% Catholic, 20% Orthodoxy, 10% other (most likely very high church Protestant). I would put Orthodoxy higher but I don't think he will want to give up on Scholasticism.
Simplicity is often used to assert higher probability in "reality". In fact, Occam's razor is only a guide for humans on how to pick a model worth testing, mostly because it is easier to work with and test, not because it is likely to be more "real".
There have been some successes with simpler models, but mostly as stepping stones to something much more complicated. There have been plenty of failures. In fundamental physics nearly all "simple" extensions or replacements of the Standard model of particle physics have experimental consequences that contradict observations. Same happened with all known extensions of General Relativity.
If you are a researcher and look critically over your own area, you will notice that "simple" is not a good approximation of "accurate".
I think you're talking about some sort of vague philosophy-of-science political debate. On a mathematical/technical level, which I think is what we're doing here, simpler simply *is* more probable, that's how math works.
Very simple example is that "the first object I will pick out of this hat is blue" is more probable than "the first object I will pick out of this hat is a blue sphere between 1 and 2 inches in size".
Slightly more complicated example: suppose that I am rolling ten d20s, and I will declare success if EITHER the red dice comes up 20, OR the blue dice comes up 5 and the yellow dice comes up 3 and it's raining outside. I declare success. Which is more likely - that the red dice came up 20, or that the conjunctive thing was true?
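To put rough numbers on that second example (assuming, purely for illustration, a 50% chance of rain):

```python
p_red_20      = 1 / 20                        # the red die comes up 20
p_conjunction = (1 / 20) * (1 / 20) * 0.5     # blue = 5 AND yellow = 3 AND it's raining

print(p_red_20)       # 0.05
print(p_conjunction)  # 0.00125 - forty times less likely than the single-die explanation
```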
I realize this seems cheating because I'm using easily-quantified things like dice rolls, but I think the same principle extends to everything else. The reason "OJ killed his wife" is more likely than "An international spy killed OJ's wife, then used supertechnology to fake the DNA" is because it's the conjunction of p(international spy) * p(has supertechnology) * p(wanted to do this), and we can expand each of those into more complex propositions in turn.
I'm probably not doing a great job explaining this - https://www.readthesequences.com/A-Technical-Explanation-Of-Technical-Explanation is slightly better although it's not really focusing on this question.
There are multiple justifications of Occam's razor, some more theoretical, some more practical.
The tautologous one is that if you have a theory with N premises whose individual plausibility is unknown, their conjunction is going to be more plausible than that of a theory with N+1 premises. That's not just methodological.
It's also very impractical, because you usually do have some information about how likely your premises are. But the problem with the entirely methodological approach is that you haven't solved the basic question at all… you are considering the simpler hypotheses first because you must, not because they are more plausible.
I wish you'd been able to work in a discussion of the Ontological Proof (that the definition of God necessitates his existence) somewhere, since it seems to have a lot of similarities to the Tegmark theory (or at least my vague understanding of it from your post) that mathematical truths must necessarily have existence. You have to carefully steelman the Ontological Proof and appreciate its nuances to get any enjoyment out of it, otherwise it just sounds silly; there are some non-obvious arguments for it that evade the obvious arguments against it. [I write not as someone who believes that "Proof" to be valid, but as one who was favorably impressed by reading nuanced versions of it deployed by smart people who knew what they were doing and weren't being ridiculous.]
If Tegmark's theory includes universes where for example c (speed of light) takes every value in some real interval like [2*10^5, 5*10^5] km/s then that means that the set of existing universes has a cardinality of at least continuum. In this case you can't order the universes in a sequence with first, second, third element, etc.. (this is proven by Cantor's famous diagonalisation argument). However, I don't think this hurts your case at all as you can still have a non-uniform measure, and this is just a small technicality as far as I can see.
> Boltzmann brains are a problem for even a single universe
They *can* be. My response is that clearly our understanding of the universe is wrong in some subtle way and there are no Boltzmann brains. Maybe something about the expansion of the universe causes their probability to keep decreasing, so there's only a finite (and very small) probability of being a Boltzmann brain. If they do exist, any basis for understanding our current universe is wrong, so either they don't exist and we have a model that's very accurate except for the far future, or they do exist and we have no idea what universe we're in.
Okay, here's the deal with the Boltzmann brain stuff.
Firstly, typo note: the estimate from wikipedia is 10^(10^50) years, not 10^500. The first number is vastly larger than the second.
Boltzmann brains are not only a problem even for just one infinitely long-lasting universe, but even for just one universe that lasts a finite amount of time (before repeating itself). So long as the amount of time it lasts for is exponentially large, then we might run into problems. But we could assume that for one reason or another, our universe is one where Boltzmann brains are not possible, maybe due to undiscovered physics.
Okay, what about a Tegmark multiverse of finite universes? Say that to try and get around some of the paradoxes of having an infinite number of observers, we make a rule that universes in Tegmark's mathematical multiverse can only do a finite amount of computation. If we represent them as Turing machines, each machine must halt. Due to the absurdly fast-growing busy-beaver function, the maximum number of observers in a universe of a given complexity grows way faster than the complexity penalty. So we can't just sample from all observers in the multiverse. It's not that most observers are Boltzmann brains, it's that such a sampling process is mathematically undefined.
But, if we first sample a universe at random, then sample an observer from that universe, we can see that Nevin's objection fails to correctly count information. Under any reasonable encoding scheme, all the laws of physics of our universe along with all the "fine tuned" physical constants easily fit in a megabyte. (Each constant is only a few tens of bits.) This is a complexity penalty of 2^(1000000). But this means we only need to make a megabyte of orderly observations before we've got enough evidence to prove that we're not Boltzmann brains.
You might think that you could treat universal Turing machines simulating each other as a kind of Markov process and find a stationary distribution, which would give some notion of a "natural" universal Turing machine, but Mueller showed there isn't one.
https://arxiv.org/pdf/cs/0608095
Your argument about falsifiability seems over-simplified. The issue with arguments for the existence of god is not that it isn't falsifiable, it's that there isn't any evidence either way. The OJ Simpson example that you bring up just drives this point home. Occam's razor is useful there because there was tons of evidence and we need to find a way to differentiate between possible theories.
What exactly is the evidence for or against the existence of god: that we exist and that the universe is complicated? Seems like pretty weak sauce to me. If you count that as evidence then you can claim that just about any ridiculous claim has an empirical basis. For example, I could argue that when I was a baby, I was briefly teleported to an alien spaceship. How do I know? Because I have hair on my legs and this is obviously evidence that aliens planted it there.
On the idea of Boltzmann brains, if we accept that it's possible for a high-entropy universe to spontaneously create a large, complicated conscious entity, it seems reasonable to also accept that such a universe could spontaneously create stable self-replicating structures that begin to eat the surrounding entropy and create an expanding region of simplicity. In this case, most universes would eventually be dominated by relatively low-entropy environments like the one we find ourselves in. The physical constants we observe may have been the result of some kind of natural selection process on sub-quark-level replicators whose behavior gives rise to the physics we see.
> this is probably a vast underestimate - it’s about the number of humans who have ever lived, so it’s ignoring aliens and future generations
and all the sentient non-human animals
"I find the moral knowledge argument ridiculous, because it posits that morality must have some objective existence beyond the evolutionary history of why humans believe in it, then acts flabbergasted that the version that evolved in humans so closely matches the objectively-existing one." Strange didn't know you were a moral anti-realist, but like there are a lot of smart and reasonable ppl who are realists, and taking that view for granted the moral knowledge argument seems very compelling as opposed to "ridiculous", it's just the modus tollens of evolutionary debunking arguments which many anti-realists such as error theorists endorse. But also you said you owe an explanation etc. but it's strange that given how much you discuss ethics etc. you don't seem to have expressed much on your meta ethical foundations. But also I have never understood why anti-realists aren't all just error theorists, like non-cognitivists, subjectivists etc. all just seem to be using language weirdly, sure lay people have confused meta ethical beliefs but like for a more sophisticated anti-realist to avoid orienting their anti-realism towards substantive realism seems weird.
I'm interested also then if you would be an epistemic error theorist or something? that you are clearly logically implying that you find moral realism to be "ridiculous" not the moral knowledge argument/EDA per se. I think the most coherent final Boss of internet anti-realism move would be to just throw out intuitions about normative facts all together, or insist you never had any, but like Rationalist to epistemic error theorist pipeline is pretty cool.
Yeah came here to say something related.
Moral realism is hardly *that* obscure of a “only stupid theists believe this one” sort of moral and philosophical belief to hold.
Yes the moral knowledge argument proceeds on the assumption that it will be convincing to, and only to, people who believe in an objective moral reality.
But there are plenty of pretty robust arguments for objective morality out there; it’s not just endless question-begging.
If you’re not a moral realist when you encounter the argument from moral realism/moral knowledge then sure it’s not going to be convincing, but it doesn’t follow that moral realism is just a baseless view to hold. It’s just a case of differing priors.
I continue to think that too little attention is being given to the super shady idea of a measure on all these necessary mathematical objects. It's a huge bait and switch. "Oh, look, we know all these mathematical objects necessarily exist!"
[later]
"Oh, and there's an extremely non-necessary, arbitrary measure on them -- ignore that that doesn't make any mathematical sense -- that's rigged to give us the right universe since the other idea definitely wouldn't, and that we, as mere portions of *one* of the objects, somehow have access to."
It's highly disreputable just for that reason, in my opinion. Say nothing about its other problems (such as conflating abstract and concrete).
Yeah. It's exactly the same problem that M theory (commonly called string theory) has. It doesn't uniquely identify any particular theory until you constrain it by a cherry picked set of criteria post hoc to match our observations. And that's just bad pool scientifically.
Yeah, to be honest I highly doubt the theory is true, but that's just my intuition, and it's entertaining to see how people argue for and against it.
"If some weird four-dimensional Mandelbrot set somehow encoded a working brain in it somewhere..."
https://xkcd.com/10/
> Imagine trying to pick a random number between one and infinity. If you pick any particular number - let’s say 408,170,037,993,105,667,148,717 - then it will be shockingly low - approximately 100% of all possible numbers are higher than it. It would be much crazier than someone trying to pick a number from one to one billion and choosing “one”. Since this will happen no matter what number you pick, the concept itself must be ill-defined. Reddit commenter elliotglazer has an even cuter version of this paradox:
> » “The contradiction can be made more apparent with the "two draws" paradox. Suppose one could draw a positive integer uniformly at random, and did so twice. What's the probability the second is greater? No matter what the first draw is, you will then have 100% confidence the second is greater, so by conservation of expected evidence, you should already believe with 100% confidence the second is greater. Of course, I could tell you the second draw first to argue that with 100% probability, the first is greater, contradiction.”
--
I think this could be extended to conjecture that it is impossible to *observe* *anything* infinite without minimally collapsing something about that infinite thing. Which would be a useful way to wall off a universe from the rest of a cosmos
I think you miss the point of the Plato recapitulation and Popperian falsifiability arguments.
Tegmark's multiverse is mostly silent on what constitutes a suitable mathematical object capable of existing. Is specifying it in English enough? ZFC? Do the objects need to be computable? Does ultrafinitism have a say? Does the set containing all sets exist? What about other logical impossibilities?
The most detailed specs we have in math these days reduce to formal logic, but even that relies on some unspecifiable meta-logic. I.e. no formal proof verification system can verify the hardware it's running on.
Even worse, if we assume materialism, then all the math we know is implemented by physical processes in this universe. Do we also accept the existence of objects from completely alien maths impossible in this universe due to physical constraints?
A maximally permissive answer to my questions above forces us to accept that any "system" exists, without the possibility for determining what is or isn't a system and not even being picky about impossible things. It's not just a fuzzy border issue. Every object is illogical relative to some logic, especially the simplest one! I.e. the logic that permits nothing is arguably the simplest, at least relative to standard set theories like ZF.
Hand waving Kolmogorov or whatever measures onto these things just begs the question by fine tuning the definitions to get whatever result desired. Or said another way, it's a model with tons of tunable parameters and answers only things that are encodable in that free parameter set. I.e. we just created a lookup table.
This is the basic mechanical problem with Platonist metaphysics, IMO. It is simply incoherent, despite first appearances. Popperian falsifiability does point at a utilitarian resolution however. What does Tegmark give us when we play taboo on the concept of real? I.e. can we operationalize what Tegmark's existence claims even mean? Are we dealing with replicable realist things? Non-replicable but consensus reality-like things? Or is it more like how dreams and false memories operate? Etc.
That said, as an intuition and discussion pump, the Tegmark idea is fun, so hedonistically, I'm all for it.
There is only one universe. That's what the first three letters of "universe" mean.
The universe includes all of space and all of time. It is meaningless to consider whether something "came before", or "will come after", or "exists outside of" the universe, and hence also whether something "caused the universe to exist" or to have certain properties.
I don't think it's even useful to say that the universe has properties. Properties are useful to distinguish one item of a class from another, but there can never be any sense in "distinguishing one universe from another" because there is, has been, will be, and could be only one universe.