The Past Hypothesis: Why did the universe start in a low-entropy state?
I split off this discussion from The role of observers in MWI to avoid further derailing. For those new to the topic:
Quoting Past hypothesis - Wikipedia
In cosmology, the past hypothesis is a fundamental law of physics that postulates that the universe started in a low-entropy state,[1] in accordance with the second law of thermodynamics. The second law states that any closed system follows the arrow of time, meaning its entropy never decreases. Applying this idea to the entire universe, the hypothesis argues that the universe must have started from a special event with less entropy than is currently observed, in order to preserve the arrow of time globally.
Here are the comments that started this discussion:
Quoting Count Timothy von Icarus
I have never seen a satisfactory explanation of why we should find ourselves in the world that has increasing entropy
Quoting SophistiCat
As opposed to what? A world at thermodynamic equilibrium?
And what do you mean by explanation here? We are bound to find ourselves in one world or another. How could you explain the fact that the world that we find ourselves in is this one? Explain in terms of what?
Quoting Count Timothy von Icarus
Yes. Since there are more ways to be high entropy than low entropy we should have more worlds with high entropy than low. So why are we in a low entropy world if it is very statistically unlikely?
Some version of the past hypothesis, right? But then seeing a world where the past hypothesis is true is vanishingly unlikely, even if it occurs with probability 1, according to MWI derivations of the Born Rule.
Weyl curvature arguments are ok here, but both MWI and Cosmic Inflation tend to go for the Anthropic Principle to explain this.
"All possible worlds exist. Obviously we exist and our world is possible. And we can only exist in some narrow band of worlds in terms of initial conditions."
The obvious problem here is that this makes explanations from physics trivial. "Anything observed is physically possible and anything possible occurs so of course you see x even if x is a 1 in quadrillion event," is just "if you see it, it is possible, so it is."
All explanations about how stars, planets, life, etc. evolved in our particular history, the how and why of science, get fobbed off onto "initial conditions, all of which are true."
Obviously, some cosmologists find this answer very deep, hence the popularity. However, it essentially reduces to "if it's possible it happens and if you see it, it is possible." This isn't a real answer. The answer we want in non-multiverse theories isn't actually addressed, which reformulated for an "all possibilities exist multiverse," is the question "why did our particular history occur such that we see x."
Also, apparently you should be bothered by extremely unlikely events in your experiments if you believe in MWI, since you should care about the Born Rule... except when it comes to identifying that you are in an incredibly low probability universe. Then, when doing cosmology, it is ok to jettison probabilities when making explanations and resort to "everything has to happen with p = 1."
If you follow the epistemological logic used for cosmology at the individual level, you shouldn't be surprised when jumping in front of a train doesn't kill you or standing in front of a firing machine gun leaves you unscathed (à la Tegmark's quantum suicide setup). If you were dead, you wouldn't see anything. Even if being alive is incredibly unlikely, that's all you're going to see, and so the anthropic principle, applied at the individual level, says there is nothing at all notable about throwing yourself into a volcano and surviving, etc.
It seems to me that either low probability events should always be surprising and make us ask questions or they never should, not a too cute mix of both. Just bite the bullet and say the Born Rule is meaningless, a total illusion, in that case.
Quoting SophistiCat
I am surprised that you went for this explanation, given what you said above about frequentist explanations. This is a textbook case where statistics does not apply because it simply does not exist.
Your reasoning applies to an ergodic system that has been evolving for a long time, or an equivalent ensemble. But the early universe is nothing like that. If there is no explanation for the past hypothesis (we don't have a good theory of the universe's origin), then it makes no sense to talk about how likely or unlikely it is, because the universe was and still is far from ergodic, it hadn't been evolving for a long time (ex hypothesi), and we don't have an ensemble (unless some kind of a multiverse theory is true, but that is still very speculative, so we can't take it as given).
Quoting Count Timothy von Icarus
Frequentism seems fine in some contexts, or at the very least, it is at times much easier to explain things in that frame when it doesn't make a material difference. A closer look would indeed show problems with my example. If you accept determinism at all levels of reality, then of course there is only one way a system can evolve, the very way it does evolve, and entropy has to be framed as somehow subjective, or at least relational. The problem with frequentism IMO is that it is generally the only way of understanding probability theory that is taught, and is incoherent when applied to some situations.
Quoting Count Timothy von Icarus
You are correct about the nature of the Past Hypothesis; that's a fine answer, but it isn't the argument I get frustrated with. By definition, there are more ways to be in a high entropy state than a low entropy state. Perhaps there is indeed a mechanism at work in the early universe that makes a later low entropy state counterintuitively more likely than a high entropy one. But, barring support for that fact, we are left with the principle of indifference, and this suggests that we weight all options equally, combinatorially if there are a finite number of states. That is, giving equal likelihood to all potential universes of X mass energy existing in an early state with all possible levels of entropy, the high entropy universes outnumber the low entropy ones, barring some other sort of explanation. Appeals to the Anthropic Principle don't address this. The same issue comes up with the Fine Tuning Problem; if we don't know the likelihood of values for constants, indifference should prevail.
If this is the case, then it remains that high entropy states outnumber low entropy ones, and remain more likely.
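The counting argument above can be made concrete with a toy model (my own illustration, not anything from the thread): treat a system as N two-state particles and compare how many microstates fall under a "low-entropy" macrostate versus a "high-entropy" one.

```python
from math import comb

N = 100           # toy "universe" of 100 two-state particles
total = 2 ** N    # all microstates

# "Low-entropy" macrostate: almost everything in the ground state (0-5 excited).
low = sum(comb(N, k) for k in range(6))
# "High-entropy" macrostate: excitation roughly evenly split (45-55 excited).
high = sum(comb(N, k) for k in range(45, 56))

print(f"low-entropy fraction:  {low / total:.1e}")   # ~6e-23
print(f"high-entropy fraction: {high / total:.2f}")  # ~0.73
```

Even at N = 100 the near-equilibrium macrostate swamps the ordered one by over twenty orders of magnitude; for realistic particle counts the disparity is vastly larger, which is exactly why indifference over microstates makes low entropy look so special.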
Positing some not-yet-understood mechanism by which this is not the case is fine; after all, we have empirical evidence that the entropy of the early universe was low (counterintuitively, despite being near equilibrium; wrapping your head around negative heat is a doozy). The problem I was addressing is using the Anthropic Principle to address this rather than any appeals to the probability of any observation of the state of the early universe. This is where the problem of triviality comes up.
I think this is a similar problem to that of claims that "everything is explainable in terms of fundamental physics," and then appealing to the black box, brute facts of initial conditions as the origin of many of the most interesting things we'd like the natural sciences to explain. I think the proper response here is: "yes, but we want to know how the particular initial conditions in our past led to X and Y, etc. historically, and if this cannot be explained without appeals to brute facts for a vast array of all natural phenomena, then nature does not reduce to fundamental physics in terms of explanations."
I am not sure that the principle of indifference can lend us any insight here. When tossing a coin, we assume that the coin has equal chances of landing heads or tails, because we are very familiar with coins and the behavior of objects of that sort. When we cannot assume that for some reason, then all that the principle of indifference does is describe the state of our ignorance: since I know nothing except that there are two possibilities, I have no greater expectation of heads than of tails. But note that, although in the latter case we have derived the same 50% probability, the meaning of this probability is not the same. We did not learn that the coin is fair through the application of the principle of indifference! All we did is learn how to bet in order to avoid Dutch Book-style exploits.
How does this help us understand the origin of the universe? It's a one-off event against the background of complete ignorance, and the outcome of this event is already known. What more can we learn from the principle of indifference? What does "the probability of this event is such and such" even mean in this case?
Quoting Count Timothy von Icarus
(My emphasis.) We haven't really talked about the Past Hypothesis itself, so let's do that now. If you take the currently observable universe and compare its entropy with the entropy of the same region of space as it was shortly after the Big Bang, you will find that the entropy is indeed much larger now (at least if we restrict ourselves to very short time horizons in each case, in which the universe can be considered quasi-static as per classical thermodynamics, because otherwise the meaning of entropy and how it should be calculated becomes unclear). And yet the universe after the Big Bang is typically described as a very uniform "particle soup." It wasn't at equilibrium, because it was quickly expanding, but if, counterfactually, there was no expansion, then the universe would have already been at its maximum entropy (and on very short time scales, during which expansion could be neglected, it was). Expansion added bucketloads of additional degrees of freedom, thus lifting the ceiling on the maximum entropy - and this is how we find ourselves, 14 billion years later, at a much higher entropy and still far from equilibrium. All thanks to the dramatic expansion of space during the time between then and now.
So, was the (relatively) low-entropy state of the early universe very special and unlikely (whatever that might mean)? Frankly, I don't see how.
Likely or unlikely, it was in a particular state, which we have no explanation for.
Quoting Past hypothesis - Wikipedia
Quoting SophistiCat
You seem to be disagreeing with the past hypothesis, in that it wasn't low entropy, but maximum entropy?
Anyhow, I'm not sure what the question is relating to this topic; I couldn't figure it out from what was quoted. Maybe I should read the other topic.
EDIT: I guess the question is not whether it was likely or not, but whether it was low entropy or not (low entropy is unlikely by definition), and that would in turn depend on whether space/the universe itself was expanding (progressively giving more degrees of freedom for matter to be in), or whether it was a low-entropy configuration of matter expanding into an already existing larger universe/space. This is probably very basic stuff, but I'm no physicist, so excuse my ignorance.
EDIT2: And if it was the first case (the universe itself expanding), then the past hypothesis isn't "matter was in a low entropy configuration," but "the universe was small." Is a small universe likely or unlikely, without another frame of reference? Who knows... so I guess I would agree with you. Probabilities only make sense if you have relevant information. And since we don't, it doesn't. What is the likelihood of drawing the ace of spades out of an undefined number of cards, with undefined types of cards in the deck?
In what sense could a completely undifferentiated cosmos even be said to be ordered though?
I'll have to return to this thread for a more detailed response later, but the early universe has this very confusing property of being in thermodynamic and chemical equilibrium but nonetheless being "low entropy."
This is confusing since most textbooks and classes will lead you to associate equilibrium with high entropy. The simplest explanation, which leaves out a lot of nuance, is that there is also gravitational entropy to be considered, and this being low initially offsets the apparent equilibrium seen in the cosmic microwave background.
But there is a lot more going on. Particles are changing identities incredibly frequently at these energies, the fundamental forces aren't acting like they do normally, and the density of particles is changing as the universe expands and temperature shifts. It's a very dynamic model. To make things more confusing, there are arguments that the laws of physics aren't eternal and unchanging, but actually behaved differently in this era.
In any event, some argue that key ways of understanding this period are fundamentally flawed, for example: https://www.sciencedirect.com/science/article/abs/pii/S135521980600027X
While others have punted and called the low-entropy initial conditions a "natural law," of which there can be no further investigation (provided you agree that natural laws don't demand investigation, that is).
I am not disagreeing with the low(er) entropy part. The space that is currently occupied by the observable universe was at a much lower entropy 14 billion years ago (it had better be!). Was it at maximum entropy? That's a trick question. I would say that, in a limited sense, it was.
Quoting ChatteringMonkey
Ah, see, I actually don't agree that "low entropy is unlikely by definition." That is true of closed systems that have been evolving for some time. As per the 2nd Law of Thermodynamics, the entropy of such systems should be increasing over time. But we are talking about the initial state, which does not have a history. In response to this @Count Timothy von Icarus invoked the principle of indifference. I object that we cannot get a free lunch from the principle of indifference: it cannot teach us anything about the physical world. And conclude that statements about how probable/special/surprising the early universe was are not meaningful absent a theory of the universe's origin that would inform our expectations.
Added to that (as noted above), the fact that, in absolute terms, the entropy of the early universe was much lower than it is now doesn't tell us all we need to know: there are other factors to consider. And the elephant in the room is that, although we typically assume that the universe is thermodynamically closed, the undisputed fact is that it was never stable: it has been constantly expanding. And that changes everything.
Quoting ChatteringMonkey
:up:
Here is a rough analogy from school thermodynamics. Consider an insulated vessel filled with gas at a thermal equilibrium. This system will remain stable forever and nothing interesting will happen there, as you rightly note (barring Boltzmann brains and such). Now suppose that the walls of the vessel expand outward. The vessel is no longer at equilibrium, even though it started out in an apparent equilibrium state (that is, if we don't take expansion into account).
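The vessel analogy can be given a number with the standard ideal-gas result (a textbook estimate, not a cosmological calculation): irreversible free expansion from volume V1 to V2 at fixed temperature raises the entropy by N·k·ln(V2/V1).

```python
from math import log

k_B = 1.380649e-23  # Boltzmann constant, J/K
N = 6.022e23        # one mole of gas particles

def entropy_gain_on_free_expansion(V1, V2):
    """Entropy increase of an ideal gas after irreversible (free)
    expansion from volume V1 to V2 at fixed temperature: N*k_B*ln(V2/V1)."""
    return N * k_B * log(V2 / V1)

# Doubling the accessible volume lifts the entropy ceiling by N*k_B*ln(2):
dS = entropy_gain_on_free_expansion(1.0, 2.0)
print(f"entropy gain for one mole: {dS:.2f} J/K")  # ~5.76 J/K
```

A quasi-static (reversible) expansion would not generate this entropy, which is the sense in which "quickly expanding" matters in the discussion above.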
Yes, this I don't understand then, I suppose, because isn't equilibrium necessarily maximum entropy? If entropy always increases, it can only be at equilibrium if max entropy has been reached, no?
EDIT: Unless max entropy and low entropy are somehow considered the same in this particular instance, i.e. it is still considered low even though it is maximum entropy (for instance because compared to entropy in the rest of the history of the universe it is low). But that just sounds like confusing use of terminology to me.
Quoting Count Timothy von Icarus
This sounds like it could be an answer to my question, but I don't know enough about this to judge it to be honest.
The space, or the matter in that space, is at lower entropy? That is what is confusing to me. How can space itself be measured entropically? Isn't that just the condition that sets the degrees of freedom for matter in that space to be in, determining the range of entropy? Maybe I'm missing something fundamental here, which could very well be.
Quoting SophistiCat
Yes, "unlikely by definition" maybe isn't true for initial conditions; I can see the reasoning there. It still is an observation (and a condition for our universe to be like it is) that the universe was in a low-entropy state, and it could have been otherwise, I suppose... Not that that is saying much. We just don't know anything beyond it, and so "it is what it is" is the best we can do?
Quoting SophistiCat
Yes I definitely agree with that. Anything like the principle of indifference, or anthropic principles, or things along these lines, is making assumptions we have no right to make. And without these kind of assumptions, we don't have enough information to sensibly talk about probabilities.
Right. So did the expansion take place in order to facilitate the evolutionary process? Or was the initial state itself actually metastable? Perhaps the idea of a closed system is inapplicable?
Here is a thought experiment which I think might illustrate why I think the principle of indifference applies here.
Imagine an alien species in another dimension whose development of mathematics and logic is as sophisticated as ours. On their planet is a small magical portal where a "page" of English text appears for a few minutes until it is replaced by a new page. You can think of this like an e-reader screen.
The portal is fairly remarkable in that they can't find a way to interact with it aside from reading the text, but also fairly easy to ignore as it doesn't do anything else. However, being a unique phenomenon, there is an entire field of study centered around the portal.
The text being displayed is the collection of all the articles on Wikipedia, and every newspaper, academic journal, and magazine published going back to 1900. It also includes an extremely large collection of non-fiction texts dating from 1900. The portal only displays text and only in one simple font, always of the same size.
What would the scholars studying the portal conclude? They might observe that every shape appears to occupy a fixed square of space on the surface of the portal. While the symbols don't exhibit symmetry, they are all composed of smaller squares (pixels). Some symbols have rotational or chiral symmetry, some don't.
There is only so much to say about the symbols themselves. There are around 100 of them and some resemble each other closely.
How they are arranged is a far different matter. The distribution is highly lopsided. The "Z" symbol appears 176 times less often than the "E" symbol. More unique blocks of text (words) contain the "Z" than the "Q," but the "Q" is more common overall. Some blocks of symbols (words) appear incredibly frequently, others, indeed a majority of all possible combinations, never appear at all, despite not being combinatorially complex (e.g., "El&f$" might never appear despite being just five characters, three of them common). Groupings of blocks also exhibit regularities.
Notably, blocks of symbols follow a power law distribution such that the 100th most common block appears twice as often as the 200th most common, and four times as often as the 400th most common (a neat feature of all human languages).
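The pattern described is Zipf's rank-frequency law, f(rank) ∝ 1/rank. A quick simulation with a made-up vocabulary (illustrative only, not data from any real corpus) shows the factor-of-two relation emerging from sampling:

```python
import random
from collections import Counter

random.seed(0)
vocab = [f"block{r}" for r in range(1, 1001)]   # 1000 distinct "blocks"
weights = [1 / r for r in range(1, 1001)]       # Zipf: f(rank) ~ 1/rank

corpus = random.choices(vocab, weights=weights, k=1_000_000)
counts = sorted(Counter(corpus).values(), reverse=True)

# Under f(r) ~ 1/r, the 100th most common block should appear about
# twice as often as the 200th, and about 4x as often as the 400th.
print(f"rank100/rank200: {counts[99] / counts[199]:.2f}")  # ~2
print(f"rank100/rank400: {counts[99] / counts[399]:.2f}")  # ~4
```

The aliens could fit exactly this kind of model to the portal's output without ever knowing it is language, which is part of the point of the thought experiment.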
There is significant local diversity in the frequencies of certain symbols. For instance, Q is rare, but when they get some Wikipedia entries on queens and the one on Q Anon in the same week, it leads to dozens of papers being published. A revolution in Portal Studies!
Given their level of technology, our aliens can discover all sorts of mathematical patterns in what comes out of the portal. They can create a ChatGPT-type bot that reproduces strings very similar to what the portal generates. They might get very good at accurately predicting what the portal produces.
They are likely to note that the portal only produces a very tiny fraction of all the possible combinations it could produce. It appears limited by multiple layers of constraint: in terms of the symbol repertoire at a lower level, and in terms of apparent higher-level (grammatical and syntactical) rules. The text is also constrained by the realities of our world (non-fiction), but they can't realize that, nor how our evolution shaped the patterns.
Would it be fair here for the aliens to assume that the arrangement they observe is extremely unlikely barring some sort of underlying logic to the portal's outputs? The portal could produce a far larger arrangement of pages if the constraint on characters of a certain font was removed. Pictures would be possible.
Even if all the symbols appeared with the same exact frequency, but didn't have to be arranged into words, there would be far more possibilities in what the aliens saw. The constraints beg explanation.
There are all sorts of interesting things to consider here. Given the aliens can never know the origin of the patterns, that they are separated from that knowledge by an epistemic (and maybe ontological) barrier, would it be fair for them to assume said patterns are just brute facts? Would it be justifiable to posit that the observed phenomena was some sort of language? Would Humean aliens be justified in saying some grammatical laws "cause" certain clauses in sentences due to constant conjunction?
The portal wouldn't interact with the universe, except in limited ways. I would assume some aliens would make arguments about how it cannot have a cause, and how it simply is. I can even imagine being convinced of this myself. However, I think the aliens arguing that there has to be something driving the striking patterns would have a better argument. Combinatorially, the messages they see are incredibly unlikely, particularly if their universe has a clear beginning akin to ours. In this case the portal's outputs would presumably be finite (at least to this point), and so arguments about the apparent order being the result of infinite iterations seem to falter.
----
I used that thought experiment because I think it's a neat idea. More to the point though, there are tons of areas where we essentially have no clue what sort of frequency we should expect for variables. The early universe is in no way epistemically unique here. Keynes was thinking of just this sort of scenario when he developed the principle, and Jaynes was thinking of similar cases when he expanded on it with the Principle of Maximum Entropy.
Maybe someone will pull a Quine on these ideas, but these seem grounded in mathematical logic, not the particularities of any particular observation. So, I would apply the concept here.
Which is somewhat beside the point, since the position I don't like readily admits that the universe we see before us is unlikely (even if they shouldn't). The Anthropic Principle is invoked to say "yes, what you see is unlikely, but if everything possible happens, then unlikely things always happen and you wouldn't see them if they weren't the case, and so this explains unlikeliness."
My problem with this is that it consigns areas begging for inquiry to the bucket of things we just accept for the trivial reason that it is clearly possible for what we observe to exist. Our aliens might never figure out they are seeing a language, let alone what the messages they observe mean, but I certainly think they can find something out about these "brute facts," which presupposes that they be analyzable using probabilities.
Quoting Count Timothy von Icarus
And... I am afraid I am still not getting your point, even after reading the alien story.
A puzzling phenomenon found in the world is not a good parallel to what we have been discussing. Such a thing would invite the same sort of analysis as we apply to everything else: that is to say, we would try to put it in the context of our accumulated knowledge about the world and try to reduce it to some aspects of that knowledge. (I am loath to get into an argument over an analogy, but briefly, one problem with it is that it biases the story by stipulating that the phenomenon is literally "out of this world" and causally inexplicable. But your hypothetical aliens would have no reason to assume that.)
By contrast, the initial conditions of the Big Bang universe are necessarily inexplicable within the context of the Big Bang theory. Note that I am not saying that they are necessarily inexplicable tout court. It's just that if you have a causal theory, then the very structure of the theory dictates that it unspools backwards from the present observations into the past, and its initial conditions (or conditions at infinity, as the case may be) are where the theory runs up against its limits. Here be dragons. Here be the explanatory terminus. Here be brute facts.
A theory that goes beyond the Big Bang (and we have several candidates, starting with Inflation, which by now is almost as established as the Big Bang) would explain the Big Bang conditions, but it would in turn run up against its own limits.
You could also posit an explanatory terminus somewhere besides the structure and its boundary conditions. Indeed, you could posit the existence of observers as an explanation of some features of the structure and the boundary conditions, rather than the other way around, as in a traditional causal theory. This approach has an idealist ring to it: our own existence is the one thing we are most certain about, so why not put that at the foundation? I do not endorse this approach, but I think it has a right to exist.
Quoting Count Timothy von Icarus
I don't see how. Under the "null hypothesis" that the symbols are drawn randomly with a uniform probability from a pool that includes all and only the symbols that they have observed - yes. But what would be the reason to posit this null hypothesis? (Don't say "principle of indifference" - that's an epistemic technique, not a scientific or a metaphysical principle.) And yes, I am aware that in experimental science significance testing often resorts to positing a uniform distribution as the null hypothesis, but this practice is justly criticized.
I mean, I get the underlying intuition: you observe patterns, as opposed to chaos - you hypothesize a causal mechanism that would neatly explain your observations and integrate that explanation at a minimal cost into your noetic structure. That's how science generally works. OK, so how does this relate to the Past Hypothesis?
Quoting Count Timothy von Icarus
That's not really a question, is it? The answer is given in the premise.
Quoting Count Timothy von Icarus
You could hypothesize this, sure. A computational linguist could probably say more.
Quoting Count Timothy von Icarus
OK, can you say more - specifically, in relation to the Past Hypothesis? What is the meaning of probability in this context? What conclusions could we draw from it?
The initial state was unstable due to the structure of spacetime - that's how the theory goes. The universe was set to expand from the get-go. It is (assumed to be) causally closed, but the interesting thing about relativistic spacetime is that as it expands, its energy content increases. This is a point of disanalogy with the expanding vessel.
:up:
The relationship with the Past Hypothesis is that it is exceedingly combinatorically unlikely to have a low entropy universe. Borrowing Penrose's math, to observe a universe with our level of entropy is to observe a system that is occupying 1/10^10^123 of the entire volume in phase space (possible arrangements of the universe). It's like standing in a room full of coherent texts in the Library of Babel.
https://accelconf.web.cern.ch/e06/papers/thespa01.pdf
Or also relevant for the summary: https://arxiv.org/pdf/hep-th/0701146.pdf
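For a sense of scale, Penrose's 1/10^10^123 can't even be written out in full, let alone stored in a float. Python's arbitrary-precision integers at least let us compare its logarithm to a familiar big number (the ~10^80 atom count is the usual order-of-magnitude estimate):

```python
# log10 of the denominator of Penrose's fraction 1/10**(10**123):
digits_needed = 10 ** 123                # decimal digits to write 10**(10**123)
atoms_in_observable_universe = 10 ** 80  # standard order-of-magnitude estimate

# Even writing one digit per atom, we'd need ~10^43 universes' worth of atoms:
print(digits_needed // atoms_in_observable_universe)
```

This is why such comparisons are always done on logarithms of phase-space volumes, never on the volumes themselves.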
As I said in the analogy, such a problem may very well be impossible to solve, but given we do have some leads on solving it in a more satisfactory way, the "brute fact" or "anthropic principle" punts seem unwarranted. Saying "we cannot ever know whence X, case closed," is different from saying "potentially, we may not be able to ever know."
I am not sure what criticisms of the Principle of Indifference you are referring to. The ones I have seen are arguments about model building and the need to implement Bayesian methods when there is not a case of total stochastic ignorance, which is not the case vis-à-vis the Past Hypothesis.
Some of these, and some more philosophical arguments, are against a "general principle of indifference," e.g. where one *always* starts with the principle when creating a model, if not in practice (since this is often not pragmatic) then at least as an epistemic principle. But this doesn't apply to our situation either.
I know of arguments for the wide interval view, i.e., that in cases of total stochastic ignorance, when we have no idea what the probabilities we expect to see might be, we should instead use the set of probability functions that assign every possible outcome p = 0 to 1, which is essentially forming a set combinatorically. Your credal state after seeing observations is then represented with a representer of functions collectively assigning every interval value to a "hypothesis of observing x"; however, I don't know how much that moves the needle here.
There are of course problems with indifference for some distributions, where contradictions pop up, as when some outcomes are mutually exclusive or when the number of outcomes is infinite, but the first issue doesn't seem relevant and the second is dealt with by the fact that there appear to be finitely many distinguishable states of the universe.
If you encounter a phenomenon of which you have no previous knowledge, what are you supposed to do? Say probability has no say in the matter? Make it a brute fact?
I suppose. Certainly, it's been proposed to make the Past Hypothesis a "law," which cannot be subject to probability, to get around this issue (in a number of articles). I personally find that unsatisfying.
Of course, there may be other problems...
What a claim! I haven't read it to see if it stacks up.
I read the Penrose paper up to that oft-quoted 10^10^123 number and the discussion of gravitational degrees of freedom; I don't know enough to understand the rest. (The Banks paper is too advanced for me to follow.) I don't get Penrose's point. He is comparing the entropy of a region in the early universe with the entropy of a black hole with the same number of baryons. Why? How is this comparison relevant? The universe in the first few minutes after the Big Bang was a hot, dense plasma, so energetic as to preclude gravitational clustering. Lumping a gigantic black hole into the same macrostate makes no sense: that state was not accessible to the early universe (indeed, to the best of our knowledge, it is not a possible state of the universe at any time after the Big Bang). The same goes for the so-called gravitational degrees of freedom, which he infers prospectively from the later emergence of stars and galaxies. To treat those clumpy states, which only become available after the universe has cooled and expanded, as unused gravitational degrees of freedom in the early universe, one must treat the entire block universe as one timeless macrostate, which also makes no sense.
More to the point - and this is the question to which I keep returning - what is the meaning of whatever magic number that you calculate for the initial state of the universe? What does it mean to say that it was special or improbable? For it to be special there has to be some generic way for it to be. Whence the idea of the generic initial state? For it to be improbable there has to be a stochastic model of the initial state, which for the present purposes we do not consider.
By contrast, here is an example where such notions make good sense. The generic state for the air in your room is to be uniformly distributed throughout the room, as opposed to, for example, being condensed in one corner. That is easy to understand, given that in a stable environment air molecules have plenty of time to cycle through myriads of configurations. Only a tiny fraction of those configurations correspond to special states, which means that seeing those special states is generically unlikely on human timescales. But that reasoning is inapplicable to the initial state. There is nothing generic about the initial state. It is unique. It has no history and no mechanism of formation.
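The combinatorial reasoning above can be made concrete with a toy calculation (a deliberately simplified sketch, not a real statistical-mechanics model; the function name and numbers are illustrative): if each of N molecules independently has an equal chance of being in either half of the room, the probability of finding all of them in one chosen half is 2^-N.

```python
import math

def log10_prob_all_in_one_half(n_molecules: int) -> float:
    """log10 of the probability that every one of n independent molecules
    sits in the same chosen half of the room (P = 2**-n)."""
    return -n_molecules * math.log10(2)

# Even a thousand molecules make the "condensed" state fantastically rare:
print(log10_prob_all_in_one_half(1000))        # ~ -301, i.e. P ~ 10**-301

# For a macroscopic amount of gas (~6e23 molecules) the exponent is ~ -1.8e23:
print(log10_prob_all_in_one_half(6 * 10**23))
```

This is why, on human timescales, air spontaneously condensing in a corner is never seen; and the point above is that no analogous ensemble of configurations stands behind a one-off initial state.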
Quoting Count Timothy von Icarus
It's not the criticisms of the PoI as such (although there are some, e.g. Norton, "Ignorance and Indifference") but of its thoughtless or even nefarious application. In hypothesis testing we are supposed to compare the predictions of the new and improved theory to the current consensus - the null hypothesis. Picking unrealistic distributions as representing the null hypothesis amounts to strawmanning.
In the case of the Past Hypothesis the situation is even worse, because there is nothing even to strawman. We do have stochastic ignorance if we bracket out theories that go beyond the established Big Bang theory, such as Inflation, since that is the context in which the question was originally posed.
Quoting Count Timothy von Icarus
It depends - see my previous reply.
Yes, entropy reaches its maximum at equilibrium. But the universe was never actually at equilibrium, because it was and still is quickly expanding. ("Quickly" has a technical meaning here: if it was expanding slowly enough - quasi-statically - then the maximum entropy would stay the same. Expansion as such does not increase entropy, but irreversible expansion does.) Still, if you consider very short timespans at which expansion can be neglected, then you could say that the early universe was indeed at its maximum entropy.
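The reversible-vs-irreversible distinction can be illustrated with the textbook ideal-gas entropy formula, dS = n*Cv*ln(T2/T1) + n*R*ln(V2/V1) (a toy calculation for a monatomic ideal gas, not a model of the universe):

```python
import math

R = 8.314          # molar gas constant, J/(mol*K)
CV = 1.5 * R       # molar heat capacity of a monatomic ideal gas

def entropy_change(n, v1, v2, t1, t2):
    """Ideal-gas entropy change: dS = n*Cv*ln(T2/T1) + n*R*ln(V2/V1)."""
    return n * CV * math.log(t2 / t1) + n * R * math.log(v2 / v1)

n, v1, t1 = 1.0, 1.0, 300.0
v2 = 2.0  # double the volume

# Free (Joule) expansion into vacuum: no work, no heat, temperature unchanged.
dS_free = entropy_change(n, v1, v2, t1, t1)   # = R*ln(2) > 0

# Quasi-static adiabatic expansion: T * V**(R/Cv) stays constant,
# so the temperature drop exactly cancels the volume term.
t2 = t1 * (v1 / v2) ** (R / CV)
dS_rev = entropy_change(n, v1, v2, t1, t2)    # = 0 (reversible)

print(dS_free, dS_rev)
```

Doubling the volume by free expansion adds R*ln(2) ≈ 5.76 J/K per mole, while the quasi-static adiabatic path gives exactly zero: expansion as such does not create entropy; irreversibility does.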
Quoting SophistiCat
Quoting ChatteringMonkey
Sorry, yes, I meant the matter encompassed by that space.
Quoting ChatteringMonkey
Yes, but could have been otherwise can mean many things. It could have been a bowl of petunias, for all we know! There is just no sensible way to ask a question in ignorance.
You probably know this already, but I learned some interesting stuff about entropy from this video:
Did you get a chance to look at the video?
Quoting SophistiCat
Ok. It explained what entropy has to do with what happened before the Big Bang. That part was cool.
Dr O'Dowd explained that the Big Bang was an entropy minimum that may have been an aspect of an entropy fluctuation. It may be that entropy was higher both "before" and "after" the Big Bang just because of the ongoing fluctuation. It would appear to us that time was going backward prior to the Big Bang.
That's not true. Watch the video.
If nothing can be said about likelihood vis-à-vis the early universe, how do you vet any scientific theories about it? How can you say "this explanation is more likely to be the case than that one?"
Statements about the likelihood of entropy in the early universe are based on our knowledge of extremely high energy physics, extrapolations from the relevant mathematics, and our attempts to retrodict what we think happened using observations back to a certain point.
As you point out, it is now commonly accepted that a period of cosmic inflation preceded the Big Bang. Evidence for the existence of this period rests on claims about what we are likely to see in the early universe. A major piece of evidence in favor of inflation is that patterns of light from the early universe are consistent with inflation and unlikely under other existing models. Additionally, it is highly unlikely that the universe should have so little curvature sans some sort of explanation, that the temperature should be so uniform, etc. These aren't chalked up to brute facts.
By your logic, this should not be considered evidence of inflation because the likelihood or unlikelihood of seeing any pattern in light, temperature, etc. from the earliest observable moments of the universe is impossible to determine.
On this view, aiming telescopes at the sky and then running the data through statistical analyses to determine whether observations are so unlikely under theory X as to cause us to question X should break down as a technique when looking back far enough.
But then why accept inflation either? Arguments for it are also put forth in terms of confidence intervals and margins of error vis-à-vis conditions in the early universe. You even have explanations of the low entropy of the universe that are based in inflation, but this is something that isn't supposed to be explainable in terms of anything, at least in the sense that we can't say how likely it is for X to cause Y if the probability of Y is unknowable.
This seems as much of a mixup to me as when people claim "nothing can come before the Big Bang because time and cause are meaningless past that point." This gets repeated despite the fact that inflation has come to be popularly accepted as a theory of what came prior to and caused the Big Bang. I wouldn't be shocked if some step prior to inflation appears some day; it's not a particularly venerable theory. (Of course, "Big Bang" is now sometimes used to refer to inflation too, rather than the actual "Bang." However, saying nothing can come before some "beginning point," while said point can be arbitrarily redefined so as to remain the beginning, is trivial.)
Sure, but this is speculative. It implies that you can get the "Big Bang" under highly different conditions. Perhaps one only gets a Big Bang under such conditions that create global asymmetries similar to what we observe, or perhaps some hitherto unknown aspect of the Big Bang makes such asymmetry inevitable, part and parcel of said Bang.
The problem with toy universes is that we often find out that some basic underlying assumption we've used to create them turns out to be flawed, and thus what they tell us about the world doesn't square with reality (e.g. toy models of Maxwell's Demon prior to innovations in information theory).
Thermodynamics isn't the only global asymmetry either. There is wave asymmetry in electromagnetism (the jury is out on whether this reduces to the thermodynamic arrow); there is radiation asymmetry; etc.
It doesn't seem like thermodynamics can be exactly what we mean by time because if the thermodynamic arrow were to reverse, it doesn't seem like it would throw time in reverse. Some areas of the universe would hit equilibrium before others in a universe with a Big Bang and Big Crunch. If time reversed when the thermodynamic arrow reversed, we should expect that, when the very last area of the universe that is out of equilibrium and not contracting reaches equilibrium, particles should suddenly have their momentum reverse and begin backtracking. However, there is no mechanism for this to occur at a micro level.
Indeed, we can well imagine sticking an observer in a tank with a Maxwell's Demon and having them watch the isolated system they sit in reduce in entropy over time. Global entropy would reduce, but that says nothing about the observer subsystem and how it experiences time.
Not to mention there is an overarching micro-level problem. Observed wavefunction collapse only happens in one direction. This is a fundamental-level asymmetry, and probably among the most vetted empirical results in the sciences. To be sure, there are theories that identify this asymmetry as merely apparent, but it's a big phenomenon to explain away by preferring one set of theories over another when there is no empirical evidence to differentiate them.
The same way we vet all other theories? All theories have some brute facts, some givens in them: equations, constants, boundary conditions. There is nothing special about the initial conditions of the Big Bang theory in that respect.
Quoting Count Timothy von Icarus
Yes, but you are now talking about a different theory, or rather an extension of the Big Bang theory. What is taken to be the initial state in Big Bang is no longer the initial state in Inflation. Inflation leads up to the hot Big Bang, thus giving it a causal explanation, but raising questions of its own. (I think the Banks paper deals with those questions, but like I said, I cannot follow it.)
Quoting Count Timothy von Icarus
Yes, and that (the bolded parts) is just how the Big Bang theory, and every other scientific theory, gets its justification.
But you said it yourself: likely or unlikely under some model. The Big Bang theory doesn't model its own initial conditions - how could it? So to talk about the likelihood of Big Bang's initial state by converting dubious entropy into bogus probability makes little sense. You can talk about Big Bang's initial state in the context of the Inflation theory, and if you find that it predicts those conditions with a low probability, then that's a problem for Inflation.
Quoting Count Timothy von Icarus
If there is a mixup here, it is a mixup of terminology. When people say Big Bang, they can mean t = 0 in the Big Bang chronology (the theoretical singularity in the classical relativistic model on which Big Bang theory is based), or they can mean the earliest period where the Big Bang theory is applicable, which comes a little bit after t = 0 (and which would be preceded by Inflation), or they can even mean the entire period from there till now and beyond (the Big Bang universe). The worry about time ending or becoming physically meaningless as it approaches t = 0 is not unfounded, for although we know little about that earliest period, there is reason to think that physical clocks that give time its meaning beyond a mathematical formalism may no longer work there.
Quoting SophistiCat
Quoting Count Timothy von Icarus
And that's my point. When you ask why something is this way and not the other way - for example, why the Big Bang universe has a time asymmetry - the implication is that it could have been otherwise. That is obviously problematic with things like laws, constants and boundary conditions, unless we already have a reductive theory in mind. If that is not available, then all we can do is imagine an alternative world. We can't say anything more, and we can't attach probabilities to these imaginary alternatives, because that would imply that we know more than we actually do.
I don't have time now to discuss thermodynamics and the arrow of time, but I'll try to get back to this later.
Historically, the line of reasoning has gone in the opposite direction. One of the most compelling arguments for the Big Bang was that, in an eternal universe of the sort people thought existed in the late 19th and early 20th centuries, the conditions we observed in the universe seemed highly unlikely based on statistical mechanics. That is, we accept such a starting point for observable existence, in part, because of arguments about the likelihood of entropy levels in the first place. An eternal universe could produce such phenomena; it is just unlikely to.
(...or maybe it isn't unlikely... maybe such a universe invariably produces many Big Bangs: https://www.nytimes.com/2008/01/15/science/15brain.html ...leaving us once again with the Boltzmann Brain problem. Although I personally think this whole problem emerges from a bad use of probability theory, using methods that require i.i.d. assumptions where they don't hold, but that's another issue.)
I'm still not quite sure what your objection was because my original point was that claiming that there is no reason to think the universe would have low entropy (agreeing that it appears to be unlikely), and then invoking the anthropic principle to fix that issue, reduced explanations to the triviality that all possible things happen and so whatever is observed MUST occur. If you don't think the Past Hypothesis or Fine Tuning Problem needs an answer then there is no reason to invoke the Anthropic Principle in the first place. Your point that we could assume that the universe necessarily had to have something like the entropy it does in fact appear to have is what I was arguing for.
As an aside, if you mean the retarded/advanced wave asymmetry, @Kenosha Kid had a thread here about Cramer's Transactional Interpretation of quantum mechanics, which eliminates this asymmetry.
Quoting Count Timothy von Icarus
The usual formulations of quantum mechanics are indeed time-asymmetric, but QM can be equivalently formulated in a time-neutral manner, so that any time asymmetry is a matter of interpretation.
Quoting Count Timothy von Icarus
I never said that thermodynamics is what we mean by time; rather, what we mean by phenomenal time asymmetry (the so-called arrow of time) is explained by the thermodynamic asymmetry on a large scale, which in turn is explained by the asymmetry of boundary conditions. The forward direction of time tracks the direction of increasing entropy in our observable universe. But I am not sure what you mean by the thermodynamic arrow reversing.
Quoting Count Timothy von Icarus
You lost me here.
Quoting Count Timothy von Icarus
If your region of the universe that undergoes a uniform thermodynamic evolution is large enough, you won't notice anything, because your perception of time will track the direction of increasing entropy. Talking about the biological arrow of time, you will always remember entropy being lower in the "past," and you will not remember the "future."
The Big Bang theory grew out of General Relativity, which allowed it as one of its general solutions, and astronomical observations that made it increasingly likely, until there was little room for alternatives. That this also solved the problem of entropy was a welcome bonus. But the plausibility of the theory was not evaluated by entropy calculations of the kind that are performed to justify the thesis about the improbability of the initial state.
Before GR and Big Bang, in the Newtonian cosmology of the day, the most natural scientific picture of the world was that of an infinite, past-eternal universe. (Not because of some anti-religious bias, as some claim, but simply because it's hard to justify or even imagine a Newtonian universe whose timeline abruptly ends for no apparent reason some finite time in the past.) But that presented a challenge as people came to realize that many, if not all things in this world could not be past-eternal. The Sun, for example, which was thought to be a burning ball, could not have an infinite supply of fuel. Same for all the other stars in the sky. Who was lighting them up for all eternity?
The development of thermodynamics posed that problem especially acutely. Clearly, in a closed universe that is not in thermodynamic equilibrium entropy must have had a minimum a finite time in the past. This is where Boltzmann came up with the idea of a gigantic entropy fluctuation giving rise to the observable universe. In an infinite, eternal universe such a fluctuation would be almost inevitable, so its improbability was not an issue as such. However, brilliant man that he was, Boltzmann also thought of a worrying problem with this conjecture: the one we now know as the Boltzmann Brain. And even now, as you noted further on, as updated versions of Boltzmann fluctuation conjecture are being proposed to explain the origin of Big Bang, that problem still keeps cosmologists awake at night.
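Boltzmann's point, that such a fluctuation is fantastically rare yet "almost inevitable" in an eternal universe, can be sketched numerically (an illustration of the scaling only; the ΔS value and function names are arbitrary choices of mine): the probability of a spontaneous entropy dip of ΔS scales like exp(-ΔS/k_B), but the expected number of occurrences over N independent opportunities is N·P, which exceeds 1 once N is large enough.

```python
import math

def log10_fluctuation_prob(delta_s_over_k: float) -> float:
    """Boltzmann-style estimate: P ~ exp(-dS/k_B), returned as log10(P)."""
    return -delta_s_over_k / math.log(10)

def log10_trials_for_expected_hit(log10_p: float) -> float:
    """log10 of the number of independent 'tries' at which the expected
    count of fluctuations, N * P, reaches 1; beyond that, at least one
    occurrence becomes overwhelmingly likely (1 - exp(-N*P) -> 1)."""
    return -log10_p

# A modest macroscopic entropy dip, dS/k_B = 1e23:
lp = log10_fluctuation_prob(1e23)
print(lp)                                 # ~ -4.3e22: absurdly improbable...
print(log10_trials_for_expected_hit(lp))  # ...yet a finite wait, given eternity
```

For any fixed P > 0, an unbounded number of tries makes at least one occurrence all but certain, which is why the improbability of the fluctuation was not, by itself, the objection to Boltzmann's conjecture.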
Anyway, all this is irrelevant to the question of the "probability" of the true initial state of the Big Bang universe, i.e. a state that comes with no known history and no theory of its origin. Boltzmann's entropy fluctuation was posited not as an initial state (that would be a contradiction of terms) but as an event in a Newtonian universe. In that context the calculation of probability can be meaningfully performed. Not so for a true initial state.
Quoting Count Timothy von Icarus
Anthropic reasoning and fine tuning worries arise in the context of the origins of the universe, when theories such as Eternal Inflation are discussed. Such theories must explain the Past Hypothesis as a matter of course (or they would not match observations), but they raise other questions. Absent a theory though, worrying about the "specialness" or "improbability" of the initial state makes no sense, in my view.
When you think of the Big Bang, you just mean inflation, right? You're not adding a singularity to it, are you?
The problem with putting initial conditions off limits is that virtually everything we observe in the universe is dependent on initial conditions. That is, of the set of all physically possible things we could see, we shouldn't expect to see one universe more than the other. Thus, if we come to see "Christ is King," "Zeus wuz here," "Led Zeppelin rules!," scrawled out in quasars and galaxies at the far end of the cosmos, this shouldn't raise an eyebrow? Because, provided the universe is deterministic, such an ordering would be fully determined by those inscrutable initial conditions.
Just returning to this: the problem I see here is that assuming "The Big Bang" = T0 is begging the question (not saying you are doing this, just pointing out that this is often done). The Big Bang is a specific scientific theory about the origins of the universe, one which did not initially include, and which does not require Cosmic Inflation. Time does seem to break down at T0, provided T0 exists. However, this does not entail that time breaks down at the Big Bang because the two are not necessarily identical.
I agree that common usage is to call Cosmic Inflation and the Big Bang by the same name, but quite a lot of in-depth treatments of the topic refer to Cosmic Inflation as occurring before/being the cause of the Big Bang, and this is certainly how it was described when the theory was just a hypothesis.
My only point is that we have already taken one giant causal step back from the Big Bang. There are several theories that propose that we might take yet another step back (e.g. Black Hole Cosmology, or the idea that another Big Bang could spring to life in our universe at some point in the far distant future, aeons after it reaches thermodynamic equilibrium).
And if we expand our limits past the "initial conditions", we're forced to once again address the age old problem of the infinite regress vs the uncaused cause.
Quoting Count Timothy von Icarus
As your examples imply, if we were to treat such patterns as evidence - rather than just shrug - the only answer that suggests itself is to posit a deliberate creation. But then we open ourselves to the immediate counterargument that this just shifts the "initial conditions" back to the conditions of the creator.
It seems to me we're caught between a rock and a hard place here.
Historically, the so-called Big Bang theory came first, and theor(ies) of inflation were developed later, around the 1980s. Inflation pushes the Big Bang chronology a little further back and introduces some new theoretical posits, but in return it rather neatly explains some otherwise puzzling features of the early universe.
As I was saying earlier, the informal name Big Bang is variously attached to different theories, periods and events. Sometimes it is even used to refer to the entire cosmological timeline, going back to time zero (which historically has been called "Big Bang singularity," although few believe in an actual singularity.)
In the context of the Past Hypothesis, again for historical reasons, we can take the initial state to be after the hypothetical inflation, perhaps somewhere around the beginning of nucleosynthesis, when hydrogen and helium ions formed. The precise cutoff is not important, because the same considerations can be extended to earlier periods.
The difficulty of applying 19th century equilibrium thermodynamics to the early universe becomes even more severe in earlier periods, however, because they are extremely brief and thermodynamically unstable.
If that is how the theory is structured, yes. But that's a feature, not a bug. You could alternatively explain initial conditions in terms of later features - that is essentially what anthropic explanations do.
Quoting Count Timothy von Icarus
I am not sure what this fantastical hypothetical is supposed to argue.
Seeing text written in English using galaxies wouldn't undercut the Copernican Principle? I mean, the universe would be writing in human language on the largest scales we can observe... at that point, if you keep the principle, it has become dogma, something religious that can't be overturned by new observations.
Likewise, if we uncovered some sort of ancient Egyptian code in our DNA that said something like "we came from other stars to give you intelligence, send some light beams at these points when you read this," I would certainly start to take the History Channel loons more seriously, rather than shrug.
The likelihood of such a code is such that it would be solid evidence for ET conspiracies IMO. But if both potential causes of the message, random fluke and alien intervention, are entirely dependent on initial conditions and their deterministic evolution, then why would we assign more likelihood to one versus the other?
But this equal likelihood assignment is prima facie unreasonable and isn't how science operates in the first place.
Now if the universe isn't deterministic, then I don't know why what we know about the stochastic nature of the universe doesn't start applying to likelihoods off the bat, at T0 + 1, in which case some aspects of the earliest observable moments of the universe can be more or less likely.
It's supposed to argue that, were we to see something like a message in English written in block letters using galaxies as pixels, I doubt anyone would seriously say 'welp, that's consistent with the laws of physics and could have been caused by initial conditions. We can't say anything about this observation being surprising.'
That seems like a bug, not a feature. It reduces explanations of all phenomena consistent with fundamental physics to the inscrutable origins of being.
Not that people haven't made this argument. There are arguments for "law-like" initial conditions, as in "like physical laws, they are brute facts." But these normally posit some sort of loose boundary condition on initial conditions, along the lines of "it is a brute fact that the universe must start with low entropy."
But to claim that all initial conditions are completely immune to statistical analyses is to say that, if the universe is rigidly deterministic (a popular view), then all observations are necessary due to initial conditions; they occur with P = 1.
I don't see how this can't reduce debates about topics like the likelihood of life coming to exist in the universe to triviality. That is, the presence of "organic" compounds in asteroids, the number of potential DNA codons, the possible combinations of amino acids, the likelihood of RNA overcoming error rate problems in x hundred million years, etc. seems to become nonsense. If life occurred, it is because the initial conditions of the universe specify it with P = 1. So, if life only occurs on Earth, if life spreads out from Earth for another 10 billion years, moving on to other galaxies, never finding another origin point, our descendants should still say that there is nothing surprising about life's starting on just one planet out of trillions? But then this is not consistent with the frequentist perspective, since the frequency of planets generating life is indeed vanishingly low.
The problems I see here are:
1. How does one ground probabilities when everything in the universe happens with probability = 1? A subjective approach maybe?
2. If the universe isn't rigidly deterministic, if it is stochastic, like it appears to be observationally, then the entropy of the early universe is going to be dependent on stochastic quantum fluctuations existing at the beginning of the universe. We don't have a great idea how physics works at these energy levels, but we can certainly extrapolate from high energy experiments and the underlying mathematics, in which case it seems we can say something about the likelihood of the earliest observable conditions.
3. I still don't see how you're able to use statistical analysis in early universe cosmology to support or reject any theories in this case. If someone says "look, this background pattern is extremely unlikely unless the value of the weak force changed in such and such a way, a change that seems possible due to experimental data from CERN," why isn't the response "but other things could cause that same pattern and we can say nothing about the likelihood of such a pattern emerging since it is dependent on initial conditions."
Also, I don't see why observing seemingly unlikely phenomena requires positing any sort of creator or designer. Why can't we just assume some sort of hitherto unforeseen mechanism that makes the seemingly unlikely, likely? E.g., people used to think the complexity of life required a creator, but then the mechanisms underpinning evolution were discovered.
To me, the Copernican Principle is just that - a principle. It's not a physical law that's based on observation. It's more a method of inquiry.
The purpose of the principle is to prevent a kind of empirical special pleading, by insisting that we should avoid invoking some unexplained special local condition when explaining phenomena.
Quoting Count Timothy von Icarus
I'm not following your example here. What you describe is a standard case of evidence for a particular theory - in this case that human life was seeded by aliens. There's nothing here that poses any problem for the scientific method.
The problems only start when we contemplate evidence for the source of all possible observations (the "initial conditions").
The scientific method is based on the premise that observation is the arbiter of truth. But that requires that you can differentiate between inputs based on the output. In other words it must be possible to construct a theory that predicts only some observations but not others.
That does not seem to be possible when we contemplate the "initial conditions". Whatever they are, they must always explain all possible observations. It is therefore in principle impossible to assess these using the scientific method.
Quoting Count Timothy von Icarus
Talking about "initial conditions" seems to rule out any mechanism since, by definition, the conditions must be present before any mechanism has operated.
The initial conditions cannot be contingent, else whatever they're contingent on is actually the initial conditions.
Sure, but how do you ever know what are actually initial conditions and what just appear to be the earliest initial conditions you can make out? This is why I think it is confusing to have turned "the Big Bang" into "whatever moment is T0," i.e., making it tautologically equivalent to T0, such that inflation is now just "part of the Big Bang." If eternal inflation is the case, which is not an unpopular opinion, there is no T0 in the first place. Such a move requires you to assume a T0.
The initial Big Bang theory left several things unexplained. I don't get how you can consistently accept inflation and claim that initial conditions are unanalyzable. Alan Guth proposed inflation because CMB uniformity and black body radiation seemed incredibly unlikely without some sort of mechanism at work.
https://www.forbes.com/sites/startswithabang/2019/10/22/what-came-first-inflation-or-the-big-bang/?sh=508320894153
The point about the aliens is just this: if everything observable evolves deterministically from initial conditions, and initial conditions are unanalyzable, then the likelihood of any observation has no objective value. You could still ground probability in subjective terms, which is perhaps justified anyhow for other reasons, but it seems like a bad way to arrive at such a conclusion. The thing that has always kept me from embracing a fully subjective approach to probability is: (1) the existence of abstract propensities that seem isomorphic to physical systems; and (2) that fully subjective probability makes information theory arguably incoherent, which is not great since it is a single theory that is able to unify physics, biology, economics, etc. and provides a reasonable quantitative explanation for how we come to know about the external world via sensory organs (leaving aside the Hard Problem there).
If everything doesn't evolve deterministically from initial conditions, then we can say some observations of the early universe are more likely than others by generalizing our knowledge of QM back to the very earliest moments of the universe (which are only observable indirectly in how later epochs look). That is, we can use our current knowledge to say how clumpy the universe should be based on our understanding of the fluctuations that ostensibly caused those clumps. And if the earliest observable parts of the universe are subject to those probabilities, then the Past Hypothesis is still fair game.
I can't really comment on the merits of theories on the early universe from a physics perspective. I don't have the requisite background.
But in terms of general principle, I don't think there's any method to determine the "real" initial conditions empirically. Nor do I see any other source of this information. In general, it's not possible to assess whether your information about speculative questions is complete.
To avoid a misunderstanding: I'm not saying nothing can be known about initial conditions. There's a difference though between asking what the conditions might have been, and why they are.
Siegel's conclusion here strikes me as rather ridiculous. Science, in the strict sense, is a specific method for a specific kind of question. There is no reason to assume every question must have an answer.
It is also somewhat ironic, because in practice, option one is really just a way to sidestep dealing with option two. If there is an event horizon that causality can pass through in one direction, but cannot be followed through backwards, then that neatly rescues determinism from the paradox of the first cause. The problem is that while inflation may scramble information about the initial conditions so as to make them unreadable, under a deterministic framework it cannot destroy that information. So all we do is draw a veil over the initial conditions and declare them inaccessible, but that does not make them disappear.
Quoting Count Timothy von Icarus
I don't think the conclusion is necessary. I think you're getting trapped here in insisting on the deterministic connection, as this will inevitably reduce all events to an inscrutable ballet of energies and forces. When we make sense of the world, we construct models, and within these models probabilities are still objective in the sense you're using it.
And