I’ve just uploaded to the arXiv the paper “Finite time blowup for an averaged three-dimensional Navier-Stokes equation“, submitted to J. Amer. Math. Soc.. The main purpose of this paper is to formalise the “supercriticality barrier” for the global regularity problem for the Navier-Stokes equation, which roughly speaking asserts that it is not possible to establish global regularity by any “abstract” approach which only uses upper bound function space estimates on the nonlinear part of the equation, combined with the energy identity. This is done by constructing a modification of the Navier-Stokes equations with a nonlinearity that obeys essentially all of the function space estimates that the true Navier-Stokes nonlinearity does, and which also obeys the energy identity, but for which one can construct solutions that blow up in finite time. Results of this type had been previously established by Montgomery-Smith, Gallagher-Paicu, and Li-Sinai for variants of the Navier-Stokes equation without the energy identity, and by Katz-Pavlovic and by Cheskidov for dyadic analogues of the Navier-Stokes equations in five and higher dimensions that obeyed the energy identity (see also the work of Plechac and Sverak and of Hou and Lei that also suggest blowup for other Navier-Stokes type models obeying the energy identity in five and higher dimensions), but to my knowledge this is the first blowup result for a Navier-Stokes type equation in three dimensions that also obeys the energy identity. Intriguingly, the method of proof in fact hints at a possible route to establishing blowup for the true Navier-Stokes equations, which I am now increasingly inclined to believe is the case (albeit for a very small set of initial data).
To state the results more precisely, recall that the Navier-Stokes equations can be written in the form

$\displaystyle \partial_t u + (u \cdot \nabla) u = \nu \Delta u - \nabla p$

for a divergence-free velocity field $u$ and a pressure field $p$, where $\nu > 0$ is the viscosity, which we will normalise to be one. We will work in the non-periodic setting, so the spatial domain is ${\bf R}^3$, and for sake of exposition I will not discuss matters of regularity or decay of the solution (but we will always be working with strong notions of solution here rather than weak ones). Applying the Leray projection $P$ to divergence-free vector fields to this equation, we can eliminate the pressure, and obtain an evolution equation

$\displaystyle \partial_t u = \Delta u + B(u,u) \ \ \ \ \ (1)$

where $B$ is a certain bilinear operator on divergence-free vector fields (specifically, $B(u,v) = -\frac{1}{2} P( (u \cdot \nabla) v + (v \cdot \nabla) u )$). The global regularity problem for Navier-Stokes is then equivalent to the global regularity problem for the evolution equation (1).
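For reference (a standard formula, not specific to the paper), the Leray projection $P$ can be written on the Fourier side as an order $0$ multiplier that removes the gradient part of a vector field,

$\displaystyle \widehat{Pu}(\xi) = \hat u(\xi) - \frac{(\xi \cdot \hat u(\xi))}{|\xi|^2} \xi,$

and in particular is bounded on most of the standard function spaces; this is the same sort of boundedness that will be exploited for the averaged nonlinearity below.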
An important feature of the bilinear operator $B$ appearing in (1) is the cancellation law

$\displaystyle \langle B(u,u), u \rangle = 0$

(using the $L^2$ inner product on divergence-free vector fields), which leads in particular to the fundamental energy identity

$\displaystyle \frac{1}{2} \int_{{\bf R}^3} |u(T,x)|^2\ dx + \int_0^T \int_{{\bf R}^3} |\nabla u(t,x)|^2\ dx\, dt = \frac{1}{2} \int_{{\bf R}^3} |u(0,x)|^2\ dx.$

This identity (and its consequences) provide essentially the only known a priori bound on solutions to the Navier-Stokes equations from large data and arbitrary times. Unfortunately, as discussed in this previous post, the quantities controlled by the energy identity are supercritical with respect to scaling, which is the fundamental obstacle that has defeated all attempts to solve the global regularity problem for Navier-Stokes without any additional assumptions on the data or solution (e.g. perturbative hypotheses, or a priori control on a critical norm such as the $H^{1/2}({\bf R}^3)$ norm).
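To recall why these quantities are supercritical (a standard scaling computation, not specific to this paper): if $u(t,x)$ solves (1), then so does the rescaled field $u^{(\lambda)}(t,x) := \lambda u(\lambda^2 t, \lambda x)$ for any $\lambda > 0$, which lives at spatial scale $1/\lambda$; but its energy transforms as

$\displaystyle \int_{{\bf R}^3} |u^{(\lambda)}(0,x)|^2\ dx = \lambda^{-1} \int_{{\bf R}^3} |u(0,x)|^2\ dx,$

so a fixed energy budget becomes an ever weaker constraint as one passes to finer and finer scales ($\lambda \to \infty$).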
Our main result is then (slightly informally stated) as follows.

Theorem 1 There exists an averaged version $\tilde B$ of the bilinear operator $B$, of the form

$\displaystyle \tilde B(u,v) := \int_\Omega m_{3,\omega}(D) \mathrm{Rot}_{3,\omega} B( m_{1,\omega}(D) \mathrm{Rot}_{1,\omega} u, m_{2,\omega}(D) \mathrm{Rot}_{2,\omega} v )\ d\mu(\omega)$

for some probability space $(\Omega, \mu)$, some spatial rotation operators $\mathrm{Rot}_{i,\omega}$ for $i=1,2,3$, and some Fourier multipliers $m_{i,\omega}$ of order $0$, for which one still has the cancellation law

$\displaystyle \langle \tilde B(u,u), u \rangle = 0$

and for which the averaged Navier-Stokes equation

$\displaystyle \partial_t u = \Delta u + \tilde B(u,u) \ \ \ \ \ (2)$

admits solutions that blow up in finite time.
(There are some integrability conditions on the Fourier multipliers required in the above theorem in order for the conclusion to be non-trivial, but I am omitting them here for sake of exposition.)
Because spatial rotations and Fourier multipliers of order $0$ are bounded on most function spaces, $\tilde B$ automatically obeys almost all of the upper bound estimates that $B$ does. Thus, this theorem blocks any attempt to prove global regularity for the true Navier-Stokes equations which relies purely on the energy identity and on upper bound estimates for the nonlinearity; one must use some additional structure of the nonlinear operator $B$ which is not shared by an averaged version $\tilde B$. Such additional structure certainly exists – for instance, the Navier-Stokes equation has a vorticity formulation involving only differential operators rather than pseudodifferential ones, whereas a general equation of the form (2) does not. However, “abstract” approaches to global regularity generally do not exploit such structure, and thus cannot be used to affirmatively answer the Navier-Stokes problem.
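To spell out why the upper bound estimates transfer (a quick sketch, using only the form of $\tilde B$ in Theorem 1 and Minkowski's integral inequality): for any norm $\|\cdot\|_X$ on which spatial rotations and order $0$ Fourier multipliers are bounded uniformly in $\omega$, one has

$\displaystyle \|\tilde B(u,v)\|_X \leq \int_\Omega \| m_{3,\omega}(D) \mathrm{Rot}_{3,\omega} B( m_{1,\omega}(D) \mathrm{Rot}_{1,\omega} u, m_{2,\omega}(D) \mathrm{Rot}_{2,\omega} v ) \|_X\ d\mu(\omega),$

and since $\mu$ is a probability measure, any bilinear upper bound of the form $\|B(u,v)\|_X \lesssim \|u\|_Y \|v\|_Z$ (with $Y, Z$ also stable under rotations and order $0$ multipliers) is inherited by $\tilde B$ with a comparable constant.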
It turns out that the particular averaged bilinear operator $\tilde B$ that we will use will be a finite linear combination of local cascade operators, which take the form

$\displaystyle C(u,v) := \sum_{n \in {\bf Z}} (1+\epsilon_0)^{\frac{5n}{2}} \langle u, \psi_{1,n} \rangle \langle v, \psi_{2,n} \rangle \psi_{3,n}$

where $\epsilon_0 > 0$ is a small parameter, $\psi_1, \psi_2, \psi_3$ are Schwartz vector fields whose Fourier transform is supported on an annulus, and

$\displaystyle \psi_{i,n}(x) := (1+\epsilon_0)^{\frac{3n}{2}} \psi_i( (1+\epsilon_0)^n x )$

is an $L^2$-rescaled version of $\psi_i$ (basically a “wavelet” of wavelength about $(1+\epsilon_0)^{-n}$ centred at the origin). Such operators were essentially introduced by Katz and Pavlovic as dyadic models for $B$; they have essentially the same scaling property as $B$ (except that one can only scale along powers of $1+\epsilon_0$, rather than over all positive reals), and in fact they can be expressed as an average of $B$ in the sense of the above theorem, as can be shown after a somewhat tedious amount of Fourier-analytic symbol manipulations.
If we consider nonlinearities $\tilde B$ which are a finite linear combination of local cascade operators, then the equation (2) more or less collapses to a system of ODE in certain “wavelet coefficients” of $u$. The precise ODE that shows up depends on what precise combination of local cascade operators one is using. Katz and Pavlovic essentially considered a single cascade operator together with its “adjoint” (needed to preserve the energy identity), and arrived (more or less) at the system of ODE

$\displaystyle \partial_t X_n = - (1+\epsilon_0)^{2n} X_n + (1+\epsilon_0)^{\frac{5}{2}(n-1)} X_{n-1}^2 - (1+\epsilon_0)^{\frac{5}{2} n} X_n X_{n+1} \ \ \ \ \ (3)$

where $X_n: [0,T] \to {\bf R}$ are scalar fields for each integer $n$. (Actually, Katz-Pavlovic worked with a technical variant of this particular equation, but the differences are not so important for this current discussion.) Note that the quadratic terms on the RHS carry a higher exponent of $1+\epsilon_0$ than the dissipation term; this reflects the supercritical nature of this evolution (the energy $\frac{1}{2}\sum_n X_n^2$ is monotone decreasing in this flow, so the natural size of $X_n$ given the control on the energy is $O(1)$). There is a slight technical issue with the dissipation if one wishes to embed (3) into an equation of the form (2), but it is minor and I will not discuss it further here.
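To see why the total energy is monotone under (3) (a quick check, added for completeness): multiplying the $n$-th equation by $X_n$ and summing over $n$, the quadratic terms telescope,

$\displaystyle \frac{d}{dt} \frac{1}{2} \sum_n X_n^2 = - \sum_n (1+\epsilon_0)^{2n} X_n^2 + \sum_n \left( (1+\epsilon_0)^{\frac{5}{2}(n-1)} X_{n-1}^2 X_n - (1+\epsilon_0)^{\frac{5}{2}n} X_n^2 X_{n+1} \right) = - \sum_n (1+\epsilon_0)^{2n} X_n^2 \leq 0,$

since the two halves of the second sum cancel in pairs (assuming enough decay in $n$ to justify the rearrangement).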
In principle, if the mode $X_n$ has size comparable to $1$ at some time $t_n$, then energy should flow from $X_n$ to $X_{n+1}$ at a rate comparable to $(1+\epsilon_0)^{\frac{5}{2}n}$, so that by time $t_n + O( (1+\epsilon_0)^{-\frac{5}{2}n} )$ or so, most of the energy of $X_n$ should have drained into the $X_{n+1}$ mode (with hardly any energy dissipated). Since the series $\sum_n (1+\epsilon_0)^{-\frac{5}{2}n}$ is summable, this suggests finite time blowup for this ODE as the energy races ever more quickly to higher and higher modes. Such a scenario was indeed established by Katz and Pavlovic (and refined by Cheskidov) if the dissipation strength $(1+\epsilon_0)^{2n}$ was weakened somewhat (the exponent $2$ in the dissipation term has to be lowered). As mentioned above, this is enough to give a version of Theorem 1 in five and higher dimensions.
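Making the summability heuristic quantitative (informal, and only restating the above discussion in symbols): if the bulk of the energy occupies mode $X_n$ at time $t_n$, then the heuristic transfer times obey $t_{n+1} - t_n \lesssim (1+\epsilon_0)^{-\frac{5}{2}n}$, and hence

$\displaystyle \sup_n t_n \leq t_1 + C \sum_{n \geq 1} (1+\epsilon_0)^{-\frac{5}{2}n} = t_1 + \frac{C}{(1+\epsilon_0)^{5/2} - 1} < \infty,$

so the cascade should reach arbitrarily high frequencies (and hence blow up) by some finite time.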
On the other hand, it was shown a few years ago by Barbato, Morandin, and Romito that (3) in fact admits global smooth solutions (at least in the dyadic case $\epsilon_0 = 1$, and assuming non-negative initial data). Roughly speaking, the problem is that as energy is being transferred from $X_n$ to $X_{n+1}$, energy is also simultaneously being transferred from $X_{n+1}$ to $X_{n+2}$, and as such the solution races off to higher modes a bit too prematurely, without absorbing all of the energy from lower modes. This weakens the strength of the blowup to the point where the moderately strong dissipation in (3) is enough to kill the high frequency cascade before a true singularity occurs. Because of this, the original Katz-Pavlovic model cannot quite be used to establish Theorem 1 in three dimensions. (Actually, the original Katz-Pavlovic model had some additional dispersive features which allowed for another proof of global smooth solutions, which is an unpublished result of Nazarov.)
To get around this, I had to “engineer” an ODE system with similar features to (3) (namely, a quadratic nonlinearity, a monotone total energy, and the indicated exponents of $1+\epsilon_0$ for both the dissipation term and the quadratic terms), but for which the cascade of energy from scale $(1+\epsilon_0)^{-n}$ to scale $(1+\epsilon_0)^{-(n+1)}$ was not interrupted by the cascade of energy from scale $(1+\epsilon_0)^{-(n+1)}$ to scale $(1+\epsilon_0)^{-(n+2)}$. To do this, I needed to insert a delay in the cascade process (so that after energy was dumped into scale $(1+\epsilon_0)^{-n}$, it would take some time before the energy would start to transfer to scale $(1+\epsilon_0)^{-(n+1)}$), but the process also needed to be abrupt (once the process of energy transfer started, it needed to conclude very quickly, before the delayed transfer for the next scale kicked in). It turned out that one could build a “quadratic circuit” out of some basic “quadratic gates” (analogous to how an electrical circuit could be built out of basic gates such as amplifiers or resistors) that achieved this task, leading to an ODE system (written out in full in the paper) in which each mode $X_n$ of (3) is replaced by a small cluster of modes $X_{1,n}, X_{2,n}, X_{3,n}, X_{4,n}$ at each scale, coupled to each other through a suitable large parameter and a suitable small parameter. To visualise the dynamics of such a system, I found it useful to describe this system graphically by a “circuit diagram” that is analogous (but not identical) to the circuit diagrams arising in electrical engineering:
The coupling constants here range widely from being very large to very small; in practice, this makes the $X_{2,n}$ and $X_{3,n}$ modes absorb very little energy, but exert a sizeable influence on the remaining modes. If a lot of energy is suddenly dumped into $X_{1,n}$, what happens next is roughly as follows: for a moderate period of time, nothing much happens other than a trickle of energy into $X_{2,n}$, which in turn causes a rapid exponential growth of $X_{3,n}$ (from a very low base). After this delay, $X_{3,n}$ suddenly crosses a certain threshold, at which point it causes $X_{1,n}$ and $X_{4,n}$ to exchange energy back and forth with extreme speed. The energy from $X_{4,n}$ then rapidly drains into $X_{1,n+1}$, and the process begins again (with a slight loss in energy due to the dissipation). If one plots the total energy at a given scale $n$ as a function of time, it looks schematically like this:

As in the previous heuristic discussion, the time between cascades from one frequency scale to the next decays exponentially, leading to blowup at some finite time. (One could describe the dynamics here as being similar to the famous “lighting the beacons” scene in the Lord of the Rings movies, except that (a) as each beacon gets ignited, the previous one is extinguished, as per the energy identity; (b) the time between beacon lightings decreases exponentially; and (c) there is no soundtrack.)
There is a real (but remote) possibility that this sort of construction can be adapted to the true Navier-Stokes equations. The basic blowup mechanism in the averaged equation is that of a von Neumann machine, or more precisely a construct (built within the laws of the inviscid evolution $\partial_t u = \tilde B(u,u)$) that, after some time delay, manages to suddenly create a replica of itself at a finer scale (and to largely erase its original instantiation in the process). In principle, such a von Neumann machine could also be built out of the laws of the inviscid form of the Navier-Stokes equations (i.e. the Euler equations). In physical terms, one would have to build the machine purely out of an ideal fluid (i.e. an inviscid incompressible fluid). If one could somehow create enough “logic gates” out of ideal fluid, one could presumably build a sort of “fluid computer”, at which point the task of building a von Neumann machine appears to reduce to a software engineering exercise rather than a PDE problem (provided that the gates are suitably stable with respect to perturbations, but (as with actual computers) this can presumably be done by converting the analog signals of fluid mechanics into a more error-resistant digital form). The key thing missing in this program (in both senses of the word) to establish blowup for Navier-Stokes is to construct the logic gates within the laws of ideal fluids. (Compare with the situation for cellular automata such as Conway’s “Game of Life“, in which Turing complete computers, universal constructors, and replicators have all been built within the laws of that game.)
89 comments
4 February, 2014 at 2:30 pm
Mitzpe Ramon
This seems to be the first post on the Navier-Stokes problem where you do *not* say from the beginning that you do not claim a substantial progress. So would you say that this is a big step forward towards a solution of the problem? Maybe the biggest step in the last decades, because you found a program which is not unrealistic?
6 February, 2014 at 3:44 pm
Liam
Of course, other people have to decide this.
4 February, 2014 at 2:52 pm
E.L. Wisty
Reblogged this on Pink Iguana and commented:
Tao’s Navier-Stokes paper
5 February, 2014 at 8:19 am
Anonymous
As a first-year undergraduate, I am curious as to what this actually means. Given the timing of this and the statement “…using a blowup solution to a certain averaged version of the Navier-Stokes equation to demonstrate that any proposed positive solution to the regularity problem which does not use the finer structure of the nonlinearity cannot possibly be successful.”, would it be safe to read this as “Otelbaev’s claimed proof is wrong.”?
5 February, 2014 at 10:35 am
samuelfhopkins
Of course it’s always possible that both proofs are correct and mathematics is inconsistent. :)
5 February, 2014 at 1:06 pm
jussilindgren
Dear Terry, what do you think of this simple argument:
The full Navier-Stokes equations can be stated as
with the usual incompressibility condition
We transform the equation in a more useful form using the following vector calculus identity
Substituting this back into the Navier-Stokes equation one has
Operating with the curl operator, one gets the vorticity equation
In order to ultimately obtain the enstrophy equation, one needs to dot the Navier-Stokes equation with vorticity:
Now we use the following vector calculus identity, which is the key in this rather short proof of regularity :
Let us make then the following identification:
and
We then have
We substitute this expression back to the enstrophy equation to get:
The only problematic term now is the second one on the right side of the equation. Let us consider it more closely:
We can write it as
Where we have introduced the matrix differential operator
It is important to note that this representation matrix is skew-symmetric.
Now we know that a scalar is invariant under transposition, so we have the equality
From skew-symmetry of the operator matrix we then have
So that finally we have the rather surprising equality
From the enstrophy equation we can solve for this!
Substituting back this to the latter version of the enstrophy equation, one gets
Now if integrate over the whole space
and keep in mind that the velocity field decays sufficiently fast at infinity, by direct use of divergence theorem the divergence term on the right hand side is killed by the space-integral, and we get that
It is well known that if the total enstrophy stays bounded, the solutions stay regular. Note that in particular for the Euler equations, total enstrophy is conserved. QED. Best, Jussi Lindgren
9 March, 2014 at 12:59 am
Anonymous
The proof of this dissipation law is clearly erroneous, but it does not imply the claim is false. Do there exist examples of suitably regular solutions, say, in H^1([0,T],H^1(R^3,R^3)), that violate this inequality? (I know that there are solutions of infinite total energy that produce singularities.)
25 April, 2014 at 11:25 pm
Anonymous
All of the vector calculus stuff looks fine, at least.
5 February, 2014 at 1:28 pm
jussilindgren
you can comment at my blog at http://navierstokesregularity.wordpress.com
5 February, 2014 at 3:37 pm
Vlado Vrhovski
http://www.newscientist.com/article/dn24915-kazakh-mathematician-may-have-solved-1-million-puzzle.html#.UvLKT7TvnIU
6 February, 2014 at 1:23 am
friend48
Jussi wrote
Now we know that a scalar is invariant under transposition, so we have the equality
$(\mathbf{u}\times \mathbf{\omega})^T \mathbf{R}\mathbf{\omega}=\mathbf{\omega}^T \mathbf{R}^T (\mathbf{u}\times \mathbf{\omega})$
This would be correct if R were a usual matrix. However, with an operator matrix this is wrong. Just try the product of 2 scalar functions with a differential operator in between. Then the formula obviously fails.
Another issue.
Prof. Tao’s paper does not formally kill Otelbaev’s one. O., while considering the periodic problem, has the linear part, the Laplacian on the torus, i.e., an operator with discrete spectrum. In Prof. Tao’s realization of his abstract setting, the linear part is the Laplacian in the whole space, i.e., an operator with purely continuous spectrum. Moreover, Otelbaev’s ‘proof’ is heavily based upon the lowest point in the spectrum being isolated.
6 February, 2014 at 5:01 am
Anonymous
Otelbaev has implicitly used BOTH integral condition (1.4) and the periodic boundary conditions (1.3); they are assumed in deriving (4.5) from (4.4). These conditions are extra to the Clay formulation in periodic domains. Hence he is working on an over-determined system of pde. The (subtle) point is that we are constrained in applying the Helmholtz-Leray projection; any attempt to invert the pressure Poisson equation must be fully justified. Otelbaev did not derive any a priori bound on the “RHS” function of the Poisson equation. At most, his “proof” is formal and is not for the Clay NS problems.
6 February, 2014 at 9:13 am
Arie Israel
Dear Terry, Thanks for the incredible blog post. This has already become a personal favorite of mine. The idea of using intuition from electrical engineering to help construct blow-up solutions really struck me as profound. As a non-expert on fluid mechanics, I have a very simple question: Do you believe that one can make any formal equivalence or connection between regularity for Navier-Stokes and regularity for averaged Navier-Stokes. e.g., for given fixed initial data, is the solution to the averaged version of Navier-Stokes _more_ regular than the solution to true Navier-Stokes (with the same initial data)?
7 February, 2014 at 4:31 pm
Terence Tao
For short time (local) theory, in which the evolution is close to linear, one expects any local existence theory for the non-averaged NS equation to carry over to the averaged NS equation, basically because local existence theory is based on linear or multilinear estimates, which behave well with respect to averaging (Minkowski’s inequality). But for long time theory the two could be quite different. In particular the NS equation has the vorticity equation formulation (with all the attendant phenomena such as vortex stretching), which the averaged one does not. There is some work of Hou and Lei, http://www.ams.org/mathscinet-getitem?mr=2492706, that I recently learned about that suggests that the true NS equations may have some stabilising effects in their nonlinearity that make their behaviour better (at least for typical data) than other NS-like equations. This suggests there are going to be real challenges in transferring the blowup results from the averaged NS setting to the true NS setting, but they do not seem to be completely insurmountable. (Even if true NS behaves better than averaged NS for “most” data, one just needs a very special set of initial data in which the true NS can exhibit the “fluid logic gate” behaviour that the averaged NS equation enjoys by design, in order to (in principle, at least) replicate the blowup scenario.)
6 February, 2014 at 10:51 am
gowers
I have a variant of Mitzpe Ramon’s question, but would understand if you didn’t want to answer it. You say that your proof hints at a way of establishing blow-up for the true Navier-Stokes equation. Would it be correct to deduce from your willingness to go public with this that there are some clearly identifiable serious difficulties in turning that hint into a proof? Given your speed at doing mathematics, the deduction is not all that convincing, but I’m still curious, as a total non-expert, to know whether there’s any prospect of seeing another Clay problem fall in the next few years.
6 February, 2014 at 11:11 am
Felipe Voloch
@gowers re: Clay problem falls. My understanding, from asking Jaffe a question many years ago, is that there is no Clay prize for disproving any of the conjectures featured in the prizes, except perhaps PvsNP.
6 February, 2014 at 11:26 am
comment
“To give reasonable lee-way to solvers while retaining the heart of the problem, we ask for a proof of one of the following four statements. [...] (C) Breakdown of Navier–Stokes solutions on ${\bf R}^3$.”
http://www.claymath.org/sites/default/files/navierstokes.pdf
6 February, 2014 at 11:34 am
Sniffnoy
Not so; see the rules. For P vs NP and the Navier-Stokes problems, either a proof or counterexample will get the full prize. (And for Navier-Stokes, doing it for either the non-periodic version or the periodic version suffices for the prize.) For the other ones, a counterexample may be awarded the prize, or may get a smaller prize, or nothing; it’s up to them.
6 February, 2014 at 11:56 am
Felipe Voloch
Thanks for the corrections. I was misinformed by Jaffe. Since I asked him that at the end of one of the Millennium lectures at UT, his answer may be on tape. https://www.ma.utexas.edu/millenium_site/mlectures.html
12 February, 2014 at 6:19 am
David Brown
How long will the Clay Mathematics Institute survive?
“The average life expectancy of a multinational corporation—Fortune 500 or its equivalent—is between 40 and 50 years.” http://www.businessweek.com/chapter/degeus.htm
7 February, 2014 at 4:37 pm
Terence Tao
I think there’s a limit to how much one can deduce from “dogs that don’t bark in the night” :-). The paper reflects my current thinking on the subject, which is that (a) proving global regularity for Navier-Stokes is a hopeless task for the foreseeable future, but (b) proving blowup for Navier-Stokes is not. (This is a shift from my previous viewpoint that (a) and the negation of (b) were both true; it was only through the course of trying to formalise (a) that I was able to glimpse a possible route to (b) that I didn’t see before.)
But there is still quite a long way to go to actually reach a proof of blowup for Navier-Stokes. Most obviously, we currently don’t have any designs for logic gates made out of pure fluid (although the pointer to the fluidics literature that Arie Israel makes below is extremely intriguing), whereas in the averaged Navier-Stokes equation I could “bake in” these gates into the laws of physics by fiat. I certainly intend to look at these issues further, but I can’t predict what will come out of this program yet.
6 February, 2014 at 12:13 pm
Gil Kalai
Dear Terry, you draw the analogy with Conway’s game of life which allows universal computation. Can you show such universal computation for the (dyadic?) variants of NS you considered in the paper? One apparatus that can be helpful is the fault-tolerance apparatus (which also goes back to von Neumann) which allows universal computation for noisy logical gates.
(Apropos of that, to the best of my knowledge it is not known if noisy game of life allows universal computation http://cstheory.stackexchange.com/questions/17914/does-a-noisy-version-of-conways-game-of-life-support-universal-computation .)
7 February, 2014 at 8:08 am
Anonymous
Gil, why is error correction important? It’s needed for practical quantum computation because the qubits can’t be initialized perfectly. But for the purely mathematical NS problem, isn’t it enough if there’s some set of initial conditions (i.e. of measure 0) where the computation goes through? For that matter, it seems odd that the Millennium problem calls for an explicit counterexample in the case of a negative answer. Maybe there’s a blowup that only has a non-constructive existence proof. What then?
7 February, 2014 at 4:50 pm
Terence Tao
One would have to be precise about the computational model, but I do have the impression that if one was allowed to chain together an unlimited number of ‘quadratic gates’ connecting an unlimited number of modes (i.e. to solve arbitrary quadratic ODE $\partial_t X = B(X,X)$, subject to the energy conservation law $\langle B(X,X), X \rangle = 0$), one could then perform an essentially Turing-complete set of (continuous) computing tasks. This is reasoning in analogy with systems of quadratic equations over, say, a finite field ${\bf F}$; one can convert any bounded degree algebraic system of equations over this field into a system of quadratic equations by expanding the number of variables by a bounded amount. In particular, I think one can already show that solving quadratic equations over finite fields is NP-complete (it’s pretty close to 3-SAT, for instance).
Some noise tolerance is going to be needed, because it would be hopeless to expect the von Neumann machine to create a perfect rescaled version of itself, while completely deleting all previous traces of itself. (In particular, the rescaling we are using does not preserve the viscosity term (it is supercritical in that regard), so one cannot expect perfect self-replication.) However, it may well be that by choosing parameters appropriately that the noise tolerance could be obtained by PDE methods (e.g. Gronwall inequality) rather than by deliberately encoding error correction into the software; this is what happened in the averaged NS model I considered (although it took a while to figure out exactly how to select the parameters and to control the noise levels, leading to a rather lengthy and complicated bootstrap argument in the paper). I have a vague hope that if one can make the fluidic circuitry be based on “digital” signals rather than “analog” ones, then this will automatically give a certain degree of noise tolerance (the same way that physical electronic computers are somewhat resistant to low levels of external radiation due to the digital nature of the signals) and this may be all that is needed for the purposes of creating the von Neumann machine that blows up in finite time.
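(A standard illustration of the degree-reduction step mentioned above, added for concreteness: a cubic constraint can be traded for two quadratic ones at the cost of one auxiliary variable,

$\displaystyle xyz = w \quad \Longleftrightarrow \quad p = xy, \ \ pz = w,$

and iterating this replacement converts any system of bounded-degree polynomial equations into a quadratic system with only a boundedly larger number of variables.)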
8 February, 2014 at 1:51 pm
Anonymous
Thanks… the Wikipedia article on fluidics was interesting and I didn’t realize they were that computationally powerful. But I thought fluidic devices (such as automatic transmissions in old cars) involved putting carefully designed physical obstructions in the moving fluid: if the NS problem is supposed to be obstruction-free then can those techniques still work? Could there be a way to use solitons (since they are self-reinforcing) to communicate between stages, like gliders in Conway’s Life game, instead of fancy digital error correction? As this probably shows, I’m pretty ignorant about this general topic, and don’t know if solitons even arise in NS.
10 February, 2014 at 9:28 am
Terence Tao
Solitons (or at least travelling waves) are a possibility, although stability of these solutions will be an issue.
It’s true that before one can use the fluidics literature, one has to first solve the “materials science” problem of constructing materials out of pure ideal fluid which can function as the physical obstructions used in fluidic gates. This looks practically impossible, but perhaps not mathematically impossible: if for instance one can create vortex sheets of extremely high vorticity that are reasonably stable, and not penetrable by lower-vorticity streams of fluids, then they might be able to functionally serve as the walls and other obstructions of fluidic gates. (There is a price one pays for using such fancy fluid formations in one’s machine, though, which is that the task of constructing the replica of this machine becomes more difficult. Still, this feels like it is “merely” an extremely difficult engineering problem, rather than a fundamental obstruction.)
13 February, 2014 at 5:20 am
David Brown
“… if for instance one can create vortex sheets of extremely high vorticity that are reasonably stable, and not penetrable by lower-vorticity streams of fluids, then they might be able to serve as the walls and other obstructions of fluidic gates. …” Could it be of some value to look into MHD solutions? If there are ideal fluid vortices that could approximate an infinite number of electron energy-density levels, then perhaps spintronics could suggest a way of approximating ideal fluidic gates. http://en.wikipedia.org/wiki/Spintronics
4 March, 2014 at 9:55 am
Jay
Suppose we could construct a bunch of fluidic gates from water and some material we can move and shrink at will (say for 2-4 orders of magnitude).
How should we arrange the gates so that it looks like a physical proof of concept for your mathematical proposal?
4 March, 2014 at 10:12 am
Terence Tao
Well, there would be a large variety of possible designs (cf. the many possible ways to design “spaceships” in the Game of Life, with many of the larger designs using more primitive objects such as “glider guns” as an analogue of the hypothetical fluidic logic gates considered here). The conservation laws of the Euler equations, particularly conservation of energy, however provide a significant additional challenge which is not present in the Game of Life (in which there is no upper bound on the number of active cells one can generate).
In analogy with the constructions in my paper, once one has enough gates to build reliable and programmable machines, one could imagine a design that consists of a large, slow machine $A$ whose primary purpose is to create a tiny, fast, and low-energy machine $B$, which then dismantles (and cannibalises) the large machine $A$ to create a smaller copy $A'$ of that large machine, which holds a large fraction (say 99%) of the original energy of $A$. The smaller machine $B$ then “turns on” the smaller copy $A'$ to repeat the process, and then moves away from that copy (at which point we don’t care too much what happens next to $B$, though it may be a good idea to put in some sort of “self-destruct” mechanism into B’s programming to guarantee that it doesn’t come back to disrupt the dynamics). The process then repeats itself, with the majority of the energy shifting itself to the next finer scale at an increasingly rapid pace. As long as the energy transfer process is efficient enough, one should be able to “outrun” the effects of viscosity, which can then be treated as a negligible error (it becomes weaker at an exponential rate as one moves from one scale to the next, while renormalising the dynamics appropriately).
4 March, 2014 at 10:55 am
Jay
Is there any guarantee that, if you can construct A, there’s a machine B that can dismantle A?
4 March, 2014 at 11:01 am
Terence Tao
Well, if B has sufficiently advanced programming, I believe this is possible in principle, particularly if it is possible to first “turn A off”, effectively converting A into a collection of much smaller, mostly inert, components. In particular, even though A is much more massive than B, each individual component of A could be a lot smaller than B, so B could work on deactivating and then reassembling individual components of A one at a time.
Of course, the task of actually engineering the required hardware and software for these machines would be enormously difficult, but I don’t see why it is necessarily impossible.
4 March, 2014 at 11:07 am
Jay
Yep, if A was inert that sounds “easy”. But what if we can’t turn A off? Actually, it would most probably need to be self-correcting, no?
4 March, 2014 at 11:22 am
Jay
(Ok, let’s just add a backdoor A could not correct for)
Thank you very much for your answers!
5 March, 2014 at 7:03 pm
Anonymous
I notice that the Clay problem statement has a force term, that we think of as constant (like gravity) but it’s written as completely parametrized in time and space, so it can do anything it wants, subject to its Jacobian decaying at superpolynomial (but maybe only slightly superpolynomial) speed. So if the von Neumann machine can scale itself down at superpolynomial speed asymptotically faster than the force decay, maybe the force term can be used to “operate the machinery”, i.e. control errors by nudging stuff back into place as needed? As the force decays more slowly than the machine scales down, it becomes more and more powerful (relative to the machine) as time increases.
Am I reading that right? Is the force constraint really given too loosely to resemble a physical system? Your paper doesn’t seem to use the force term, but maybe I missed it.
5 March, 2014 at 7:41 pm
Terence Tao
In the global regularity problem, the forcing term is also required to be smooth as well as decaying in space; in particular, all derivatives of the forcing term remain bounded. Because of this, the forcing term is not much use for directly manipulating the fine-scale dynamics of a solution; in fact, at fine scales, the strength of this term is even smaller than the viscosity term, which is already being treated perturbatively. (However, the forcing term can be used to deal with coarse scale components of the solution, and as such is useful for such tasks as passing back and forth between periodic and non-periodic formulations of the Navier-Stokes problem; I exploited this fact in a previous paper on Navier-Stokes.)
For related reasons, fine tuning the initial data at fine scales will also not help in engineering blowup: by the time the blowup machine actually gets to such fine scales, the fine scale components of the initial data would have long since dissipated away. The blowup mechanism has to be completely endogenous at fine scales: the initial data and/or forcing term can set things up at the initial scale, but after that the solution has to “blow itself up” rather than rely on data or forcing term.
6 February, 2014 at 10:28 pm
Gil Kalai
The question of whether NS evolutions in three dimensions and in two dimensions support universal classical computation and classical fault tolerance was also raised in my debate with Aram Harrow regarding quantum computation in this comment (to the seventh post). NS equations were also considered a couple of times earlier in the debate by John Sidles. The context was the question of how to define classical processes “without classical fault-tolerance.” A specific question that was asked is if 2-dimensional Navier-Stokes evolutions can be approximated (in all scales) by bounded depth (probabilistic) circuits? (Or at least is it the case that they do not support universal classical computation.)
See also this earlier comment there speculating a connection between the computational complexity/fault tolerance capabilities of a class of classical evolutions and questions about regularity, well-posedness, Maxwell demons and other self-defeating behavior. The comments and threads following these comments contain further interesting links: to a paper by Andy Yao “Classical physics and the Church-Turing Thesis,” to an MO question by Mariano Suárez-Alvarez, and to an earlier post on self-defeating behavior over this blog.
7 February, 2014 at 11:34 am
Arie Israel
Gil, I was surprised to learn that experimental fluid mechanics people had thought of this analogy before. Apparently the key name is ‘Fluidics’ and those ideas date back at least to the sixties. Not sure what is the state of the art. Additionally, early electrical engineers believed that electricity followed laws similar to fluid dynamics. This is called the ‘hydraulic analogy’. Historically, that’s where the word ‘current’ comes from. All this can be found on Wikipedia.
9 February, 2014 at 6:03 pm
Anonymous
So the ideas have been around for more than 50 years.
7 February, 2014 at 3:24 am
Navier-Stokes Fluid Computers | Combinatorics and more
[…] Tao posted a very intriguing post on the Navier-Stokes equation, based on a recently uploaded paper Finite time blowup for an […]
7 February, 2014 at 1:48 pm
kurt
Well, I thought this was clear when looking at the projections
$u_\theta=\langle u,\theta\rangle,\;\theta \in S^2$ and writing NS as
$$\partial_t u_\theta + div(u_\theta u -\nu \nabla u_\theta +\theta p)=0$$
followed by some estimates on the vector field $X=u_\theta u -\nu \nabla u_\theta +\theta p$ using linear theory for the scalar functions $u_\theta$,
(\vert\vert\nabla u_\theta + u_\theta X\vert\vert = ..).
But I’m certainly wrong as I abandoned NS a long time ago :)
At any rate a great result. Thanks.
8 February, 2014 at 1:14 am
Anonymous
Typographical comment: For inner products, one should use
\usepackage{mathtools}
\DeclarePairedDelimiter{\inner}{\langle}{\rangle}
Then “\inner{v,v}” will produce “$\langle v,v \rangle$”.
8 February, 2014 at 2:18 am
Anonymous
This was a comment to the paper—not your blogpost. :)
8 February, 2014 at 4:18 am
MrCactu5 (@MonsieurCactus)
You can build computers out of many things. I am impressed you can build a computer out of the Navier-Stokes equation.
A long time ago, I read this paper MineSweeper is NP Complete. Nobody plays Minesweeper anymore — instead they play Candy Crush Saga.
When I solve Sudoku’s I always imagine the numbers moving across the page. It is not that effective on the harder puzzles though.
8 February, 2014 at 5:43 am
Anonymous
Another comment: For maps, one should use “\colon” to get the correct horizontal spacing. A possibility is to invoke
\newcommand*\map[3]{#1\colon #2\to #3}
in the preamble and then use, say “\map{f}{A}{B}” to get “$f\colon A\to B$”.
8 February, 2014 at 1:21 pm
John Sidles
The computational potentiality of Navier-Stokes flow, and its relation to open questions in quantum dynamical simulation, both were discussed as long ago as 2008, in the context of Scott Aaronson’s lecture Quantum Computing Since Democritus Lecture 13: How Big Are Quantum States (per comment #45 of Scott’s post).
It’s *WONDERFUL* to see that high-level mathematical techniques and creativity now are being focused upon these tough-but-crucial questions, which are important equally to mathematicians, scientists, engineers … and (in the long run) even medical researchers like me.
Please accept my appreciation of these fine new results, my thanks for the work they represent, and my sincere hopes for sustained progress in this fascinating and far-reaching line of research.
9 February, 2014 at 9:19 pm
Anonymous
Tao is unable to prove the Navier Stokes global regularity in the foreseeable future. That’s all folks!
9 February, 2014 at 9:52 pm
Me
Dear Terry
The analogy with electrical engineering is truly fascinating.
I was wondering what would be the obstruction to applying your ideas in 2D? (where Navier-Stokes and Euler equations are known to have global regular solutions). In other words, could we build in 2D such quadratic circuits that quickly replicate signals to finer and finer scales? Maybe in 2 dimensions you get fewer X_n modes to play with for a given n, and so the circuits you can build are smaller
Thanks
10 February, 2014 at 8:54 am
Terence Tao
See my answer to a similar question at http://terrytao.wordpress.com/2007/03/18/why-global-regularity-for-navier-stokes-is-hard/#comment-270129 . Basically, in 2D the dissipation term is much stronger, and I don’t think my construction adapts to 2D Navier-Stokes. It is perhaps possible that one could modify the methods to create an averaged 2D Euler-type equation which exhibits rapid growth for a fixed amount of time (similar to a result I had worked out with Colliander, Keel, Takaoka and Staffilani for 2D periodic cubic NLS, see http://terrytao.wordpress.com/2008/08/14/weakly-turbulent-solutions-for-the-cubic-defocusing-nonlinear-schrodinger-equation/ ), but over long enough time, all the error terms will eventually pile up and prevent any accurate analysis of the situation. (With finite time blowup, there is not enough time for the low frequency modes to cause much mischief as the von Neumann machine replicates to ever finer scales, but I don’t see how to deal with these modes for arbitrary amounts of time. In particular, in the absence of viscosity, there is the bizarre but not entirely impossible scenario in which the low frequency errors eventually manage to spontaneously form into their own von Neumann machine, which is “faster” than the original one and can “intercept” and then “disable” the original machine, halting the cascade to finer and finer scales.)
11 February, 2014 at 3:06 am
Anonymous
Hi Terry Tao,
I have some comments and queries. All equation numbers refer to arXiv:1402.0290. (a) For large initial data and for long-time solutions, equations (1.1) and (1.5) have not been shown to be equivalent.
A “blowup” is a weak solution. (b) The NS equations are locally well-posed (or globally for small data) for t in [0, t_a], where t_a depends on the size of initial data. Within Schwartz class, infinitely many initial flows, u_0(x), can be specified. Over [0, t_a], (1.9) is, if not false, irrelevant to the Navier-Stokes (1.1). (c) Initial value problem (1.9) defines a new set of equations for fluid motions but describes a stochastic field which is already in existence. In view of (b), when/where do the stochastic characters of (1.9) originate? (d) Denote your blowup time by t_b. Following (b) and (c), IVP (1.9) with u_0(x) is irrelevant to the Navier-Stokes (1.1) for any t in (t_a,t_b]. (e) Given any *finite-energy* initial data (Schwartz), what is the implication in physics for the finite-time blowup in a *stochastic field*? What happens to the field *beyond* t_b?
Thanks
11 February, 2014 at 8:21 am
Terence Tao
(a) The precise sense in which I show (1.1) and (1.5) to be equivalent is stated and proved in Lemma 1.3.
(b)-(d). It is true that the averaged Navier-Stokes equation (1.9) (which, by the way, is a deterministic equation, not a stochastic one, as the probabilistic (or averaging) variable in the definition of $\tilde B$ is integrated out) is not directly related to the true Navier-Stokes flow (1.1) (or its equivalent form (1.5)). So the results of this paper do not directly say anything about the true Navier-Stokes equations. However, as described in the introduction, a blowup result for the averaged Navier-Stokes equation does create a barrier to certain strategies for proving global regularity for the true Navier-Stokes equation, in that any such strategy must be capable of distinguishing (1.1) from (1.9) if it is to have any chance to work. Many proposed strategies for establishing global regularity for true Navier-Stokes (including some that were proposed very recently) fail this test and can thus be excluded as viable strategies.
14 February, 2014 at 3:09 am
Anonymous
It is a salient point that eqn (1.5) is a DERIVED equation from (1.1). Close to blow-up time t_b, u does not necessarily stay in L^1, for example. The velocity Hessian of mild solutions tends to be unbounded throughout R^3. Therefore it is invalid to invert the pressure Laplacian without knowledge of velocity decays at infinity. In other words, (1.5) may well become an increasingly useless substitution for (1.1) as time approaches t_b. Above all, (1.5) must be fully justified to exist for blow-ups. The generalisation from (1.5) to (1.9) would not have a well-defined meaning, particularly for solutions involving finite-time singularities. Wherever potential blow-ups are implied, such as the Euler in S1.3, this line of reasoning has ramifications.
14 February, 2014 at 8:34 am
Terence Tao
The justification of the equivalence of the global regularity problems (or equivalently after taking contrapositives, the blowup problems) for (1.1) and (1.5) is indeed non-trivial, and occupies a significant portion of my previous paper http://msp.org/apde/2013/6-1/p02.xhtml . As I said in my previous comment, the precise sense in which I show (1.1) and (1.5) to be equivalent is stated and proved in Lemma 1.3 (which relies heavily on the results of that previous paper).
11 February, 2014 at 1:27 pm
Marcelo de Almeida
Reblogged this on Being simple.
12 February, 2014 at 8:55 am
friend48
Prof. Tao,
concerning your final remark
Many proposed strategies for establishing global regularity for true Navier-Stokes (including some that were proposed very recently) fail this test and can thus be excluded as viable strategies.
Your great paper does not formally exclude as a viable strategy the recent paper by Otelbaev, since in the abstract part the latter contains an extra condition on the main linear operator (the Laplacian, for the NS), a condition that is not present in your paper.
The condition requires that the lowest eigenvalue be isolated. This is correct for the periodic problem, but wrong for the averaged NS in the whole space. Moreover, it is not that easy to see how your construction can be adapted to the periodic case, since it uses averaging over rotations, which is prevented by the geometry of the torus.
12 February, 2014 at 9:07 am
Terence Tao
My paper is set in the non-periodic setting for technical convenience, but one can transfer from the non-periodic setting to the periodic setting in a number of ways. For instance, in this previous paper of mine I showed that global regularity for the homogeneous non-periodic Navier-Stokes problem follows from global regularity for the inhomogeneous periodic Navier-Stokes problem with a forcing term, and so any obstruction to solving the former problem also gives an obstruction to the latter. Alternatively, one can take the local cascade operators in my current paper and adapt them to the periodic setting by throwing away all negative frequency scales $n < 0$ (which were never actually excited by the local cascade evolution in any event) and restricting the frequency variable to be integer. The resulting periodic equation has essentially the same dynamics as the non-periodic cascade equation; it is no longer an average of the periodic Navier-Stokes equation (since, as you say, rotations are no longer directly available on the torus), but the periodic local cascade operator still obeys essentially the same estimates as the periodic Euler operator, because the non-periodic version of the former is still an average of the non-periodic version of the latter, and essentially all periodic estimates one uses on these operators can be derived from their non-periodic counterparts (together with estimates that exploit the compactness of the domain, e.g. Holder’s inequality).
12 February, 2014 at 8:56 am
friend48
the lowest eigenvalue…..
I meant, of course, the lowest point of the spectrum.
15 February, 2014 at 12:33 am
Anonymous
Otelbaev attempts to solve *a* NS problem which demands additional symmetry (via periodic BC’s) and his flow is assumed to satisfy further integral constraints. He deals with a pde system (with specific IC/BC’s) which has an essentially DIFFERENT formulation compared to the present periodic settings. We cannot say that his proof is free from inconsistencies and he breaks new ground. The point is: implications concluded from a NS-irrelevant pde throw little new light on how to overcome the NS difficulties.
14 February, 2014 at 3:48 am
jussilindgren
What about the following argument:
It seems to me that the following geometric argument is sufficient to ensure smooth solutions:
Given that the question of regularity depends on the behaviour of the volume integral (over the whole space) of the following scalar product:
By utilising the scalar triple product, one can easily see that this term depends only on the mutually perpendicular parts of the velocity, vorticity and curl of vorticity fields (as swapping the order of the terms using the scalar triple product always kills the parallel part in the cross product). Then we can reduce the question of regularity to such fields where these three fields are mutually perpendicular. But on the other hand using the divergence theorem, it is clear that the enstrophy is identically the volume integral of
as the component which is perpendicular to the curl of vorticity is killed by the scalar product.
So, in other words, for such perpendicular fields the enstrophy is identically zero and so the solution is regular. This means that the solutions must be regular for all fields, as adding non-perpendicular parts to the fields does not change the critical integral.
The whole story is at my blog: http://navierstokesregularity.wordpress.com/
14 February, 2014 at 10:04 am
friend48
jussilindgren:What about the following argument:……….
Wouldn’t it be polite and professional to start your post by admitting that, yes, your latest ‘proof’ is wrong, and that what you are writing now is not a continuation of that latest one but something completely new.
14 February, 2014 at 1:20 pm
Anonymous
indeed, I fully agree with you friend48. Sorry. I don’t care about the millions, I just care about whether we could establish some certainty. It would be fantastic if we knew the limits of computers.
15 February, 2014 at 10:43 am
friend48
jussilindgren:”Then we can reduce the question of regularity into such fields where these three fields are mutually perpendicular. ”
——————–
This statement is not proved. You should have described the process of your reduction. You did not do this; moreover, you cannot do this.
24 February, 2014 at 4:31 pm
More Quick Links | Not Even Wrong
[…] Tao has some new ideas about the Navier-Stokes equation. See his blog here, a paper here, and a story by Erica Klarreich at Quanta […]
24 February, 2014 at 7:13 pm
weather_or_not
So does your work have any bearing on long-term climate predictions?
25 February, 2014 at 8:43 am
none
I’d say this has no implications at all since it’s about transferring energy to finer and finer scales without limit. The NS equations for air or water (i.e. in physics) break down at the molecular scale.
25 February, 2014 at 12:08 pm
Conserved quantities for the Euler equations | What's new
[…] Euler equations are the inviscid limit of the Navier-Stokes equations; as discussed in my previous post, one potential route to establishing finite time blowup for the latter equations when is to be […]
1 March, 2014 at 3:04 pm
Navier Stokes looks like its gonna blow « Pink Iguana
[…] Tao has some new ideas about the Navier-Stokes equation. See his blog here, a paper here, and a story by Erica Klarreich at […]
2 March, 2014 at 3:01 pm
La mystérieuse équation de Navier-Stokes | Science étonnante
[…] Un autre billet de Terry Tao qui semble doucher les espoirs kazakhs […]
2 March, 2014 at 3:27 pm
Shtetl-Optimized » Blog Archive » Recent papers by Susskind and Tao illustrate the long reach of computation
[…] see that, in blog comments here and here, Tao says that the crucial difference between the 2- and 3-dimensional Navier-Stokes […]
4 March, 2014 at 1:01 am
Anonymous
Prof. Tao, do you think it’s feasible and interesting, for someone to do a numerical simulation of the blowup in this result, making some nice pictures, or is there just too much “stuff” going on for simulation to be practical? Thanks.
4 March, 2014 at 10:23 am
Terence Tao
The five-dimensional ODE in Section 5.5 of my paper, which models a single stage of the energy transition process, should be solvable numerically for reasonable choices of the parameters; the dynamics should then look like the schematic depicted in Figure 6 of my paper. One could then try to chain several of these ODE together to give a system similar to (6.3)-(6.6), which describes the full blowup dynamics.
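(For readers who want to experiment numerically, here is a minimal sketch along these lines. It does not implement the circuit system from the paper; instead it integrates a finite truncation of the simpler Katz-Pavlovic-type cascade (3) from the post above, with illustrative parameter choices, just to show how such a simulation could be set up.)

import numpy as np
from scipy.integrate import solve_ivp

# Finite truncation of the cascade ODE (3): modes X_1,...,X_N, lambda = 1 + eps0,
# with X_0 = X_{N+1} = 0, so the total energy is non-increasing.
eps0 = 1.0            # "dyadic" case lambda = 2 (illustrative choice)
lam = 1.0 + eps0
N = 8                 # number of retained modes (illustrative)
nu = 1.0              # normalised viscosity

def rhs(t, X):
    dX = np.zeros(N)
    for i in range(N):
        n = i + 1     # mode index as in (3)
        dX[i] = -nu * lam ** (2 * n) * X[i]                    # dissipation
        if i > 0:
            dX[i] += lam ** (2.5 * (n - 1)) * X[i - 1] ** 2    # energy arriving from mode n-1
        if i < N - 1:
            dX[i] -= lam ** (2.5 * n) * X[i] * X[i + 1]        # energy leaving towards mode n+1
    return dX

X0 = np.zeros(N)
X0[0] = 5.0           # dump some energy into the coarsest retained mode
sol = solve_ivp(rhs, (0.0, 1.0), X0, method="LSODA")  # stiff solver for the large coefficients
energy = 0.5 * np.sum(sol.y ** 2, axis=0)
print("initial energy:", energy[0], "final energy:", energy[-1])  # should be non-increasing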
6 March, 2014 at 9:07 am
arch1
Maybe I’m reading too much into the “fluid computer” analogy, but-
In order to programmably self-replicate, it seems that the Navier-Stokes computer’s hardware would need to support not only computation, but also at least some minimal fabrication primitives. If so, would the latter capability somehow come for free with Navier-Stokes based computation (which I understand is itself still just a glimmer), or must it be explicitly designed in?
6 March, 2014 at 9:18 am
Terence Tao
Yes, this will have to be done also; strictly speaking one needs a “universal constructor” in addition to a “universal computer” in order to build a self-replicating machine by this route. But given that the computer will literally be (fluid) mechanical in nature, I expect the “fabrication primitives” to be not so different actually from the “logic gate primitives” that are also needed, and if one can engineer the latter then it is reasonable to expect that one can also engineer the former. (This is not to say that the task is trivial: after all, we still have not engineered a true von Neumann machine in the real world, despite having more or less perfected the art of computation. On the other hand, in Conway’s game of life, my understanding is that the tasks of building a universal computer and of building a universal constructor, while technically different, were of comparable levels of difficulty.)
6 March, 2014 at 12:29 pm
arch1
I see, thanks.
11 March, 2014 at 8:06 pm
arch1
Do physics-of-computation results such as Landauer’s energy minimum for bit erasure have any relevance to the potential N-S blowup scenario sketched here?
12 March, 2014 at 7:02 am
Terence Tao
Perhaps not directly, but the second law of thermodynamics is certainly a concern, as it would suggest that the total disorder present in a fluid machine will tend to increase over time. One way to ameliorate this is to try to rely on reversible computing, but I think the more promising route is simply to accept the increase in total disorder, and try instead to reduce local disorder, so that a von Neumann machine can create a near-perfect replica of itself at a smaller scale (and with smaller energy and, hopefully, smaller entropy too), at the cost of leaving behind a discarded and highly disordered remnant of the original machine that is absorbing the entropy (but which has been moved sufficiently far from the replica in either physical space or frequency space so as not to disrupt the remaining dynamics).
Note also that other model equations in physics (e.g. focusing nonlinear Schrodinger equation) can exhibit stable self-similar blowup solutions in finite time, so finite time blowup is not intrinsically in contradiction to the second law of thermodynamics (there is some small radiation dispersing away from the bulk of the self-similar solution which is basically carrying the disorder in the system).
15 March, 2014 at 7:58 am
Could We Have Felt Evidence For SDP ≠ P? | Gödel's Lost Letter and P=NP
[…] the latter, Terry Tao’s recent breakthrough on the Navier-Stokes equations is an example of how much the same ideas keep recirculating, and how […]
10 April, 2014 at 10:14 am
Multiple-Credit Tests | Gödel's Lost Letter and P=NP
[…] Tree” is Terence Tao, and our extrapolation of his new result on Navier-Stokes is a flight of […]
15 April, 2014 at 2:12 am
JOE
I think this is probably the wrong place to put this but I read your previous article ”Does one have to be a genius to do maths?” and I think I must be ailing from one of the conditions you spoke of. Now where does the Navier-Stokes problem come in; well I threw much of myself into tackling this one for 2 years now. I am no professional mathematician but I believe I may have cracked something; like I have found the main basis of boundedness in the NSE; smoothness isn’t that hard to achieve afterwards.
I wish to publish these findings but I am discouraged by the fact that as an amateur I will be taking down an icon in an already protective scientific community & I don’t mean Prof. Otelbaev. Having gone through Otelbaev’s own work I have found an error consistent with other people’s attempts at attacking the NSE.
Dear Dr Tao, I love your new way of looking at this problem and something interesting hit me! Forgive me for suggesting this but would you mind looking into whether you could fit this idea into galactic clusters, I thought Gamma ray bursts but it would be interesting to see what you might come up with. By the way, can you give me in your professional capacity advice on what to do?
Thanks.
16 April, 2014 at 10:03 pm
Gil Kalai
Dear Terry, following is a question about another (rather natural) possible direction from your philosophy but this time towards a positive solution rather than a negative solution for the NS question. Namely, perhaps a key for showing that finite-time blow up is not possible would be by showing (an apparently weaker statement) that the NS/Euler equations do not support “deep” computation. You drew the analogy with quantum computing (and fault-tolerance) and indeed in this area there are several results which assert that under certain conditions “deep computation” is not available. Limitation on computation immediately leads to a large number of conserved quantities, which in the quantum case are sometimes referred to as “phases of matter.” (Before, I proposed to impose such limitation on computation and to derive conservation laws on top of the equations in order to explain why deep computation is absent from fluids in nature, but it is a possibility that limitation on deep computation can be derived from the equation itself.) On the technical level (on the quantum side) results of this kind usually refer to “gapped systems” (so some spectral gap is assumed) and a prominent technical tool that is used (with much success in the last decade) is the Lieb-Robinson inequality. (See, e.g. this review paper.) I don’t know if spectral gaps and Lieb-Robinson have some analogs for the NS/Euler evolutions. Anyway, this is a direction worth noting.
17 April, 2014 at 9:22 am
Terence Tao
My personal feeling is that because the Navier-Stokes/Euler equations are nonlinear rather than linear, there will be far fewer barriers to computation than in the quantum setting, and that one should be able to almost freely traverse the “energy surface” of state space coming from the various conserved quantities of the Euler equations (energy, momentum, angular momentum, circulation, helicity). (Well, I’m not 100% sure about circulation, because this is essentially a pointwise conservation law (conservation of the vorticity 2-form) and could potentially be a serious barrier to how freely the state can evolve, but the other conservation laws at least do not seem to constrain the dynamics to the extent that they preclude computation.)
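(For concreteness, and as a standard aside: the conserved quantities referred to here are, for the Euler equations, the energy $\frac{1}{2}\int |u|^2$, the momentum $\int u$, the angular momentum $\int x \times u$, the helicity $\int u \cdot \omega$, and the circulation $\oint_\gamma u \cdot d\ell$ around closed loops $\gamma$ transported by the flow.)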
28 June, 2014 at 12:40 pm
Gil Kalai
It seems (but I am not sure about it at all) that perhaps the crux of the matter for supporting a von Neumann machine is to show that NS supports computation of arbitrary depth. (Namely that there is no absolute bound on the depth of the computation independent of the number of bits you can use, and that you do not need arbitrarily long computation with a fixed number of n bits). If this is the case this is good news in the sense that the mathematical distinction between bounded depth computation and computation beyond bounded depth is much more definite and clear compared to other computational complexity issues. So a crucial question is if you can implement with a NS-machine something like the “majority function.” (If NS supports only bounded depth computation (is it the case for D=2?), I don’t know if this has direct consequences on the regularity conjectures but at least it will imply (by a theorem of Green) Mobius randomness. :) )
26 April, 2014 at 7:28 am
Viktor Ivanov
Global regularity is proven in my paper A SOLUTION OF THE 3D NAVIER-STOKES PROBLEM, published in Int. Journal of Pure and Applied Mathematics, Vol. 91 No. 3, 2014, 321-328.
28 May, 2014 at 8:05 pm
Baiamos
But what about Otelbaev’s proof?
25 June, 2014 at 1:03 am
So what happened to the abc conjecture and Navier-Stokes? | The Aperiodical
[…] the arXiv, which states some limits on what a solution to Navier-Stokes can look like, but comments on his blog post about the paper say that it doesn’t rule out Otelbaev’s […]
22 August, 2014 at 4:44 am
Un ordinateur liquide | www.Affectueusement.Biz
[…] http://terrytao.wordpress.com/2014/02/04/finite-time-blowup-for-an-averaged-three-dimensional-navier… (dernier paragraphe) […]
29 August, 2014 at 2:21 pm
Links For February - My blog
[…] Tao, whom I like to admire from afar, has posted what is maybe a takedown of Otelbaev’s claimed proof of Navier-Stokes, but the best part? […]
30 August, 2014 at 1:12 am
The Other Clay Maths Problem | Ajit Jadhav's Weblog
[…] That, indeed, turns out to have been the actual case. Terry Tao didn’t directly tackle the Clay Maths problem itself. See the Simons Foundation’s original coverage here [^], or the San Francisco-based Scientific American’s copy-paste job, here [^]. What Terry instead did is to pose a similar, and related, problem, and then solved it [^]. […]
18 September, 2014 at 10:03 am
Fred Chapman (@fwchapman)
Prof. Tao, I enjoyed your lecture on this topic at Lehigh University last week. I have a follow-up question.
I am intrigued by your idea of building a Turing-complete computer out of water. If you can do this, it would be of considerable interest in its own right!
Are you familiar with Stephen Wolfram’s work on universal Turing machines in his “New Kind of Science” (NKS) research initiative? Could there be some useful connections between various NKS representations of universal Turing machines and the machine you want to build using fluid dynamics?
In 2007, Wolfram awarded a prize to Alex Smith for proving that a particular Turing machine with 2 states and 3 symbols (colors) is universal. This may be the simplest universal Turing machine that exists. Here’s more info:
http://www.wolframscience.com/prizes/tm23/
http://en.wikipedia.org/wiki/Wolfram's_2-state_3-symbol_Turing_machine
Wishing you success,
Fred Chapman
Bethlehem, PA
18 September, 2014 at 11:05 am
Fred Chapman (@fwchapman)
P.S. When Wolfram was at the Institute for Advanced Study in 1983-1986, he simulated physical processes like turbulent fluid flow using cellular automata. Would it be fruitful to collaborate with Wolfram on your approach to Navier-Stokes via fluid-mechanical universal Turing machines?
http://en.wikipedia.org/wiki/Stephen_Wolfram#Complex_systems_and_cellular_automata