Dark matter filaments. Computer simulation. [Image: John Dubinski (U of Toronto)]
General relativity does not tell us what is going on.
Physicists have attributed these puzzling observations to two newly postulated substances: Dark matter and dark energy. These two names are merely placeholders in Einstein’s original equations; their sole purpose is to remove the mismatch between prediction and observation.
This is not a new story. We have had evidence for dark matter since the 1930s, and dark energy was already on the radar in the 1990s. Both have since occupied thousands of physicists with attempts to explain just what we are dealing with: Is dark matter a particle, and if so what type, and how can we measure it? If it is not a particle, then what do we change about general relativity to fix the discrepancy with measurements? Is dark energy maybe a new type of field? Is it, too, made of some particle? Does dark matter have something to do with dark energy, or are the two unrelated?
To answer these questions, hundreds of hypotheses have been proposed, conferences have been held, careers have been made – but here we are, in 2019, and we still don’t know.
Bad enough, you may say, but the thing that really keeps me up at night is this: Maybe all these thousands of physicists are simply using the wrong equations. I don’t mean that general relativity needs to be modified. I mean that we incorrectly use the equations of general relativity to begin with.
The issue is this. General relativity relates the curvature of space and time to the sources of matter and energy. Put in a distribution of matter and energy at any one moment of time, and the equations tell you what space and time do in response, and how the matter must move according to this response.
But general relativity is a non-linear theory. This means, loosely speaking, that gravity gravitates. More concretely, it means that if you have two solutions to the equations and you take their sum, this sum will not also be a solution.
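A toy way to see what that means in practice: below is a minimal sketch (Python with NumPy; the operator E is made up purely for illustration and has nothing to do with the actual Einstein equations) showing that a non-linear operator applied to the sum of two functions is not the sum of the operator applied to each.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 201)

# Toy non-linear operator (NOT the Einstein equations): E[u] = u'' + u^2
def E(u):
    return np.gradient(np.gradient(u, x), x) + u**2

u1 = np.sin(np.pi * x)
u2 = np.cos(np.pi * x)

# For a linear operator this difference would be zero everywhere.
print(np.max(np.abs(E(u1 + u2) - (E(u1) + E(u2)))))  # ~1.0, not 0
```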
Now, what we do when we want to explain what a galaxy does, or a galaxy cluster, or even the whole universe, is not to plug the matter and energy of every single planet and star into the equations. This would be computationally unfeasible. Instead, we use an average of matter and energy, and use that as the source for gravity.
Needless to say, taking an average on one side of the equation requires that you also take an average on the other side. But since the gravitational part is non-linear, this will not give you the same equations that we use for the solar system: The average of a function of a variable is not the same as the function of the average of the variable. We know it’s not. But whenever we use general relativity on large scales, we assume that this is the case.
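The same caveat applies to averaging. Here is a minimal sketch, again with a toy quadratic standing in for the non-linear part of the field equations: average a lumpy distribution first and then apply the function, or apply the function first and then average, and you get two different numbers.

```python
import numpy as np

# Toy stand-in for a non-linear operation (not the actual field equations).
def f(x):
    return x**2

# A "lumpy" distribution: mostly empty space with one dense spot.
rho = np.array([0.0, 0.0, 0.0, 0.0, 10.0])

print(f(rho.mean()))  # function of the average:  4.0
print(f(rho).mean())  # average of the function: 20.0
```

The gap between those two numbers is the kind of correction term that gets dropped when one averages the matter distribution first and then applies the usual equations.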
So, we know that strictly speaking the equations we use are wrong. The big question is, then, just how wrong are they?
Nosy students who ask this question are usually told these equations are not very wrong and are good to use. The argument goes that the difference between the equation we use and the equation we should use is negligible because gravity is weak in all these cases.
But if you look at the literature somewhat more closely, you find that this argument has been questioned. And these questions have been questioned. And the questioning questions have been questioned. And the debate has remained unsettled until today.
That it is difficult to average non-linear equations is of course not a problem specific to cosmology. It’s a difficulty that condensed matter physicists have to deal with all the time, and it’s a major headache also for climate scientists. These scientists have a variety of techniques to derive the correct equations, but unfortunately the known methods do not easily carry over to general relativity because they do not respect the symmetries of Einstein’s theory.
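For a feel of what those techniques are up against, here is a hedged sketch of the closure problem as it shows up in fluid dynamics (toy numbers in Python, not a real turbulence or climate model): evaluate a quadratic term once from the true, fluctuating field and once from its smoothed version; the difference is a correction term that the averaged equations then have to model separately.

```python
import numpy as np

rng = np.random.default_rng(0)

x = np.linspace(0.0, 2.0 * np.pi, 2000)
u_smooth = np.sin(x)                                    # large-scale, averaged field
u_true = u_smooth + 0.3 * rng.standard_normal(x.size)   # true field with small-scale fluctuations

exact = np.mean(u_true**2)     # quadratic term from the true, lumpy field
naive = np.mean(u_smooth**2)   # same term computed from the smoothed field

print(exact, naive, exact - naive)  # the difference is the missing "closure" term
```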
It’s admittedly an unsexy research topic. It’s technical and tedious and most physicists ignore it. And so, while there are thousands of physicists who simply assume that the correction-terms from averaging are negligible, there are merely two dozen or so people trying to make sure that this assumption is actually correct.
Given how much brain-power physicists have spent on trying to figure out what dark matter and dark energy are, I think it would be a good idea to definitely settle the question of whether they are anything at all. At the very least, I would sleep better.
Further reading: Does the growth of structure affect our dynamical models of the universe? The averaging, backreaction and fitting problems in cosmology, by Chris Clarkson, George Ellis, Julien Larena, and Obinna Umeh. Rept. Prog. Phys. 74 (2011) 112901, arXiv:1109.2314 [astro-ph.CO].
Definitely this is something I face in mathematical biology/biophysics - all of the 'microscopic' interactions are highly nonlinear, but the field is dominated by phenomenological models which correspond to averages in some sense, and a huge amount of these complexities are lost. Surprisingly this seems to work in some cases, but in a lot of cases it doesn't, and unpicking this is something I've been trying to look into quite a bit lately.
In General Relativity, have people looked at "large and complex" toy Universes which are still tractable to simulate but have at least a bit more complexity to them than single objects in isolation? Could we learn how big these errors in averaging can be, at least, just by simulating sufficiently large/complex matter distributions?
Andrew,
In principle you can approach the problem numerically. I know that there have been a few small studies but with inconclusive results. I mean, this wouldn't actually solve the problem in the sense that we wouldn't know how to analytically write down the average, but we would know how large the correction terms are. Numerical GR is tough, though, and it's computationally intensive. It's not something that anyone is going to do on the side just for the fun of it.
@Sabine: Yes, this was my thought too: take a three-body system of some sort where averaging can be applied, and see if there is a discrepancy in the outcomes between treating the bodies individually versus as three pairs of two bodies. Or grid a galaxy with circular and radial dividers, like a dart-throwing target, put a mass in each cell similar to our galaxy, and see what happens.
If there is going to be a discrepancy, I'd imagine it shows up in a relatively small number of masses and fairly quickly. Just numerically proving there is a discrepancy would be a breakthrough, just as a single counter-example to an assumption kills it.
Then somebody can get time and money to see if dark energy and/or dark matter might just evaporate completely.
Wouldn't a quantum computer be able to compute this, say in 10-20 years, and settle the issue? Or is that mathematically naive?
Sabine wrote:
“That it is difficult to average non-linear equations is of course not a problem specific to cosmology. It’s a difficulty that condensed matter physicists have to deal with all the time, and it’s a major headache also for climate scientists. These scientists have a variety of techniques to derive the correct equations, but unfortunately the known methods do not easily carry over to general relativity because they do not respect the symmetries of Einstein’s theory.”
There is another field of science that deals with non-linear equations: solving the time evolution of a binary black hole merger depends on the non-linear terms of GR. There are many researchers who study this type of problem.
I don’t know if any of them are interested in cosmology, but they might be the most equipped to tackle such questions.
@Kelly: as I understand it, a quantum computer (QC) offers no particular advantages over a classical one, when it comes to General Relativity. This is quite different than for (many) condensed matter physics problems: by its very nature, a QC can (in principle) simulate any quantum system, such as protein folding, topological superconductors, and how metallic hydrogen forms.
Are these nonlinearities characteristic of dimension 4 or could one do a simulation in lower dimensions?
Markus,
The non-linearities themselves are not specific to the number of dimensions, but their scale-dependence probably is (at least that's the case in other systems).
Intriguing!
Copy Edit suggestions:
dark energy came on the radar already in the 1990. ->
dark energy was already on the radar in the 1990s.
Nosy students who ask this question get usually told ->
Nosy students who ask this question are usually told
usually told these equations are not very wrong and good to use. ->
usually told these equations are not very wrong and are good to use.
(are not XXX) would apply to both (very wrong) and (good to use); thus "are not good to use".
As always, many thanks :)
these equations are not very wrong and good to use.
= these equations are not very wrong and are good to use.
The original sentence is fine because "not very wrong" as a whole has a positive meaning as does "good to use", so "not" doesn't get carried over to "good to use", only "are".
An example with 2 negative qualities: "this correction is not correct and patronising" doesn't imply that it is not patronising.
One more observed, and predicted, phenomenon (applies to Dark Matter only, as far as I know): gravitational lensing. I think this is particularly interesting because, unlike disk galaxy (not ellipticals) rotation curves or galaxy motions in clusters, it has been observed for single compact objects (stars), binaries, and galaxy nuclei and bulges. On top of this, there's both position and time data from the observations.
What if "gravitational lensing" is nothing more than light refraction in the denser media around big masses? Light refraction is a well-known phenomenon. Around masses there is surely more matter in space than between the galaxies. So, who has ever calculated the light refraction effect of denser media in space? What about the light refraction effect in the solar corona? Why are there no calculations of that well-known effect anywhere, while everyone talks about the purely speculative "gravitational lensing"?
We already know - and have known for a long time - that "light refraction" is inconsistent with the relevant experimental results and observations. For example, refraction is chromatic (i.e. its extent depends on the wavelength of the light) but gravitational lensing is achromatic. And astronomical observations are very clear on this: gravitational lensing is achromatic.
Better: you don't have to take my - or anyone else's - word for this, you can do your own, independent verification. Start by downloading relevant u' and z' band FITS from SDSS (they're publicly available, and free), or similar from DECaLS. Then do your own analyses, to confirm that the lensing is achromatic.
JeanTate, 10:16 AM, October 19, 2019
We know that light refraction exists! We know that big masses normally collect matter in their surroundings, e.g. in the form of an atmosphere. We are sure that the density of matter in deep space is on average lower than, e.g., between the stars within galaxies. We know that this medium has to cause light refraction!
So why are there no calculations of this effect (concerning the optical effect of the solar corona, but also of far-away massive objects in space)?
The effect must be there, because we are absolutely sure about that effect - very much more sure than about the effect of "gravitational lensing". We can see the effect of refraction if we look into a pond. That effect is well known and well understood, and it has to be there.
So why doesn't anybody calculate that effect?
The universe is a space with areas of more and of less matter density, depending on the local conditions. So there HAS TO BE light refraction. The question is only: how big is that effect?
Has anybody proved that the effect has no measurable consequences?
The alleged fact that observations don't detect chromatic behavior of bent light is no argument against light refraction - since light refraction has to be there - with more or less effect!
@weristdas: At one level, your questions are easily answered (the effect is faaaar too small to detect); at another level, they are good questions and deserve good answers. Unfortunately, the comments in this blog are a very poor venue for such. I have created a thread in the ISF, Science section, Excellent discussion of unusual alternative to Dark Matter, where I'd like to walk through some answers (and answer any further questions you may have).
JeanTate, 5:01 PM, October 28, 2019
I can't imagine that the light refraction effect of the sun's corona is "faaaar" too small to detect, because the effect increases the closer the light passes to the sun, and the density of the medium increases accordingly. So there are areas where the effect has to be strong, yes, "verrrry" strong.
Unfortunately I have no clue what the ISF is, or whether and how I can participate in the discussion.
@weristdas: Sorry, the International Skeptics Forum, in the Science, Mathematics, Medicine, and Technology section.
I find refraction by the Sun's corona interesting, and am happy to discuss it. Just not here, in Comments on a particular blogpost in Bee's blog ...
The "International Skeptics Forum" does not seem to be of interest to me.
I have had a look at it.
OK. Likely my last comment on this, here: what is the density of the corona (it varies, with distance from the Sun's chromosphere)? How much refraction would you expect in a beam of "light" going through the corona? How does the expected refraction vary with wavelength, from radio (1.4 GHz, say) to x-ray (5 ev, say)?
If refraction is expected everywhere, what is its approximate magnitude for the Moon's "atmosphere"? How dense is the Moon's atmosphere, close to its surface, compared with the average density of the interstellar medium (in an ordinary elliptical galaxy, say)?
Oops! "5 ev" should be "5 keV" :(
Sabine,
could MOND, or MOND-like effects such as Tully-Fisher and Stacy McGaugh's RAR, simply be the result of the difficulty of averaging GR's non-linear equations?
Also, does this discussion apply to dark matter and galaxy rotation, to dark energy, or to both?
Sabine --
"It's not something that anyone is going to do on the side just for the fun of it."
Why isn't a large scale simulation important enough to launch a significant effort? Is it really infeasible, or just hard? Could it be crowd-computed? Way cheaper than another telescope, and no indigenous people to object ... Isn't the cost/benefit of a project like this right up your alley?
Why isn't a large scale simulation important enough to launch a significant effort?: I don't know.
Is it really infeasible, or just hard?: It's certainly hard; it may be infeasible.
Could it be crowd-computed?: Yes. The work to set such a thing up would be very considerable.
Way cheaper than another telescope, ...: Some simulations/calculations, certainly. For others of astrophysical and cosmological interest, perhaps not.
Do you know of any somewhat reasonable situation where the nonlinear averaging error can look like a cosmological constant? Given that most of the time, we can even get away with treating gravity as Newtonian, I would be very surprised if at those scales the averaging error has any significance. Even worse, often enough, we even use the formula for spherically symmetric mass distributions (pretending all the inner mass sits in the center).
"Do you know of any somewhat reasonable situation where the nonlinear averaging error can look like a cosmological constant?"
Good point. There have been numerous attempts to try to explain away the interpretation of observations involving the cosmological constant through large-scale inhomogeneities, either gravitational lensing (in a very general sense) screwing up the apparent-magnitude--redshift relation, or backreaction (i.e. inhomogeneities influencing the expansion, perhaps even causing acceleration, at least as seen from our point of view).
Possible? Yes. Plausible? Not at all. Why? Because we are asked to believe that, of all the things such effects could do, they just happen to deliver us observations which are perfectly understandable in terms of 1920s cosmology. Not only that, but the parameters so derived agree with those from other tests (local determinations of density, CMB, etc.)
In short, I'll believe it when I see it. The burden of proof is on the one who makes the claim.
The only way to possibly check would be to run a massive computer simulation which actually tracks each and every star in a galaxy and ....sees what general relativity alone would predict.
The danger of any such simulation though is that some subtle bias could get coded into it. Since "everyone" would expect the result to be that "of course DM and DE exist and are necessary...you crank." Which also leads to the danger that if a carefully done program to computationally test this idea found dark matter was fictitious it would never pass peer review.
Maybe in another 20-30 years when all the dark matter searches have finally, firmly, and unequivocally failed to find anything that could be the dark matter we need.
@Hontas
Painful to read: "if a carefully... would never pass peer review"
If that really is the problem then 20, 30 years of patience won't help
"The only way to possibly check would be to run a massive computer simulation which actually tracks each and every star in a galaxy and ....sees what general relativity alone would predict."
Actually this is no more true in GR than it is in, say, modeling the airflow over a wing. You don't need to know about millimeter-scale convection cells to make an Airbus. You just need a smeared-out matter distribution of appropriate density, some initial and boundary conditions on its state of rotation, etc. For the purposes of modeling an entire galaxy, you can treat the individual stars as dust.
-drl
Wrong on all counts:
"The only way to possibly check would be to run a massive computer simulation which actually tracks each and every star in a galaxy and ....sees what general relativity alone would predict."
This will probably always be impossible. I'm reminded of the map in Sylvie and Bruno (look it up) which was very accurate because of its scale of 1:1. But it was so cumbersome that people just used the actual Earth instead of the map.
"The danger of any such simulation though is that some subtle bias could get coded into it. Since "everyone" would expect the result to be that "of course DM and DE exist and are necessary...you crank." Which also leads to the danger that if a carefully done program to computationally test this idea found dark matter was fictitious it would never pass peer review."
On what grounds do you claim that it would never pass peer review? What other conspiracy theories do you believe in?
"Maybe in another 20-30 years when all the dark matter searches have finally, firmly, and unequivocally failed to find anything that could be the dark matter we need."
That is not how science works. Absence of evidence is not evidence of absence. There is a reason why, in sensible courts, one is never asked to prove someone's innocence, as opposed to someone's guilt.
I like this answer so much better than mine.
The nit I used to pick was no dark energy because standard candles aren't, um... as standard as advertised (THEY AREN'T). But saying so is picking a fight with the cosmic distance ladder, ya-know the floor that the entire field sits on, so I just pissed off who ever I'd talk to about it because people outside the field don't get far enough in the conversation to hear that part.
I'll admit having the personal weakness of buying-into dark matter because it's supported by three independent types of measurements:
1.) galactic spin rates,
2.) gravitational lensing maps through galaxies showing a lot of non-point-like matter (read: matter not in stars),
3.) something tricky about the size of temperature fluctuations in the CMB that I'd have to look up again to explain
But I never bought the dark energy thing I still think that's a "type 1a supernova" aren't all created equal problem.
Anyway, here's to hoping I'm wrong and Sabine's right, as her proposal is cognitively tractable to the relevant research community whereas mine is decidedly not (yes, some cool folk have looked into the Type 1a thing and written papers, but no one seems to be reading them...). Someone should throw a D-wave at Sabine's problem and tell us the answer! This is a good thing to work on!!!
Also, I still like primordial black holes for dark matter, because all that early-universe conjecture about it not being possible is too inflation-adjacent for my taste. I really hope the planet-nine black hole pans out, because that will force the issue.
+50 Internets to Sabine
Have you heard of the tired light theory? We know that there is a relationship of redshift with distance, but we can't tell whether the redshifted light is the result of relative velocity or of other causes, such as scattering from the inter-galactic medium. IMO the picture the scientists present of the early universe just doesn't add up. There are too many inconsistencies and discrepancies, and they keep finding galaxies that are older than they should be. And don't get me started on inflation.
Perhaps the universe is large and relatively static, rather than expanding and originating with the big bang.
I have heard of tired light, but last I read it has already been ruled out by the, let's call it, "sharpness" of distant objects (scattering off of inter-galactic hydrogen can only downshift photons by, well, scattering them, which would "fuzzify"(tm) the images).
I do like the Big Bang as an explanation for the observed redshift and the CMB and the arrow of time, but inflation is a bridge too far for me; mostly because it explains a parts-per-billion (or is it trillion?) temperature fluctuation at the cost of destabilizing causality, which in turn can break conservation of energy... not to mention all of the bolt-ons that inflation forces you to consider to make a nonsense idea work. Something something epicycles, doesn't pass the smell test, bla bla bla.
The short answer is I'm an engineer and essentially uninformed, so when a person who has actually worked on the problem(s) calls out an assumption that was used to make the math tractable in earlier, less computationally abundant times: call me interested!
"WHAT'S THAT, SCIENCE? All you need is more flops? WE CAN DO THAT!"
@Jonathan Starr: you forgot 4) motion of galaxies (and plasma) in galaxy clusters.
Also, Dark Energy isn't based solely on the distance ladder (which has a lot more in it than 1a SNe); there's BAO (baryon acoustic oscillations), and the CMB. And there are some promising distance ladder independent methods, in ~the same redshift range as 1a's (e.g. standard sirens, lensing timing); not enough data yet to make a difference though.
Why do you think a D-wave (or anything like it) would be helpful?
The size-density space for primordial black holes is - as I understand it - already far too small for them to be anything more than a minor component of dark matter.
And yes, "tired light" ideas have been shown to be inconsistent with observation, not only because of a lack of "fuzziness" but also because the observed redshifts are achromatic (no known scattering process is achromatic).
@JeanTate: Good catch, I didn't know about that one. But it seems obvious when you mention it. I will need to see if anyone's looked into it deeply!
So the BAO thing is interesting; I had not seen that before, thank you. I might need to re-examine my position on dark energy.
D-wave and all the other "exploit a quantum correlation" machines equate to sci-fi computers to me. I am not sophisticated enough to be able to understand the bounds of what they can do, but by most accounts, whatever the limits are, they are higher than any other machine types we have.
I am very interested in a good round-up of all the various studies that constrain the size-density space for primordial black holes. Wiki has many citations but they are not collated in a way that I have time to examine. I'm just not going to read 10,000 pages on it; but if there were one paper that sums them all up... I'd read that.
@Jonathan Starr: quantum computers (QCs), of which D-wave is one, have only recently demonstrated "quantum supremacy" (i.e. proof that they can solve certain problems far faster than any classical computer could), but for only one, very technical problem, and not without a long list of caveats, loopholes, etc. Generally, QCs will only ever be able to do "better" for a limited class of problems, such as cracking passwords in most current public key encryption schemes. And even that is still at least a decade away. As far as I know, no one has shown that QCs could - even in principle - do GR calculations faster than any of today's supercomputers could.
I saw a nice summary of the current "state of play" on black holes as dark matter (primordial or not); I'll see if I can find it (and will post a link if I do).
"The nit I used to pick was no dark energy because standard candles aren't, um... as standard as advertised (THEY AREN'T). But saying so is picking a fight with the cosmic distance ladder, ya-know the floor that the entire field sits on, so I just pissed off who ever I'd talk to about it because people outside the field don't get far enough in the conversation to hear that part."
Who claims that they are standard candles? The claim is that they are standardIZABLE candles.
"But I never bought the dark energy thing I still think that's a "type 1a supernova" aren't all created equal problem."
So you choose to ignore all other evidence as well?
@Jonathan Starr: If you enter "primordial black holes dark matter" (without the quotation marks) into your fave internet search engine, and click on Images, you'll get lots of nice charts which show the limits from various kinds of observations. Of course, you'll need to check the sources to be sure how reliable they are.
Figure 7 in arXiv:1910.01285 gives an example of recent constraints (but only for a limited range of observations).
Hope this helps.
The title should be: What if we are just using the wrong equations?
What you have suggests that we should try using the wrong equations
Norman,
Thanks for the suggestion. I was thinking it kind of sounds wrong, but couldn't quite pin it down.
I think a better title would have been "What if we are just using the equations wrong".
Since you are suggesting we have the right equations (from Einstein), but we don't have the appropriate mathematical methods to solve them for realistic models (i.e. very lumpy on the scales of star systems, galaxies and galactic clusters).
Folks:
Could you please stop submitting comments about your favorite explanations for dark matter or dark energy or your personal theories of something.
I strongly suggest you read the comment rules before wasting your and my time with submissions I will not approve.
That goes for the Germans, too.
Like this one: https://xkcd.com/2216/
Systems of equations, folks. The agents are coming...
ReplyDeleteThere is pretty much an exact analogy with fluid flow. The non-linear Navier-Stokes equations can be linearized by throwing out the viscosity. Such an inviscid fluid has some common properties that are correct - but it behaves nothing like real world fluids. Feynman called it "dry water", a term that actually made me laugh out loud when reading the Lectures way back when. Without the viscosity, you are not dealing with realistic fluid behavior.
You have to take a similar approach in GR as you do in fluid dynamics - to set up a specific problem as a whole, by positing a matter distribution in its entirety with proper initial and boundary conditions (e.g. a smeared out thin disk of rotating gas and stars) and retaining enough non-linearity so that one gets a solution that preserves the essential character of GR, and can be directly compared with the Newtonian solution. This idea seems so simple that it is astonishing that it hasn't been really exploited outside the work of a few people, as far as I know (Fred Cooperstock and his collaborators; alas, Cooperstock passed away last year).
-drl
drl,
Don't know what you are talking about, sorry. There's no parameter in GR that plays the role of the viscosity.
Well, not as such, but there are non-linear terms in the 1st derivatives of the metric that play a very similar role to the viscosity in Navier-Stokes, and have a similar effect on the velocity profile near the boundary. See the work of Cooperstock, the early papers from the 2000s.
The important point to be learned is that *any* viscosity of any strength in Navier-Stokes presents a vastly different scenario than *no* viscosity. There is not a smooth dovetailing into inviscid flow near the boundary. To get real world behavior, you must retain the non-linearity. The expected analog in GR is that the velocity profile near the edge of the galaxy is critically dependent on the non-linear terms coming from the 1st derivatives of g_mn, and this is confirmed.
-drl
drl: If you want an analogy for clarity, any non-linear equation will do; for example $2^x$. $avg(2^1, 2^3, 2^5) = 14 \ne 2^{avg(1,3,5)} = 8$.
Averaging the function applied to a set like $2^{X}, X=[1,3,5]$ does not automatically equal applying the function to the average of the set. In this example, if a large number Y dominates the set X, the discrepancy would be approximately $2^Y/4$ vs $2^{Y/4}$; we could make it as large as we want.
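(A quick check of those toy numbers in Python, for anyone who wants to see it run:)

```python
import numpy as np

X = np.array([1.0, 3.0, 5.0])
print(np.mean(2.0**X), 2.0**np.mean(X))  # 14.0 vs 8.0
```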
How small scales influence large scales is an important problem in other areas of physics, such as fluid and plasma turbulence. Short-scale structures can for instance enhance dissipation. In general, though, their effect is relatively small.
In cosmology, we are talking about a massive effect, like 95% of the total energy content of the universe. In the examples I cited above, it would be unthinkable that some small-scale eddies dominate the dynamics over the larger-scale structures.
We would need some powerful, but yet unknown, mechanism to allow this in GR.
Incidentally, the final sentence of the paper you cite, "this 'averaging problem' is still unanswered, but it cannot be ignored in an era of precision cosmology", suggests that such averaging effects only have a limited impact (because they would need "precision cosmology" to be detected). But DM and DE are right in your face. You do not need any precision measurement to ascertain the effect of DM in a galaxy. Without it the galaxy would just fly apart!
A lot of geometry-based physics consists of more abstract forms of the Gauss-Bonnet theorem. The GB theorem in two dimensions is
χ(M) = (1/2π) ( ∫_M K dS + ∫_∂M k ds ),
for K the Gaussian curvature and k the geodesic curvature of the boundary. Here χ(M) is the Euler characteristic of the space. For general relativity this becomes
χ(M) = (1/32π^2) ∫ d^4x √g ( |Riem|^2 - 4|Ric|^2 + R^2 ) + ∫_∂M d^3x K,
where the last term is a boundary term. This boundary term for gravity is going to be 2-dim gravity that has a 1/r force on the boundary and which has a log potential on that boundary.
Suppose this boundary is removed, but the topological numbers (Betti numbers etc) as quantum numbers are conserved. This might then mean a number of plausible things. If these boundary topological numbers remain gravitational then there might be this odd 2-dim gravity-like force, but in 3 dimensions of space. This is a feature of MOND gravity. These topological numbers might also be mapped into quantum field degrees of freedom. This boundary from an eternal inflationary perspective could be the boundary of a pocket world that is removed, where the pocket world becomes a topologically complete space as a sphere or a Euclidean R^3 space. So these pocket worlds in effect “bubble off” the inflationary spacetime as independent spaces or spacetimes.
This raises some questions though. The Gauss law for a force F = -k/r is such that an integration over a two-sphere of radius R gives ∫F·da = (k/R)·4πR^2 = 4πkR, and so depends on the radius of the sphere. This would mean the mass content that is the source of this force in the sphere is μ = 4πkR. This is an oddity in some way. If this is matter in the usual sense then it has a density that goes as ρ ≈ 1/r^2. If this is a field effect of some sort then the source of this gravitational field seems to be itself some sort of field. I am not sure how to interpret that.
Sabine: You sound open-minded enough to maybe(?) discuss a slight "correctional accounting" fix that I have to the vast majority of current thinking on gravity - it may or may not help with the issues you raise in this post, but it might interest you. Please contact me for details, if you want to.
ReplyDeleteDear Dr. Hossenfelder,
You wrote, "Put in a distribution of matter and energy at any one moment of time, and the equations tell you what space and time do in response, and how the matter must move according to this response."
Is it true that the initial velocities of stars and planets are not relevant (in the above statement) at the time (t=0) in question? I am taking the word distribution to mean simply the locations of masses/energies in space, but not their initial velocities.
As a corollary, and purely out of curiosity, do physicists assume that the contents of galaxies are in some sort of steady-state velocity distribution? Or that at any given moment various stellar and planetary masses may be accelerating or decelerating?
Thank you,
Michael
"do physicists assume that the contents of galaxies are in some sort of steady-state velocity distribution?"
No need to make assumptions, the observed motions of stars and planets are consistent with them being in orbits (generally approximated as elliptical and not necessarily closed), which means that they are always accelerating. Similarly, in galaxy clusters, the motion of most galaxies seems to indicate that these are bound groups of objects. So, they are always accelerating.
@JeanTate,
My question was probably poorly worded and it is exceedingly simple to boot. I realize the matter in a rotating galaxy is always accelerating. If it wasn't, it would fly off on a straight line into space, right? What I was curious about, was how those initial velocities were given in the simulations. Clearly the starting point in a model is not perfect stillness right? So are average velocities assigned to average masses? And then the simulation is set loose to perform its calculations? And in terms of non-linearity, would the initial velocity given to the masses in a model also potentially lead to divergent outcomes (as compared to the average)? I am mostly just curious, as it seems an unbelievable task to try and catalog all of the stellar masses in a galaxy, as well as their initial positions and initial velocities.
Thank you,
Michael
@Michael: thanks for the clarifications. It depends on what simulations you are referring to. For example, as far as I know, some such for the solar system begin with today and "run the clock backwards"; in these, "initial" velocities are non-zero. I think - but am not sure - that other solar system simulations start with some sort of accretion disk, made of "dust and gas", one that is in rotational equilibrium (so initial "velocities" are non-zero).
For cosmological simulations, some begin with an inhomogeneous distribution of mass that is consistent with observations of the CMB; I do not know if non-zero velocities are part of the initial conditions.
As to whether changes in initial conditions - such as velocity distributions - can affect results (and to what extent): I think this has been well studied, at least for some (e.g. solar system), but cannot point to anything specific (at least, not right now).
Is Jean Tate an alter ego of Sabine?
Without getting very concrete, I have always thought that dark matter and dark energy signalled that there was something missing in the equations we are using. But in view of "The average of a function of a variable is not the same as the function of the average of the variable", which I have never thought of in relation to nonlinear theories like GR, this conviction of mine may be simply wrong. Thanks, Sabine, for thus correcting my thoughts.
ReplyDeleteSabine,
I also really liked how over the last 5 posts you reflected on non-linearity.
Starting at one scale with a measurement device and QM particles in the superdeterminism post here:
"...non-linear just because if the detector depends on a superposition of prepared states that’s not the same as superposing two measurements."
And now on a cosmological scale spacetime metric and planetary matter distribution:
"But since the gravitational part is non-linear, this will not give you the same equations that we use for the solar system".
Non-linearity and non-commutativity go hand in hand.
I don't think the equations are wrong, I think the understanding is wrong. Einstein described a gravitational field as space that's "neither homogeneous nor isotropic". Space has its vacuum energy, so if a region of space has an energy density greater than the surrounding space, we'll see a gravitational field. And where is this place? Where space has not expanded because it's gravitationally bound. In a galaxy.
I think (hopefully someday soon) that we will look back on this era of dark matter and dark energy much as we now look back at the attempts to save the Ptolemaic system with the invention of various epicycles. It seems the field of cosmology is ripe for a Copernicus-like figure to arrive and re-establish a more elegant understanding of the universe - perhaps by simply demonstrating that the non-linear nature of relativistic solutions adequately explains observed behaviour.
ReplyDeleteAs an amateur of cosmology, I have always wonder what could be the non linear effects in GR.
In my field, plasma physics, we have Maxwell coupled with Vlasov. These are simple enough to write down, but they give you the most unexpected non-linear effects. So I can imagine what the GR equations could hide.
The problems of Dark Matter and Dark Energy have now been known for quite a long time and there is no sign of a solution at all. This should by now be an indication that we need a major change in our understanding of this field of physics, in particular regarding the theory of relativity. There are alternatives from well-known people, for instance from the Nobel Prize laureate Hendrik Lorentz.
The former director of the Albert Einstein Institute in Potsdam, Germany, Jürgen Ehlers, once said that gravity is the least understood force which we have in present physics. This is in some contrast to the constant praise of Einstein's General Relativity.
But as long as it is treated as a taboo to discuss real alternatives, how can we hope to find the true solution? Or to ask this in a different way: what chance would Copernicus have had under our present conditions?
Alternatives and modifications to GR fill the scientific journals every week. There is absolutely no taboo. It's just that nobody has so far measured an effect that is predicted by an alternative theory but not by GR.
MOND does remove the need for DM, but it is not an alternative to GR as it is not relativistic. Also, it does not always provide results in accordance with observation.
That's why we are reluctant to change a theory which, so far, has worked well.
It is of course exhausting work to check new approaches or theories again and again. But without the readiness to do this work, I doubt there will be any solutions.
Regarding relativity the situation is, however, quite promising. Since the beginning of relativity there was the approach of Lorentz (a Nobel Prize laureate) which was – regarding SRT at that time – not only equivalent but in many respects better than Einstein's: the same precision in all results, easier to understand, and more closely related to physics. His disadvantage was that he had to make assumptions which seemed speculative at his time (about particles, molecules etc.) but which are in the meantime fully accepted. I have never heard of any well-founded argument against his approach. But it is tabooed to an extent that even world-renowned professors failed to find colleagues to talk about it.
This is what I meant, and this is in my understanding the main reason for the lack of progress.
@antooneo: Have you yourself done any work to develop Lorentz' ideas? Or searched the literature for experiments or observations - which have already been done - that might test those ideas?
If so, please cite your work (e.g. something in arXiv).
If not, why not?
A very good book on cosmology is Ryan and Shepley's Homogeneous Relativistic Cosmologies. It is older, dated to the 1970s, but it covers a lot of the basic material. In particular the work on Bianchi-type cosmologies is relevant for distributions of matter or energy.
Questions about viscosity abound. Viscosity with material in spacetime generally plays a role when material is dense. This would occur with accretion disks and maybe material in a neutron star. With galaxies you have in interstellar space around 10 hydrogen atoms per cubic meter, so this is treated as a collisionless gas without bulk properties such as viscosity. Take a look at Ryan and Shepley's Homogeneous Relativistic Cosmologies for the stress-energy T^{ab} = (ρ + p)U^aU^b + pg^{ab}, which for p = 0 reduces to the form that applies to “dust” or non-interacting particles.
Verlinde's interesting idea of some large scale gravitation that is fluid-like may introduce viscosity and bulk properties of fluids. Hossenfelder's rather splendid idea of there being a phase between a form of matter or particle and a “field effect” with spacetime is another possibility. These are possible ways that quantum mechanics or quantum fields are related to spacetime, as much as entanglements = spacetime or supersymmetry as an intertwiner between Lorentz symmetry and quantum statistics. Low energy supersymmetry with the MSSM and the neutralino as a dark matter particle appears to be in doubt, and so the door is open to a great deal of possibilities.
The complex non-linearity of GR always niggles me when compared to the linear differential equations of QM. Heresy as it may be, I believe GR was derived using macroscopic premises and so may be inaccurate. For instance, if gravity is an ensemble effect then the concept of 'gravitational mass' may not be quite right, and so applications to massive distributions of mass (e.g. galaxies) may be showing those innate faults.
ReplyDeleteI think we need to understand the equivalence of acceleration and gravity which Einstein illustrated in the form of an accelerating space ship with a man walking within it. This was a thought experiment by Einstein. I suspect that change in velocity due to acceleration is "recorded" in the particle or body. The effects of acceleration have to be deeply studied, and may hold a key.
ReplyDeleteWhat does this have to do with averaging non-linearities (in GR)?
I am a little confused by this post. For one thing, both dark matter and dark energy come into play at the lowest curvatures and largest distances. That is not where you would expect compositional non-linearities to crop up. If "gravity" adds up non-linearly, I would expect that to show up close to the composites, not so far away that gravity itself is very weak. I know non-linearities can affect the high and low end, but this is still unexpected.
But a more direct confusion comes from the fact that you yourself have shown that there is a very good fit to the "dark matter" rotation curves using Verlinde's ideas, without using any free parameters.
https://backreaction.blogspot.com/2018/03/modified-gravity-and-radial.html
With a theory that can describe measurements in great detail, without any free parameters, I do not see why we should instead look at possible ignored non-linearities in GR?
Because those non-linearities are a loophole; unless and until they are investigated, we cannot be certain that they can, in fact, be ignored. Think of work on these as "crossing t's and dotting i's" (I don't know if there's a comparable idiom in German), if nothing else.
The true source/true nature of gravitation is unknown as yet. Getting closer to what gravitation is really like is not a nightmare but scientific progress.
ReplyDeleteVery thoughtful analysis.
Have you considered that the Universe is ultimately unknowable to the human mind? All our science is merely kitchen recipes to manipulate matter to our advantage. There will always be phenomena that elude our models.
This is such a compelling article. I would imagine that some statement could be made about whether the qualitative effects of nonlinearity in GR match the measurements we ascribe to dark matter/energy? For example, could a statement be made about whether making some sort of effort (even one just predicting trends) to account for nonlinearities would predict any change in the speed of galaxies versus clusters of galaxies?
The perihelion rotation of Mercury was described before Einstein by Paul Gerber with exactly the same formula that Einstein later used. However, Gerber could derive this formula without "relativity".
Maybe the whole rest of the world could be described very well, even better, without Einstein's GR?
https://en.wikipedia.org/wiki/Paul_Gerber
It is not surprising that first-order effects can be arrived at in several ways. It's not the formula itself that is convincing, it is the theoretical background that produced it. Another example of the right formula derived by the wrong method was Sommerfeld's treatment of the fine structure of hydrogen spectra. A third is the Lorentz idea of the physically ether-deformable electron, which brought forth the Lorentz contraction before relativity existed.
-drl
@weristdas: "Maybe the whole rest of the world could be described very well, even better, without Einstein's GR?"
Indeed, maybe it could.
But ideas are cheap; even the White Queen could believe six impossible things before breakfast (Through the Looking-Glass).
The hard part is converting those impossible ideas into theories, models, and hypotheses, and then testing them to see to what extent they are consistent with all relevant (objective, independently verified) observational data and experimental results. Gerber's ideas clearly fail these tests.
Maybe you'd like to try to develop a serious alternative to GR?
JeanTate, 3:28 PM, October 17, 2019
Any arguments?
Who has tested Gerber's derivation from the finite propagation of gravitational forces? Where are the papers, links?
And no, I do not have to and don't want to develop an alternative to GR. I'm only showing its weakness. That's enough for me.
@weristdas: The Wikipedia page you provided a link to earlier contains more than enough to show the failure of Gerber's ideas. If you'd like to understand what's already there in more detail, may I suggest posting to a different forum? A couple of suggestions: Physics Forums, and International Skeptics Forum (Science, Mathematics, Medicine, and Technology section). Both are far more suitable than this Blogger one, if only because they allow LaTeX.
JeanTate, 10:09 AM, October 19, 2019
"The Wikipedia page you provided a link to earlier contains more than enough to show the failure of Gerber's ideas."
No. There is nothing more than allegations - and proofs which show that Gerber would have been wrong - which are based on the alleged truth of GR: a circulus vitiosus!
Actually it is impossible to do justice to people who were in dissent from Einstein. The "Zeitgeist" makes it impossible.
@weristdas: sorry, I have zero interest in pursuing this sort of conspiracy theory any further, here. However, if you do open a new discussion thread in Physics Forums (or similar), and start with something concrete (such as an equation), I would be interested in participating.
Hi Bee,
What's your opinion on Alexandre Deur's theories on Dark Matter? https://arxiv.org/pdf/1909.00095.pdf They seem pertinent to this post.
Very interesting; this is a second, independent confirmation of Cooperstock's original idea. Deur does not reference him; that is a situation not unprecedented in science, when a key new idea emerges independently among researchers. I am convinced that the rotation curve anomalies originate in the non-linearity (self-interaction, in Deur's words) of GR.
-drl
Curious that Deur et al. do not mention gravitational lensing, nor the CMB data which is consistent with Dark Matter being a form of (cold) mass (except, perhaps, very obliquely). They do mention "galaxy cluster dynamics" and "the correlation between the missing mass of elliptical galaxies and their intrinsic ellipticities", in the Introduction, but do not refer to them otherwise.
IMO the lensing evidence is far more tenuous, and indeed if GR is being applied in too glib a manner, all of that is in a cocked hat and needs to be re-evaluated.
-drl
Because they make computations that make GR similar to QCD. The mass generated is not a mere artifact of coordinates, but is real.
@drl: well, IMO the lensing evidence is arguably the best.
DeleteBut why are we even discussing this? I mean, I have said that I think lensing, in GR, may be among the easiest cases to work on, with respect to understanding averaging non-linearities; how about we discuss that?
To recap: in gravitational lensing, there are both position and timing phenomena. Lensing by a ~spherically symmetric object (i.e. a star) has been observed. Surely it would be relatively easy and straight-forward to investigate averaging non-linearities in models with nested spheres, say, or a system of two such objects (e.g. a binary star system)?
Fascinating! Questions that I've been asked are:
Is the measured expansion of the universe related to Entropy?
What effect does the whole load spectrum of photons traveling through free space have on how we measure it?
I usually answer: "Not as entropy was taught to me! Not likely any effect, but how would you measure it anyway?"
In truth you have pointed out a problem related to General Relativity which is not addressed in popular literature. Thank you!
The entropy of a system of particles, say a gas etc, is covariant constant with GR dynamics. However, for an observer witnessing the universe at large things are more subtle. This is a because this observer is not witnessing a system that is under uniform parallel translation or geodesic motion. With the expansion of the universe particle cross the cosmic event horizon. That observer can only ever witness information from a galaxy in the time period before this crossing. This restricts the amount of information that can be observed. That is entropy increase.
DeleteThank you for your analysis! Your succinct summery has helped me rethink the problem.
Sabine,
You wrote,
"Einstein’s theory of general relativity is an extremely well-confirmed theory. Countless experiments have shown that its predictions for our solar system agree with observation to utmost accuracy."
However I wonder what that means exactly. I mean, isn't most of this agreement due to the precision of the Newtonian approximation within the solar system, or indeed the Newtonian approximation plus some SR effects? How much of this agreement is contributed by GR as such?
I mean to really test GR wouldn't you need a 'laboratory' containing test masses in the form of black holes?
Tests extend considerably beyond the solar system. The orbits of neutron stars are known to geodetic precision and their change due to gravitational wave emission has been measured to match the GR prediction. This was the Hulst-Taylor result in the 1970s. The precession of a gyroscope in a rotating gravitational field has been measured by Gravity Probe B. Then the latest has been the detection of gravitational radiation. General relativity is a well confirmed classical field theory.
@David Bailey: to add to what Lawrence Crowell wrote (typo, it's "Hulse-Taylor"). There's also gravitational lensing, by stars. One particularly interesting case is a Hubble image, taken years after the micro-lensing event, of the foreground star (the lens) as a separate object from the background one (the object being lensed). Parameters derived from analysis of the lensing event have values consistent with those derived from later observations of the two stars as separate objects.
Thanks for the typo fix, and I might as well just pre-thank you for the future. I write this as Hulst-Taylor almost every time --- it is some sort of habit. I can almost guarantee I will make this mistake again, and again and ... .
The lensing of light measured by Eddington's expedition in 1919 is the cornerstone of gravitational lensing. The very precise inferred mass of the sun makes this firm. There is also spacecraft data of radar time vs ruler measurements of geodetic distance.
If you read my previous comment I made to you it is possible to see that if gravity is the centripetal force holding the universe together then dark energy is the centrifugal force pushing it apart. As vast as the universe is it could have negligible, undetectable spin to create this force. As for dark matter, if you add both electromagnetism and gravity together as a whole, realize that some matter can radiate energy at speeds greater than the speed of light but still have a gravitational effect. This is because of the two temporal dimensions.
Have you written up your ideas in the form of a paper, and posted it to arXiv's gr-qc section? If so, a cite please.
Delete
And Dr. Massimo Villata suggests a hypothesis (CERN-Atlas H-antiH, 2017 experiment)
of how the energy of universal expansion is continually born ...
http://iopscience.iop.org/article/10.1209/0295-5075/94/20001/fulltext/
Can you please summarize how Dr. Massimo Villata addresses the issue of averaging non-linear equations?
DeleteIf Sabine were really serious about what might be found by checking for mathematical or conceptual errors she would take a look at the very minor modification that Huseyin Yilmaz made to general relativity. It eliminates dark energy, singularities and event horizons with only a change of attitude about what might constitute a source of spacetime curvature. A good place to start, which would require only ten minutes would be here: Prespacetime Journal June 2019 Volume 10 Issue 5 pp. 621-626
Robertson, S. L., Comments on a Concept from General Relativity
and for dark energy see: arXiv:1507.07809
Can you please summarize how Huseyin Yilmaz addresses the issue of averaging non-linear equations in this "very minor modification"?
The Yilmaz theory is somewhat like the Brans-Dicke theory of scalar-tensor gravity, only instead of adding new fields, you add phenomenological terms to the stress tensor on the right side. Nothing wrong with this, but it is ad-hoc and ill-defined. Various proposed additions were disproven outside of the contexts for which they were invoked. Can't remember details. Other examples are the Born-Infeld and Mie modifications to electrodynamics. These attempts are interesting but always too inchoate to become well-defined alternatives to GR.
-drl
Quantum Inertia can be used to explain galaxy rotation and several other phenomena such as wide binaries. I know it's controversial, but it is in its early stages of being developed. I would be interested if anyone knows of a "fatal flaw" in the theory.
It's not "controversial", it's nonsense. But maybe there are some insights from that nonsense which might be helpful to the topic of this blogpost (you know, averaging non-linear equations, specifically in GR)? Can you point to any such?
This makes me wonder if this idea should be considered while studying further the differences between galaxies with little dark matter and those with a lot. What large-scale properties do the nearly-dark-matter-free galaxies have in common, that are different from other galaxies? Maybe look at properties like virial mass versus net angular momentum, in case some kind of frame-dragging effect could come out of GR on galactic scales.
ReplyDeleteInteresting idea. How do you think work done on averaging non-linear equations (in GR) could address this?
DeleteTopher,
DeleteInteresting, but frame dragging is not a nonlinear effect. You get it from linear approximations of GR, such as gravitoelectromagnetism
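For a sense of how small linear frame-dragging effects are on accessible scales, here is a back-of-the-envelope sketch of the Lense-Thirring nodal precession (my own illustrative values for Earth's angular momentum and a LAGEOS-like orbit, nothing taken from this thread):

import math

# Lense-Thirring nodal precession (a linear, gravitoelectromagnetic effect):
# Omega_LT = 2*G*J / (c^2 * a^3 * (1 - e^2)^(3/2))
G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
J_earth = 5.86e33  # Earth's spin angular momentum, kg m^2/s (approximate)
a = 1.227e7        # semi-major axis of a LAGEOS-like orbit, m
e = 0.0045         # orbital eccentricity (nearly circular)

omega = 2 * G * J_earth / (c**2 * a**3 * (1 - e**2)**1.5)   # rad/s
mas_per_year = omega * 3.156e7 * 206265 * 1000              # milliarcseconds per year
print(f"nodal precession ~ {mas_per_year:.0f} mas/yr")       # of order 30 mas/yr

The effect stays minuscule when scaled up to galactic masses and kiloparsec distances, which is part of why frame dragging is generally considered far too small to account for rotation-curve discrepancies.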
I don't know if they take into account that the further away you are, the faster you are moving and the greater your mass is. Maybe that all factors into the solution.
By "they", you mean the scientists who study this? If so, then yes, they do.
But what does this have to do with averaging non-linear equations?
If nature is not written in "nice" equations - like some LaTeX/Math expressions staring at you on a page - then to continue to search for nice equations for everything is a waste of time, time better spent on other systems of expression.
Well, so far the approach of looking for equations in nature has produced about 10^9 times the results of the alternative, so place your bets accordingly.
Couldn't dark energy/matter just be thermal noise of the fields of the interactions?
We observe the 2.7 K thermal noise of the EM interaction, but there are also other interactions: weak, strong, gravity. Shouldn't the fields corresponding to these other interactions also contain thermal noise, e.g. at 2.7 K like the EM background radiation?
What energy density could be explained this way?
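For a rough sense of the numbers behind that question, here is a small sketch (my own back-of-the-envelope values, assuming a 2.7 K blackbody spectrum and a standard critical density; nothing here comes from the commenter):

# Energy density of blackbody radiation: u = a_rad * T^4, with a_rad = 4*sigma/c
sigma = 5.670e-8              # Stefan-Boltzmann constant, W m^-2 K^-4
c = 2.998e8                   # m/s
a_rad = 4 * sigma / c         # radiation constant, J m^-3 K^-4

T = 2.725                     # CMB temperature, K
u_cmb = a_rad * T**4          # ~4e-14 J/m^3

# Inferred dark energy density: roughly 0.7 of the critical density for H0 ~ 70 km/s/Mpc
G = 6.674e-11
H0 = 70e3 / 3.086e22          # s^-1
rho_crit = 3 * H0**2 / (8 * 3.14159 * G)   # ~9e-27 kg/m^3
u_de = 0.7 * rho_crit * c**2               # ~5e-10 J/m^3

print(f"2.7 K bath: {u_cmb:.1e} J/m^3, dark energy: {u_de:.1e} J/m^3, ratio ~ {u_de/u_cmb:.0f}")

So even several 2.7 K thermal backgrounds, one per interaction, would fall short of the inferred dark energy density by roughly four orders of magnitude.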
Interesting ... but what does it have to do with averaging non-linear equations?
For Einstein and experiments, look up the Einstein-de Haas effect.
drl, 1:36 PM, October 17, 2019
"It is not surprising that first-order effects can be arrived at in several ways."
I don't know what "first-order effects" means in this context, with regard to deriving the right formulas for the wrong reasons, as you claimed.
It is quite certain that Einstein knew Gerber's formula. Einstein's telling reaction to Ernst Gehrcke's critique was striking.
Einstein is more the founder of a religion than a scientist. Einstein never conducted an experiment! We are talking about physics, the experimental science per se! This fact should be alarming, yet seemingly nobody bothers about it. Einstein is sacrosanct, a superstar, untouchable. This observable fact alone should make anybody who is able to think for themselves skeptical.
The perihelion precession of the orbit of Mercury is a first-order add-on generated by GR to the Newtonian orbit. For the other planets, it is too small to measure. It works for Mercury because it's close to the Sun and has a highly elliptical orbit. Other first-order effects, some of which are correctly predicted by alternatives to GR, are the apparent displacement of stars near the solar limb, and the redshift of the solar spectrum. Only GR gets all of them right. And it reduces to Newton in the right limit. Case closed, GR is correct in this regime.
-drl
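For the record, the first-order GR formula drl refers to is easy to check numerically; this little sketch (standard textbook values for Mercury, nothing specific to this thread) reproduces the famous ~43 arcseconds per century:

import math

# First-order GR perihelion advance per orbit:
# delta_phi = 6*pi*G*M_sun / (c^2 * a * (1 - e^2))
GM_sun = 1.327e20     # m^3/s^2
c = 2.998e8           # m/s
a = 5.791e10          # Mercury's semi-major axis, m
e = 0.2056            # Mercury's orbital eccentricity
period_days = 87.969

dphi = 6 * math.pi * GM_sun / (c**2 * a * (1 - e**2))    # radians per orbit
orbits_per_century = 36525 / period_days
arcsec_per_century = dphi * orbits_per_century * (180 / math.pi) * 3600
print(f"{arcsec_per_century:.1f} arcsec per century")     # ~43

The same expression applied to the other planets gives roughly ten arcseconds per century or less, which is why the effect was historically only accessible for Mercury.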
@drl: "For the other planets, it [perihelion precession] is too small to measure." Um, are you sure?
It was too small to measure a century ago, but certainly not today. If I remember correctly, this effect has been measured in the orbits of Venus (despite it being almost circular), Eros (the asteroid), Earth, and Mars. Thanks in no small part to transponders on space probes ...
"Einstein has never ever conducted an experiment!"
Factually incorrect. Einstein did in fact collaborate (with Lorentz' nephew?) on an experiment ... which is not often talked about, because it went embarrassingly wrong.
The experiment is now chiefly remembered as a classic example of why theorists //should not// attempt to carry out experimental tests of their own predictions.
Right :) It is fortunate that Mercury had an eccentric orbit. The perihelion precession of anything but Mercury would have been impossible to measure. The discrepancy drove people into crazy ideas like a shadow planet, Vulcan, near the Sun and perturbing the orbit of Mercury. Had the precession been within expected errors, no one would have worried much about making a Lorentz-invariant Newtonian gravitation. It would have taken the form eventually assumed by Nordström's scalar theory of gravitation, which gives the same result for the gravitational redshift as GR. That would have been regarded as conclusive proof that it was right. No one would have suspected the displacement of stars near the limb, which the scalar theory gives as "none", and so not looked for it, and the tensor nature of GR would have remained undiscovered until spacecraft began to accumulate anomalies.
-drl
drl, 10:16 PM, October 19, 2019
I don't see any necessity for the Mercury orbit to be explained with the same formulas as the redshift of light. The redshift of light, which to first order correlates with the masses of the stars, may be a consequence of some kind of "tired light" (e.g. a kind of "Stokes shift" due to absorption and re-emission processes in the surroundings of the stars, where the medium, the "atmosphere" of the star, is denser the heavier the star is).
There is no necessity to describe gravitationally influenced motion and the behavior of light with the same formulas. You would only like that. But nature does not have to like what you like.
@weristdas: re "The redshift of light ... may be a consequence of some kind of "tired light""
Indeed it might.
But ideas are cheap; even the White Queen could believe six impossible things before breakfast (Through the Looking-Glass).
The hard part is converting those impossible ideas into theories, models, and hypotheses, and then testing them to see to what extent they are consistent with all relevant (objective, independently verified) observational data and experimental results.
Maybe you'd like to try to develop this idea of yours, to the point of it being potentially testable? Otherwise, what's the point?
So you have alternative a), which predicts both perihelion precession and redshift with a single equation, and applies to situations that weren't known when the equation was written, and alternative b), which has two different explanations, both of which require things that have never been observed and have made no correct predictions beyond the original data that fed into them.
Which explanation should we adopt?
Everyone who is complaining now that their comments do not appear: This blog is not a pin board for self-made dark matter theories. If you have something to say about averaging non-linear equations, you are welcome. If you want to discuss anything else, please find some other place. Thank you,
Sabine
Hi Sabine,
I think you know it yourself. The wave function psi(x_vec, t) is highly non-linear. The Born rule is responsible for the averaging process ;-)
Does this, as a general point, also cover references to known persons who are presently not in the focus of the mainstream? Like referring to Hendrik Lorentz rather than Albert Einstein regarding relativity?
*rolls eyes*
Can you please put this into words?
Meanwhile I find it difficult to see where your limit of acceptance lies, in the area between being at the boundary of the mainstream and addressing really new approaches and ideas. Some help here would be appreciated.
I guess I don't understand WHY physicists think averaging is valid. The average of squared values does not equal the square of their average; the two can be arbitrarily far apart. A fourth grader can understand that. So why would anyone think the latter is an okay approximation for the former?
Nevertheless, I have read that Newton proved his gravitational laws work if the Sun and planets are treated as point masses instead of spheres.
If so, I should think the obvious conclusion is that if mass is NOT distributed in a perfectly uniform sphere, then the generalization is being applied outside the bounds of its proof, and may no longer hold. At least not without additional mathematical proof. But Sabine seems to be saying nothing has ever been proved here.
So why didn't peers dismiss this out of hand? That's a serious question: what justification did they have?
A lot of physics is about "assume a spherical cow." This is particularly so when you model systems. Physics at its core foundations is less of this variety. When you model systems, though, you have to make these simplifying assumptions or else things just get too complex to manage.
Is a model in the form of a deep neural network a simple or a complex thing?
e.g.
CosmoFlow: Using Deep Learning to Learn the Universe at Scale
arXiv:1808.04728 [astro-ph.CO]
Very basic question: Can somebody provide a link to a web source on how to measure the rotational speed of stars in a remote galaxy? I find it very fascinating that this is possible, and I can't find anything because all sources focus straight on the dark matter problem.
I think you may mean the estimated speed of stars, in galaxies other than our own, with respect to the galaxy's nucleus (not the rotation speed of the stars themselves, around their axes). If so, I'm not aware of something that meets your exact requirements. Perhaps this, from JILA: https://jila.colorado.edu/~pja/astr3830/lecture17.pdf
To take one method, long-slit spectroscopy: place the slit of the spectrograph across the long axis of a spiral galaxy. The wavelengths of the emission lines, especially H-alpha and [OIII], relative to the nucleus, interpreted as line-of-sight Doppler shifts, give the rotation curve for the galaxy (when corrected for the inclination).
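A minimal sketch of the arithmetic behind that method (the wavelength shift and inclination below are made up for illustration, not taken from any particular galaxy):

import math

# Line-of-sight velocity from the Doppler shift of an emission line,
# deprojected with the galaxy's inclination to get the in-plane rotation speed.
c = 2.998e5               # speed of light, km/s
lambda_rest = 656.28      # H-alpha rest wavelength, nm
lambda_obs = 656.72       # hypothetical observed wavelength at one slit position, nm
inclination_deg = 60.0    # hypothetical inclination (90 deg = edge-on)

v_los = c * (lambda_obs - lambda_rest) / lambda_rest       # km/s along the line of sight
v_rot = v_los / math.sin(math.radians(inclination_deg))    # deprojected rotation speed
print(f"v_los = {v_los:.0f} km/s, v_rot = {v_rot:.0f} km/s")

Repeating this at each position along the slit, relative to the nucleus, traces out v(r), the rotation curve.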
What I was always wondering is whether there is some intermediate theory of gravity between Newton and Einstein, something that is nonlinear but more tractable than the Einstein equations. A good analogy is found in hydrodynamics, where for shallow water Navier-Stokes is replaced by Korteweg-de Vries, which is much better understood. I would guess that the various length scales needed for expansions of the Einstein equations would be constructible from curvature invariants or mass measures (ADM, Bartnik, etc.).
Good question, and the answer is yes. Cooperstock retained the absolute minimal amount of non-linearity in his study of the problem of the galactic disk. Example paper below; look for more on arxiv.org.
https://arxiv.org/abs/astro-ph/0507619
-drl
Julius: you could try an acoustic metric using NM equations. That'll give you nonlinearity, plus classical Hawking radiation. It's a significant step forward from Newton, but it's not quite the same as Einstein, and is a different category of theory to or most of the textbook examples that are essentially GR1916 with extra bits tacked on.
DeleteIgnorant question.
ReplyDeleteThis is presented as though modelled systems lie somewhere in a nebulous region.
But don't physicists use simplifications to establish bounds??
Simple models would establish upper limits to energy or mass densities? This would be knowledge.
Maybe you should dig into qubit language/mechanics ... instead of persisting with algebraic continuum linear and non-linear functions represented in post-Cartesian vector/coordinate systems ... and their consequential topological abstractions in "imaginary space" ...
How the fuck do you pretend to build accurate models by using post-factual hieroglyphs pointing to anthropic myths of balance and equality???
By that, you're stuck seeking mirrored symmetries ... in things that do not require symmetries to exist.
Sabine,
Galaxies rotate faster than expected. Galaxies in clusters move faster than they should.
Those are assertions from a mathematicism framework. The rotation curves of galaxies and the dynamic behavior of galactic clusters are what they are. It is the job of theoretical physics to concoct models that conform to that observed behavior.
Unfortunately, modern theoreticians content themselves, for the most part, with claiming that their failed predictions are not a consequence of their inadequate model, but rather the result of a "hidden variable" called dark matter that has proven impervious to direct detection. As usual, you see the problem clearly:
I don’t mean that general relativity needs to be modified. I mean that we incorrectly use the equations of general relativity to begin with.
In the case of galactic rotation curves, GR isn't even used. It was the inappropriate use of the Keplerian method and subsequently, its Newtonian variant, the shell theorem, that produces erroneous expectation values for galactic rotation curves.
A typical disk galaxy is a morphologically complex system that bears scant resemblance to the structurally simple solar system. Deploying the mathematical conveniences of Kepler and Newton that are appropriate to the solar system structure, to galactic structures, cannot be justified on straightforward physical considerations. There is no missing matter in the cosmos, there is only an abject analytical failure of model construction.
This was a failure of qualitative analysis. It does not, in any sense, represent a failure of GR, since GR was not used to determine the expected rotation curves. @drl has already mentioned the work of Fred Cooperstock; here is a link to a relevant paper: https://arxiv.org/abs/astro-ph/0507619
In his famous 1937 paper On The Masses Of Nebulae And Clusters Of Nebulae Zwicky paid considerable attention to the effect of internal viscosity on the rotation curves. Unfortunately that physical insight was apparently set aside for specious mathematical reasons. Galactic internal viscosity would simply be a manifestation of disk self-gravity which GR can handle quite nicely, as Cooperstock's paper demonstrates.
By their very nature, Keplerian-Newtonian analyses are blind to the effects of disk self-gravity. Dark matter is just an error-perpetuating modification to a failed model.
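For readers following along, the expectation value being criticized here is simple to state: treat the mass interior to radius r as if it were concentrated at the center and you get v(r) = sqrt(G*M(<r)/r), which keeps falling once you are outside most of the visible mass. A minimal sketch with an invented enclosed-mass profile (illustrative numbers only, not a fit to any real galaxy):

import math

G = 6.674e-11
M_sun = 1.989e30
kpc = 3.086e19              # metres

def M_enclosed(r_kpc, M_total=6e10 * M_sun, r_scale=3.0):
    # Hypothetical baryonic mass profile: most of the mass inside ~10 kpc
    return M_total * (1 - math.exp(-r_kpc / r_scale))

for r_kpc in (5, 10, 20, 40):
    v = math.sqrt(G * M_enclosed(r_kpc) / (r_kpc * kpc)) / 1000   # km/s
    print(f"r = {r_kpc:2d} kpc   point-mass-style v = {v:3.0f} km/s")

# The predicted curve keeps dropping roughly as 1/sqrt(r) beyond the visible disk,
# whereas measured rotation curves stay roughly flat out to tens of kpc.

Whether the cure for that mismatch is unseen mass, modified dynamics, or the disk self-gravity effects discussed in this thread is exactly what is in dispute.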
Yilmaz never attempted to average nonlinear equations and would have considered it to be the wrong approach. There is nothing ad hoc about considering a gravitational field to have a real energy density that can contribute as a source term in the Einstein field equations. This results in an exponential metric for a central mass source, with a Newtonian potential as the argument of the exponential function. It passes all of the known solar system tests of general relativity. It has no event horizon. A test particle in this metric has an innermost marginally stable circular orbit that agrees with astrophysical observations as well as or better than the Schwarzschild metric. The shadow of an object compact enough to live inside its photon sphere is only four percent larger than in the Schwarzschild metric. In short, there is no reason to keep using a metric for astrophysics that has singularities. It is not needed for observational reasons.
By separating gravity from pure geometry and allowing matter-free space to have a field energy density, there is no inherent reason for an inability to quantize a gravitational field. In my opinion, it will be impossible to cure the problems with general relativity as long as everyone believes that empty space free of electromagnetic or chromodynamic fields or cosmological constants requires the vanishing of the Einstein tensor. Relativity theorists need to wrap their minds around that.
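As a numerical aside on that claim (a sketch of my own, using the commonly quoted exponential form with a Newtonian potential in the exponent, not a reproduction of Yilmaz's own derivation): in the weak field the exponential and Schwarzschild time-time metric components agree to first order in GM/rc^2, which is why solar-system tests cannot separate them; they only part company near r ~ 2GM/c^2, where the Schwarzschild form has its horizon and the exponential form does not:

import math

def g_tt_exponential(r, rs):
    # Exponential form: -exp(-rs/r), with rs = 2GM/c^2
    return -math.exp(-rs / r)

def g_tt_schwarzschild(r, rs):
    return -(1 - rs / r)

rs = 1.0   # work in units of the Schwarzschild radius
for r in (1e6, 1e3, 10.0, 2.0, 1.0):
    e, s = g_tt_exponential(r, rs), g_tt_schwarzschild(r, rs)
    print(f"r = {r:9.1f} rs   exponential: {e:+.6f}   Schwarzschild: {s:+.6f}")

# At large r the two differ only at order (rs/r)^2; at r = rs the Schwarzschild
# component vanishes (horizon) while the exponential one stays finite.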
The cusp problem with dark matter might be a numerical artifact. https://arxiv.org/abs/1808.03088v2 More informal article on this here:
https://www.forbes.com/sites/startswithabang/2019/10/18/dark-matters-biggest-problem-might-simply-be-a-numerical-error/#68029e348979
The work of fully calculating all the nonlinearities of a big cluster of nonuniform masses has to start with a first step. One could probably get a look at the direction in which a better solution of the nonlinear equations leads by taking such a first step, as is done in infinitesimal calculus. Probably one can learn the direction in which these nonlinearities lead. This direction could point toward an explanation of dark matter - or not.
f(x)=x^2 is an easy nonlinear function. For x>0, the average of f will always be greater than the function of the average.
How is it for dark matter/ galaxies/ GR? => Does the calculation of the gravitational force/field of two masses (instead of the calculation of only one central mass) lead to a higher gravitational field around the two masses, Sabine?
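A tiny numerical illustration of that point, with made-up sample values (this is the averaging problem of the post in miniature):

# Average of a nonlinear function vs. the function of the average
xs = [1.0, 2.0, 10.0]                        # arbitrary "local" values, e.g. densities
f = lambda x: x**2                            # a simple nonlinear function

avg_of_f = sum(f(x) for x in xs) / len(xs)    # <f(x)>  -> 35.0
f_of_avg = f(sum(xs) / len(xs))               # f(<x>)  -> about 18.8
print(avg_of_f, f_of_avg)

Feeding the averaged source into the same nonlinear equation is exactly the step whose error size is under debate for the full Einstein equations; the hard part is estimating how large the difference actually is there.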
Addendum: dark matter is a phenomenon where the gravitational forces as calculated by GR are too weak.
For the dark energy phenomenon, the gravitational forces become weaker with time. Not really opposite effects, but somehow separate... Does this thought help?
Still thinking it could help to look at the directions of all these effects...
And, another question, could the curvature scalar fall below zero at the boundary of a weak gravitational field?
The scientific methodology and validity in a handful of scientific fields have deteriorated, especially in a field as complicated as theoretical physics.
The field of theoretical physics has lost touch with empiricism and with deriving phenomena directly from observation. Dark matter and dark energy are indirect phenomena that rely on a lot of other assumptions being correct. The big bang theory is a fairy tale in its advanced, long chain of assumptions.
There is no synergy. The whole concept of dark matter and dark energy is derived directly from the errors of a base model, and these phenomena are un-synergetic; that is, they do not point unequivocally in the same direction as the rest of the theory, but work against the grain of existing observation in order to keep a complex picture valid.
You do not need any dark matter here on Earth. Why, then, should you have dark matter as a unique "fix" for the stuff out in space? It is just like the Greeks and the theory of four elements. When they couldn't apply the element theory to the stars, they invented a fifth "heavenly element" in order to keep their explanatory model intact. Dark matter is such a fifth "heavenly element".
There will be a paradigm change in physics, but hopefully we will get there without a prolonged "dark age".
We need voices like yours, Sabine, to be more critical and to bring light back to a proper and solid method of science.
Yours truly, Olav Thorsen
Agreed ... "Stick to the equations" is the contemporary analogue of the medieval Scholastic dogma of "stick to the Scriptures" ...
The empirical synergy is given in the relationship between technology and building reverse-engineering models for improving and explaining the technology's mechanics ...
But some contemporary theorists seem to support that ancient desire of decoding "God's mind" with geometry and mathematics ... and mass media and social reinforcement promote the view of The Great Genius being praised by everybody because he or she has found The Mystical Equations that compress energy/matter into a rational abstraction that remains for everybody a sublime order and transcendental unity ...
The shadows of the obscurantist Middle Ages are already spreading in contemporary culture and mainstream academic circles ...
Science doesn't require more Einsteins and Wittens decoding "God's mind" in endless mathematical masturbations ... Science just requires more smart reverse engineers trying to model and understand the current state-of-the-art technologies ...
Sorry, do whatever you want ... I am not here to patronize religious people devoted to a Divine Cosmic Mathematician that supposedly provides structure and existence to nothingness ...
You are free to dream ...
But technology and reverse engineering keep modeling abstractions within an empirical discourse full of facts and objective evidence ...
It is fun to wake up and confront reality ... but to sleep and dream is also comfortable, in ignorance ... Ignorance is not always something unhealthy ...
... let the dreaming speculators make us laugh ...
https://youtu.be/FYJ1dbyDcrI
To compare the modern scientific community with the Scholasticism of the high to late Middle Ages is wrong. Scholasticism came about from Aquinas, who, on learning Aristotle, wove Aristotle into theological thinking. The concept of truth and reality in the Bible is not at all scientific. In the Jewish Tanakh, Proverbs ch. 3 has:
Trust in the LORD with all your heart and lean not on your own understanding
and in the more explicitly Christian texts, Paul (though more likely others who wrote in Paul's name) wrote in Hebrews 11:
Now faith is the substance of things hoped for, the evidence of things not seen.
The scriptural idea then is that Truth is something from the authority of God. We mere humans can't derive anything as true which contradicts “The Truth” as revealed by God.
Plato wrote Euthyphro which is an account of the arguments that Socrates had with Euthyphro. Euthyphro raised the question on whether what is true, real and moral is due to something the gods, or God, must adhere to or whether these things are so because God orders it so. If it is the first of these, then God is constrained to be obedient to something outside of Him, Her or IT (depending on your preference) which makes God not all powerful. On the other hand if God commands truth, reality and what is morally good, then these things are derived from a pure whim or will of a being and as such are not concrete. This is then a dilemma of sorts and points to the prospect the idea of God is absurd.
It is funny that the 5th century BCE Hellenic philosophers had figured this out, but the idea of divine authority mastering up Truth conquered the ancient world in the late Roman period 8 centuries later. This argument on Truth as a matter of divine authority keeps getting repeated to this very day. In the middle ages the ancient Judaic cosmology, which is really a few references to Sumerian cosmology, was “refurbished” into the Ptolemaic cosmology. This imploded entirely with Copernicus, Galileo and Kepler. Other ideas about the natural world derived theologically have been swept away. This is in spite of popular ideas of creationism and a growing bizarre trend towards a flat Earth idea. The ancient Sumerian cosmology was a flat Earth covered by an iron dome, which is a biblical reference used by the Israelis for their anti-missile system “Iron Dome.”
Modern science is not tied to theology. Of course there have been trends in the past where theoretical work built castles in the air, or Jonathan Swift's Laputa. However, these come crashing down because they are judged within a context of empiricism. We may see this with supersymmetric standard model work. Dark matter may have a similar departure. People uphold these ideas in academia not because of any theology, but because they do not want their resumes to reflect work that ultimately failed. It is not about anything divine, but more about careerism.
There will be a paradigm change in physics, but hopefully we will get there without a prolonged "dark age".
Amen to that, brother.
@Olav Thorsen, @First Name Surname: What, specifically, do you suggest scientists like Sabine, Peter Shor, etc actually do differently?
Concretely, when they get to their offices tomorrow (Monday) morning, what should they do differently from what they did last Monday (etc)?
In other words: what you wrote may (or may not) be interesting, and even, perhaps, insightful. But it is - to me at least - utterly impractical. And the central point of this blogpost is to ask for practical ideas about a specific problem (averaging non-linear equations).
The nightmare may be over. Recent observations show that those galaxies people thought didn't have dark matter seem to really not have it ... this makes it much harder for non-dark-matter theories to match observations.
I give it a 99.99% chance that the other side of this argument will not agree with van Dokkum that "this is definitive". Also, I encourage you to have a look at the actual data under debate here. That will help you put the claim in perspective.
I don't really understand the techniques used in their paper, but the big graph in Figure 4 looks pretty convincing.
This could go back and forth for a while.
I found the following on the Astronomy site interesting: "[DF4 and DF2] point to an alternative channel for building galaxies — and they even raise the question whether we understand what a galaxy is," van Dokkum says. Whether dark matter is a fairly standard particle or whether it is something entirely different, maybe some new field effect involving gravitation, DF4 and DF2 could just represent cases of galaxies which occur without this. The leading contender for dark matter was the neutralino, a mixture of supersymmetric partners such as the wino, the SUSY partner of the W^0. The weakly interacting theory of DM now looks to be somewhat in doubt.
Peter Shor,
...those galaxies people thought didn't have dark matter seem to really not have it ... this makes it much harder for non-dark-matter theories to match observations.
So, do I understand you correctly, that because a couple of atypical, low mass, low-surface-brightness galaxies exhibit rotation curves that are successfully predicted by the standard classical approach, which fails (requires dark matter) in all cases involving more typical galaxies, I'm supposed to conclude that the empirically-baseless dark matter hypothesis is somehow vindicated? Really? That's breathtakingly illogical.
Non-dark-matter models agree with observations, inasmuch as dark matter is not an observed phenomenon, merely a model-dependent inference. It is dark matter theories that do not match observations. The nightmare is only beginning for dark matter's believers.
bud rap, I think you are missing the point. If the equations or the approximations are not accurate, then there should be a discrepancy with all of the galaxies. The fact that there are some that follow the predictions suggests that the problem is not in the predictions, but in the input. The missing bit is the dark matter.
bud rap: If a couple of atypical, low mass, low-surface-brightness galaxies exhibit rotation curves smaller than other galaxies of the same size (something that, as Sabine says, may not be completely established yet), then non-dark-matter models don't agree with all observations.
Deletestor,
It is precisely that argument that I was referring to as breathtakingly illogical.
@Sabine, Peter Shor, Lawrence, bud rap, and stor: The study of low surface brightness (LSB) galaxies is in its infancy. It's not that many years ago that they were essentially unknown to astronomers, much less astrophysicists. Even today, only a few are known (compared with the numbers of known dwarf ellipticals or spheroids, say), and van Dokkum is a pioneer in terms of efforts to find them (in the optical at least).
Whatever theoretical explanations are proposed for 'dark matter' (the placeholder), there are plenty of 'unusual' astronomical objects to test ideas on, especially 'formation and evolution' ones. Such as globular clusters, HVCs (high velocity clouds, check out WP), UCDs (ultra-compact dwarfs, ditto), and so on.
Peter Shor,
...then non-dark-matter models don't agree with all observations.
That does not logically follow from the conditional at all. If the non-dark matter model is based on GR then it would reduce to the Newtonian in a low mass regime.
bud rap,
I don't get your point. If there is no dark matter, but it is the law of gravity that is different, then how can there be exceptions?
@ JeanTate et al. These dark galaxies, which if I recall were referred to 20 years or more ago as ghost galaxies, are rather new. I am not well enough spun up on galactic astrophysics to say for sure what this tells us about ΛCDM. There has been a lot of phenomenological work on how perturbations in the distribution of matter and radiation in the universe after the first few seconds resulted in luminous matter (that which interacts electromagnetically) dissipating energy in regions of higher matter density, and as such this led to the formation of galaxies in regions with more dark matter. Does ΛCDM rule out the occurrence of luminous matter with little or no dark matter? This is what these oddball galaxies, such as DF4 and DF2, appear to be.
I see little fundamental reason, in any context (whether dark matter is a gravitationally bound gas of unknown, weakly or non-interacting particles, or some field effect associated with gravitation), why luminous matter could not accumulate in denser regions outside of DM. A strict MOND theory, as a modification of general relativity at very small accelerations or large distances with mass-energy sources, is potentially ruled out if DF4 and DF2 do not have DM.
I posted last night one of Siegel's blog articles on dark matter. His posts are usually pretty well on the mark. It is at:
https://www.forbes.com/sites/startswithabang/2019/10/18/dark-matters-biggest-problem-might-simply-be-a-numerical-error/#68029e348979
and is based on the article:
https://arxiv.org/abs/1808.03088v2
Siegel makes a comparison with Fourier analysis. Consider a Fourier sum that, in the limit as the number of higher-frequency terms diverges, approximates a square wave. In this limit, in a purely mathematical situation, there are these odd overshoots that occur. The cusp problem with dark matter is similar: odd inhomogeneities that occur in ΛCDM when the numerics approach high-ℓ harmonic terms. The CMB has anisotropies that are expanded in a summation of polynomial terms called Legendre functions. These correspond to terms in the ΛCDM model of higher order that are increasingly nonlinear. This problem with cusps means ΛCDM is not able to duplicate dark matter on the scale of galaxies.
We might then ask whether this series is at all physical. A related question would be whether square wave pulses really exist at all. A square wave pulse has a perfectly vertical wave front that is not C^∞; it is continuous but not differentiable. The Fourier sequence, for a large but finite number of terms, begins to generate this overshoot. What physically really seems to exist is a finite cut-off of the Fourier summation approximating the square wave. The square wave is a sort of "fiction." We might ask whether this result with the cusp problem is similar. The cusp might be telling us a bit more than that this is just a problem with codes. There might really be some departure between what is ontological and what is being worked phenomenologically.
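Lawrence's square-wave analogy is easy to reproduce numerically; the partial Fourier sums overshoot the jump by roughly 9 percent no matter how many terms are kept (the Gibbs phenomenon). A minimal sketch:

import math

def square_wave_partial_sum(theta, n_terms):
    # Partial sum of the Fourier series of the square wave sign(sin(theta)):
    # (4/pi) * sum_{n=0}^{N-1} sin((2n+1)*theta) / (2n+1)
    return (4 / math.pi) * sum(
        math.sin((2 * n + 1) * theta) / (2 * n + 1) for n in range(n_terms)
    )

for n_terms in (10, 100, 1000):
    # The largest overshoot sits near theta = pi/(2*n_terms), so scan around it
    peak = max(square_wave_partial_sum(k * math.pi / (40 * n_terms), n_terms)
               for k in range(1, 200))
    print(f"{n_terms:5d} terms: peak = {peak:.3f}")   # stays near 1.18, never 1.0

Adding more terms does not shrink the overshoot, it only narrows it, which is the sense in which the truncated sum and the idealized limit really are different objects.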
Siegel's article is typical of his "science" writing. He is a garden-variety mathematicist whose inability to think about the cosmos without the crutch of the failed LCDM model is self-evident. Here is the tell:
Unsurprisingly, the only problems for dark matter in cosmology occur on cosmically small scales: far into the non-linear regime of evolution.
So, in the hermetically sealed realm of the mathematicist, the complete absence of any empirical evidence for the existence of dark matter, on any scale, does not in any way constitute a problem for the dark matter hypothesis? That's pathetic, not-to-mention unscientific, pap.
Siegel's Forbes articles are reliably good demonstrations of just how far off the scientific rails modern theoretical physics has plunged, as it careens wildly into the thickets of metaphysics, in pursuit of empirically-baseless mathematical speculations that have no compelling physical motivation other than to sustain unfounded beliefs, and the "science" careers those beliefs, in turn, sustain. Modern theoretical physics is a mess.
@Bud Rap: Would you please either put up or shut up? (I hope this is acceptable to Sabine; if not, I'll post again with something more anodyne).
As Lawrence Crowell notes, the Forbes article is based on arXiv:1808.03088v2, which actually addresses the point of this blogpost (averaging non-linear equations) in some way; your comment seems to completely ignore this.
In your perfect world, do scientists never encounter problems with averaging non-linear equations? If they do encounter such problems, how do you suggest they deal with them?
Going off-topic, in your perfect world, what do Sabine, Lawrence, Baushev, and Pilipenko actually do when they get to their offices? You know, instead of puzzling over GR's equations, working on non-Abelian groups, or investigating simulations?
Modern theoretical physics may indeed be a "mess" ... but so far it seems (to me at least) that you have offered exactly nothing to replace it.
@bud rap and JeanTate: Siegel's article is admittedly favorable to ΛCDM. For all we know it might just be right after the dust settles. It is also likely to at least be partially correct.
If you look at what I wrote, especially bud, the comparison with the square wave is interesting. The square wave is continuous but not everywhere differentiable. At 0, π, and 2π ~ 0 the function flips between 1 and -1. It is not differentiable there, and technically not continuous either. The Fourier series sum_n sin[(2n+1)θ]/(2n+1) (times 4/π) gives the square wave as the sum index → ∞, but with this overshoot. These Fourier terms are perfectly differentiable, so how can a sum of completely differentiable and continuous functions give a function that is not everywhere differentiable and continuous?
So which is more physical? The square wave pulse with its infinitely steep wave front, or the series that has these funny overshoots and is, physically, further truncated? The issue might be similar here. The artifact from the numerics might actually be telling us something physical. Now this is far more difficult and complex than the Fourier analysis of a square wave. Whether these annoying cusps in ΛCDM are real or not is a matter for some phenomenological research. If there is some plausible reason to consider them real, then ΛCDM may have to be generalized in some way.
Dark matter, a term that will stick for some time, could either be a straightforward sort of particle, or it could be some strange aspect of spacetime, maybe closely related to dark energy as some perturbation or local field effect. It could also be that this field effect, as a source of gravitation, could generate particles as well, which might be weakly interacting or not interacting at all except through gravitation. Hossenfelder has in fact proposed that dark matter might have different phases where it can be either of these. Verlinde has suggested DM could be some field effect of spacetime. If spacetime is built of quantum entanglements, the event horizon is the upper bound on entropy, but quantum states may sit on an extremal quantum surface. The elementary case of the relative entropy between a state ρ on the extremal quantum surface and the maximally mixed state ρ*, which has probabilities p = 1/N in a microcanonical setting, is S(ρ||ρ*) = log N - S(ρ). Much the same may happen here, and dark matter may be some deviation in the structure of spacetime that reflects this relative entropy.
There is lots of fascinating physics to consider, and grist for the mill. We also need to keep the standard DM theory of weakly interacting particles in mind, for it is under tests against this theory that alternatives have a chance of success or failure.
Jean Tate,
Just for the record, the title of this blog post is Dark matter nightmare: What if we are just using the wrong equations?, not Averaging Non-Linear Equations, which is at best a peripheral issue to the central question. If you can't even grasp the scope of the topic at hand, why should I consider your dyspeptic rant to have any merit? Follow your own advice: put up or...
@ bud rap
Just for the record, the title of this blog post is Dark matter nightmare: What if we are just using the wrong equations?, not Averaging Non-Linear Equations, which is at best a peripheral issue...
I feel I ought to point out that Sabine herself says, elsewhere in the comments, "This blog is not a pin board for self-made dark matter theories. If you have something to say about averaging non-linear equations, you are welcome. If you want to discuss anything else, please find some other place."
@Lawrence Crowell: I like your square wave analogy (toy? cartoon?)!
"Dark matter, a term that will stick for some time, could either be ...": I think it's also important to keep in mind that DM may turn out to be heterogeneous: a little bit (in some places) dark baryonic matter (my fave is a bottom heavy IMF in regions of very low metallicity, microlensing results would miss most of this); in others, misunderstood GR non-linearities; and perhaps some mix of WIMPs and axions; sprinkled with PBHs; with super-fluid for stock ... ;-)
"Einstein’s theory of general relativity is an extremely well-confirmed theory. Countless experiments have shown that its predictions for our solar system agree with observation to utmost accuracy."
In our solar system. The problems occur at far distances, in small gravitational fields, in the MOND regime.
GR is made for spacetime near masses. At far distances, the influence of the central masses is assumed to be negligible, and the curvature at far distances is assumed to approach zero. BUT could it be possible that the curvature CROSSES zero and becomes negative at far distances? That would increase the gravitational forces relative to a curvature that merely approaches zero. ???
About Cooperstock's work: both drl and bud rap mention his work, and both cite astro-ph/0507619.
I feel it's a pity neither seems to have dug deeper. For example, that initial paper generated a flurry of attention, most of it negative in the sense that Cooperstock & Tieu's model was convincingly shown to be "unphysical". In particular, Fuchs&Phelps (https://arxiv.org/abs/astro-ph/0604022) show that it fails a key observational test, badly.
Cooperstock - sometimes with Tieu, sometimes not - wrote several subsequent papers, modifying their original model, and addressing various comments, questions, and criticisms. These later papers have fewer and fewer citations (the last has just one). And - it seems to me - those citations which are criticisms have become more and more general, which may partially explain why the many hundreds of people with relevant expertise have largely not bothered to respond. An example: https://ui.adsabs.harvard.edu/abs/2015IJMPD..2450065R/abstract ("On claims that general relativity differs from Newtonian physics for self-gravitating dusts in the low velocity, weak field limit", by D. R. Rowland).
So, it seems that this particular approach is a dead-end, at least with respect to this blogpost (about averaging non-linear equations).
But perhaps either of you - drl, bud rap - would like to have a go yourself (or -selves)? Would you care to work on Cooperstock and Tieu's approach, to see if you can come up with a model that actually works?
@JeanTate, I didn't need to "dig deeper" because I followed the events in real time. There was only one substantial criticism, by a graduate student, and it was made from a point of view of a superficial analogy with electrodynamics that does not obtain. Essentially he misidentified the role of the connection coefficients in GR - he confused gravitational fields and potentials, and so misunderstood the role of discontinuities in the matter source distribution. This was an almost embarrassingly elementary blunder, and was easily brushed aside by Cooperstock in several rebuttals and follow-on work. After that one engagement with the public, the work was forgotten and ignored.
Thank you for the argumentum ab auctoritate and the repeated passive-aggressive advice to "have a go at it" - you should make this your posting signature. The work of Cooperstock is not a dead end, and I am indeed "having a go at it".
-drl
@drl: "the repeated passive-aggressive advice " My apologies. I have had difficulty judging mood etc, pretty much my whole life. Would "please either put up or shut up" be more appropriate, do you think?
"I didn't need to "dig deeper" because I followed the events in real time. " and "the argumentum ab auctoritate": There may be some others who read these comments who also had the opportunity to follow the events in real time. However, the rest have only the published papers to rely on, as primary sources, wouldn't you say?
"There was only one substantial criticism, by a graduate student": to astro-ph/0507619? Would that be Korzynski (arXiv:astro-ph/0508377)? Or Vogt&Letilier (arXiv:astro-ph/0510750 and arXiv:astro-ph/0512553)? Or Cross (arXiv:astro-ph/0601191)? Or Garfinkle (arXiv:gr-qc/0511082)? Or Menzies&Mathews (arXiv:astro-ph/0701019)? Or ... (there are many more)?
In particular, I'm curious to know why the Fuchs&Phelps paper (arXiv:astro-ph/0604022) doesn't meet your criteria for being a "substantial criticism", especially as it shows inconsistency with observation.
"After that one engagement with the public, the work was forgotten and ignored." I guess you are referring to the work by that (unnamed) "graduate student"; are you?
"The work of Cooperstock is not a dead end, and I am indeed "having a go at it"." Glad to hear it. I look forward to reading your paper(s) when it/they are published (in arXiv?).
astro-ph/0604022 - non-Keplerian rotation curve not critically dependent on such details. The model was not tuned; it was "assume a spherical cow" simple. Cooperstock left it for later to refine the model. Certainly local observations of our place in a galactic arm are irrelevant (there are no arms in the original model, just a disk of thin matter).
astro-ph/0508377 Conclusively refuted. Yes, this was the grad student.
astro-ph/0510750 Follow-on to the above, same error, can be ignored
astro-ph/0512553 Likewise
astro-ph/0601191 Likewise
gr-qc/0511082 Seems like word salad to me. The usual misconceptions about GR and its ontology. The author displays a very poor understanding of the subject by heading instantly, in Eq. (1), to the PN approximation - indeed he is a specialist in numerical simulation, so this is his game, and he is not a theorist of long-standing tenure as was Cooperstock, who was deeply involved in the foundations of GR for many decades. The author does not understand the essential nature of non-linear systems. Dismissed.
astro-ph/0701019 This seems to be a question posted as a paper. I'll look at it more closely because, lo! an actual question.
You'll have to try harder than this. :) Have a go at it?
-drl
PS: I consider the matter of the galactic disk to be conclusively solved; the most important thing to do now is to demonstrate that the same idea works for the anomalous radial velocity profiles in globular clusters, where dark matter isn't even an option. See for example MNRAS 428, 3196–3205 (2013).
-drl
@drl: so it seems the actual history - per papers, citations, references, etc - is not quite as you characterized it ("There was only one substantial criticism, by a graduate student")! Instead of one person, there were at least four.
I note that neither the original Cooperstock&Tieu paper (astro-ph/0507619, submitted to ApJ) nor one of the subsequent ones (arXiv:astro-ph/0512048, apparently not submitted to any such journal) was actually published in a peer-reviewed journal. Kinda odd, don't you think, for "a theorist of long-standing tenure as was Cooperstock, who was deeply involved in the foundations of GR for many decades"?
Also kinda odd to me is that you cited the original (astro-ph/0507619). As many people quickly discovered, this contains a singular thin disk. This unphysical feature was addressed in at least one subsequent paper. But really strange that a) both you and bud rap cited astro-ph/0507619 as the definitive paper, and b) neither Cooperstock nor Tieu apparently thought this (obvious) flaw in their model worth noting, before submitting the paper to ApJ! :O
"You'll have to try harder than this. :) Have a go at it?" Sure. Next: some more papers critical of the (original or modified) Cooperstock model. Later: some responses to yours on specific 'critical' papers.
@drl: As promised, more 'critical papers' (note: I am not attempting to provide a complete list, merely some of the - many - examples): Zingg, Aste, & Trautmann (arXiv:astro-ph/0608299); another Vogt&Letelier (arXiv:astro-ph/0611428); another Menzies&Mathews (arXiv:gr-qc/0604092); another Korzynski (Journal of Physics A: Mathematical and Theoretical, Volume 40, Issue 25, pp. 7087-7092 (2007)); not really a criticism, Coimbra-Araújo & Letelier (arXiv:astro-ph/0703466); Rakic&Schwarz (arXiv:0811.1478); Ramos-Caro, Agón&Pedraz (arXiv:1206.5804); and Deledicque (arXiv:1903.10061).
I note that Cooperstock&Tieu published a paper in 2007 (arXiv:astro-ph/0610370) that made it into a peer-reviewed journal (International Journal of Modern Physics A, vol. 22, issue 13, pp. 2293-2325). It's a pity that you did not cite this paper. It's interesting, especially in light of your comment on astro-ph/0604022 ("Certainly local observations of our place in a galactic arm are irrelevant (there are no arms in the original model, just a disk of thin matter.)") ... the existence of arms did not stop C&T from applying their "no arms" model to galaxies with obvious spiral arms! :)
Jean Tate,
The Fuchs&Phelps paper is disingenuous nonsense. You can't use the Keplerian or Newtonian method to predict the mass densities in the solar neighborhood either. Cooperstock's is a GR based model that successfully replaces those failed solar system derived oversimplifications for determining the expected rotation curves of galaxies. That's its only purpose; it can't predict mass densities in the solar neighborhood and it can't shine your shoes either. So what?
@budrap Precisely - it's a proof of concept paper to show that aggregate behavior in GR is not reducible to a simple linearized model, that residual effects persist even in the case of thin matter at non-relativistic speeds. Once that is accepted, you can move on to more complex models. The same proof of concept needs to be done for globular clusters, then for galaxy clusters.
It's hard to remember that full GR involving a complex distribution of energy-momentum over an extended region is essentially untested. What is known with complete definiteness is the behavior of test particles in the field of a large central attractor. Cooperstock's work was the first practical use I had encountered of GR in a setting requiring all its features.
-drl
Jean Tate, Deledicque (arXiv:1903.10061) tries to savage the idea from Cooperstock while being politically neutral.
@ JeanTate
Thanks for the Fuchs & Phleps link [note the spelling: "Phleps", not "Phelps"]; kind of neat to see some of Stephanie's early work (from around the time we first met).
You're right: Fuchs & Phleps' demonstration is pretty devastating. I was particularly struck by the vertical profiles in their Fig.1. The estimated profile for the actual Milky Way looks perfectly normal, similar to the vertical profiles of dozens of other spiral galaxies that are seen edge-on. (And, of course, perfectly consistent with all the studies that find an approximately exponential profile with a scale height a few hundred parsecs). The predicted profile of Cooperstock & Tieu's model profile is just amazingly off.
@ drl: "The model was not tuned" -- of course it was. As Fuchs & Phleps note, "The coefficients kn and Cn have been determined by CT05 by fitting the corresponding model rotation curve to the observed rotation curve of the Milky Way and are given in their Table 1." CT05 also perform fits to rotation curves for three other spiral galaxies -- again, "tuning" their model.
And this has absolutely nothing to do with "our place in a galactic arm", since the vertical distribution of stars and gas varies only minimally between arms and inter-arm regions. If disks in general looked like C&T's model, this would be glaringly obvious and would have been seen decades ago in studies of edge-on spirals and lenticular galaxies.
@ bud rap
Cooperstock's is a GR based model that successfully replaces those failed solar system derived oversimplifications for determining the expected rotation curves of galaxies. That's its only purpose; it can't predict mass densities in the solar neighborhood
It's actually kind of amusing that you make this claim, given your ranting elsewhere about "empirically-baseless mathematical speculations". In other words, you're claiming that Cooperstock's model has no physical predictions that can be tested. Which means it's not a scientific model at all.
But of course this is completely wrong, since Cooperstock & Tieu explicitly use their model to make exactly the sort of predictions you claim aren't allowed. Their Figures 2 through 7 all contain radial mass density profiles derived from their best-fitting models, vertical density profiles (evaluated at the center of the galaxy, not at e.g. the Solar radius), or both. And on page 17, they compare, in a hand-waving way, their derived radial density profiles with published stellar-luminosity profiles (from Kent 1987, which is awfully old...) for the three external spiral galaxies whose rotation curves they modeled, and speculate that "The predicted optical luminosity fall-off for the Milky Way is at a radius of 19-21 Kpc based upon the density threshold that we have determined."
So Cooperstock & Tieu clearly think you can predict baryonic/stellar mass densities at various locations within the galaxy from their model. The only thing Fuchs & Phleps did was to take them at their word, and look at what their model predicted for the region of the galaxy near the Sun, where the data is the best.
(Note in case anyone is being confused by the term "solar neighborhood" and thinks this somehow applies to the immediate vicinity of the Sun: the term generally refers to a region of several hundred parsecs, even a kiloparsec or more, in radius. The vertical profile in the right-hand panel of Fuchs & Phleps' Figure 1 extends to over 3 kpc in height.)
@bud rap, @drl, @Daniel de França MTd2 (and @Lawrence Crowell above): Thanks for your responses. I shall be offline for several days, possibly a week. I hope to be able to post then.
That 1903.10061 is an amusing paper. It references the grad student paper with the elementary blunder, but not the work itself against which the blunder was enacted. What could be more telling of the state of research into this topic? And why would "savaging" a novel idea by a deeply respected veteran researcher of many decades be something you would attempt? Everybody's a tough guy! :) If you want to read something really interesting, look up the discussions of gravitational waves and radiation involving Cooperstock, Infeld, Feynman etc. Those were the days when people discussed real issues with grown-up attitudes.
-drl
@PeterErwin I do not understand why I cannot get you to see the point of his work. It is *not* intended to be a realistic model of a galaxy. It is "assume a spherical cow" type first cut work. The point is to show that the grossest feature of galactic motion - the non-Keplerian rotation curve, which is the strongest evidence for the dark matter paradigm and without which it cannot survive - can be explained within the framework of general relativity by retaining enough non-linearity that its essential *physical* aspects, relating to the behavior of aggregate matter distributed over a large volume, are not ignored. You cannot argue against this methodology with specifics about particular galaxies! You are totally missing the point. And why are you so defensive? This should be amazing news to everyone with genuine interest in galaxies! No, instead, it is perceived as a threat, which it may very well be in the social context of modern academic science.
-drl
@ drl
The point of a spherical-cow model is to start off with something really simple, leaving out additional details and complications which may or may not be important (you don't know yet) -- and then see how well or poorly it compares with reality. If it sort of works, you can see if physically plausible modifications make the agreement with data better, or worse. If it really doesn't work, then you abandon it and try something else, hopefully having learned something in the meantime.
There are two approaches C&T could have taken:
1) Start with some reasonable approximation of the known baryon distribution of a galaxy (such as the Milky Way), and then see what kind of rotation curve that would predict, given their approach to using GR. If the agreement is reasonable, you can generate additional predictions (what it says about the motions of globular clusters and satellite galaxies, for example) and also try to refine it by using more accurate mass models as input, to see if that makes the agreement with the data better or not.
2) [What C&T actually did] Pick a particular set of data, such as the planar rotation curve for the Milky Way, and determine what kind of mass distribution is necessary to produce that data. If the agreement is reasonable, then you check to see how well or poorly your predicted mass distribution compares with other relevant data -- in this case, the actual distribution of baryonic mass. (And you can generate predictions for other data, such as globular cluster kinematics, etc.) C&T understand this (though they do little to actually follow through on it), since they say (p.7 of the 0507619 preprint):
"... the simpler way to proceed in galactic modeling is to first find the required generating potential Φ and from this, derive an appropriate function N for the galaxy that is being analyzed. With N found, [Equation] (12) yields the density distribution. If this is in accord with observations, the efficacy of the approach is established."
C&T, unfortunately, kind of skipped the second part of this, except insofar as they find the MW mass distribution is flattened and made hand-waving textual comparisons with surface-brightness profiles from the 1980s for their three non-MW galaxies. But they at least have some awareness of the fact that models are meant to be compared with reality outside of just the cherry-picked data subset the model was fitted to, which is a point you seem to be confused about.
Fuchs & Phleps went ahead and tested the model by comparing its predictions to actual data for our galaxy. In other words, Fuchs & Phleps did what scientists are supposed to do with models, spherical-cow or otherwise.
@ drl
Let me explain, from an observer's point of view, why I'm not that impressed by Cooperstock & Tieu's paper: it doesn't get rid of the dark matter problem, because their models require significant dark matter in galaxies in order to fit the rotation curves.
This is true for every one of the four galaxies they model.
They note that their best-fitting model for the Milky Way's rotation curve predicts a total integrated mass of 21 x 10^10 solar masses. The best current measurements for the baryonic mass of the Milky Way are 5 x 10^10 solar masses in stars and stellar remnants, and about 10^10 solar masses in gas (e.g., Flynn 2006; arXiv:astro-ph/0608193). So their Milky Way model requires about 3.5 times as much mass as is known to be present in baryonic form.
For NGC 3031 and NGC 7331, they predict total masses of 1.1 x 10^11 and 2.6 x 10^11 solar masses, respectively. They note that these are less than Kent's (1987) total masses -- but the latter include dark matter. If you compare C&T's results with the baryonic mass in Kent's models for these galaxies -- 7 x 10^10 and 1.2 x 10^11 solar masses, respectively -- then they require about two to three times as much mass.
The worst case is NGC 3198, for which they require a total mass of 1.0 x 10^11 solar masses. The baryonic part of Kent's best-fitting model is only 9.3 x 10^9 solar masses -- more than a factor of ten smaller than C&T's mass.
A more modern, non-dynamical estimate of NGC 3198's total baryonic mass from Stark et al. (2009; arXiv:0905.4528), combining separate measurements of stars and gas, would be 2.8 x 10^10 solar masses, assuming a distance of 14.5 Mpc. This would be 1.1 x 10^10 solar masses using Kent's assumed distance of 9.2 Mpc, so his estimate of 9.3 x 10^9 was really pretty good. In this case, C&T's mass estimate is still nine times larger than the total baryonic mass in this galaxy.
So when they apply their model to fit the rotation curves of four spiral galaxies, they end up needing two to nine times more mass than is present in the visible baryons -- in other words, they need lots of dark matter.
Upon reflection, perhaps this is at some level "amazing news to everyone with genuine interest in galaxies" -- it demonstrates that even when you account for nonlinearities in GR in the way that they do, you still need significant amounts of dark matter to explain (disk) galaxy rotation curves!
Peter Erwin,
Thank you for offering a substantive critique, which I don't consider F&P to be. What is substantive is your pointing out that the C&T model still requires some additional "missing matter". That is meaningful and valuable information, but not, I suspect, in the way you think it is.
What is interesting about this, is that the C&T approach seems to mimic observations such as the Radial Acceleration Relation wherein the "missing matter", if any, tracks the baryonic distribution. I believe that is the case here, isn't it?
If so, and if your interest lies in the advancement of scientific understanding, rather than merely fending off any alternatives to the empirically-failed standard LCDM model, you might consider this an intriguing and plausible, though incomplete, step in the right direction.
It appears to me however, on the basis of what you've written here, that you are arbitrarily dismissing this GR approach, rather than seeing it as opening an avenue for further elaboration. Am I correct in that assessment, or do I misread you?
bud rap,
You dismiss the F&P paper as "disingenuous nonsense" and not a "substantive critique" of C&T.
It is true that the C&T model accurately predicts the RAR under certain conditions - but the authors fail to consider whether the model has any relevance to other observed factors. This was picked up by F&P, who demonstrated conclusively that the C&T model is singular and cannot be applied, as it stands, in a general case (astro-ph/0604022: "model of CT05 for the Milky Way, which was so constructed that it gives an excellent fit of the observed rotation curve, has singularly failed to reproduce the independent observations of the local Galactic mass density and its vertical distribution. This one counter example casts, in our view, severe doubts on the viability of Cooperstock & Tieu's theory of the dynamics of galactic disks in general").
F&P agree with you to a degree "the C&T approach seems to mimic observations such as the Radial Acceleration Relation". But there wouldn't be any point to the C&T paper if this wasn't the case.
It is very difficult to argue that pointing out (with concrete examples) that a theory does not match accepted observation is "disingenuous". To do so requires that the observations can be proven to be falsely presented, which does not seem to be the case in this instance.
Now it may be the case that the work of C&T is "an intriguing and plausible, though incomplete, step in the right direction" as you claim. However, it is clear that in its current form it fails empirical tests (and may be technically in error, see e.g. astro-ph/0701019: "Here, the calculation of tangential velocity is questioned"). Raising these issues is simply subjecting the model to scientific scrutiny, which is, I hope we can all agree, necessary for the "advancement of scientific understanding".
@ bud rap
What is interesting about this, is that the C&T approach seems to mimic observations such as the Radial Acceleration Relation wherein the "missing matter", if any, tracks the baryonic distribution. I believe that is the case here, isn't it?
No, it isn't. The RAR, which is basically a reformulation of MOND, is about starting with the known baryonic mass distribution of a galaxy and asking how you have to alter the Newtonian acceleration produced by that mass distribution in order to explain the observed velocities.
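(For concreteness: the RAR is usually summarized by a single function mapping the Newtonian acceleration computed from the baryons alone onto the observed centripetal acceleration. A rough sketch of the fitting function, with the acceleration scale as quoted in the McGaugh et al. paper cited further down this thread; treat it as illustrative only:

    # Sketch of the radial-acceleration-relation fitting function,
    # g_obs = g_bar / (1 - exp(-sqrt(g_bar / g_dagger))).
    import numpy as np

    G_DAGGER = 1.2e-10  # m/s^2, fitted acceleration scale

    def g_observed(g_baryonic):
        """Observed acceleration implied by the RAR, given the Newtonian
        acceleration computed from the baryonic mass alone."""
        g_baryonic = np.asarray(g_baryonic, dtype=float)
        return g_baryonic / (1.0 - np.exp(-np.sqrt(g_baryonic / G_DAGGER)))

    # High accelerations (g_bar >> g_dagger): g_obs ~ g_bar (Newtonian limit).
    # Low accelerations: g_obs ~ sqrt(g_bar * g_dagger), i.e. the "missing matter"
    # tracks the baryons by construction.
    print(g_observed([1e-9, 1e-12]))

Note the direction of inference here, which is the opposite of C&T's.)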
C&T's approach is to start with just the velocity data and deduce (by fitting the velocities) a mass distribution, which they then (unfortunately) fail to compare with the baryonic mass distribution. The extent of their actual comparison is to compute the radial mass-density profile (for z = 0) and the vertical mass-density profile (at r = 0), note that it implies something "flattened" -- but not by how much! -- and then declare victory. (Well, they say, "We see that the distribution is a flattened disk with good correlation with the observed density data for the Milky Way", but this is a nonsense statement, since they don't actually compare their model distributions with any MW data.)
What Fuchs & Phleps show is that the C&T mass distribution is significantly different from the baryonic mass distribution of the Milky Way, not that it "tracks" the baryonic distribution. The mass distribution of C&T's best-fitting model is much thicker than the actual baryonic mass distribution. This means that the "missing matter" does not track the baryonic distribution. (By "much thicker" I mean this: from Fig.1 of Fuchs & Phleps, the exponential scale height of the real data [right-hand panel] is ~ 400 parsecs, while the exponential scale height of the C&T model is about 6000 parsecs: ten times thicker than the real galaxy at the Milky Way's radius, while at the same time being about six times less dense at the disk midplane.)
C&T's result is, arguably, roughly consistent with the main dark-matter paradigm (for disk galaxies, anyway): that the thin, disky distribution of baryonic matter is embedded within a more spherical distribution of dark matter, both of which contribute to the observed velocities. The sum of the baryonic and dark-matter components will crudely resemble a thickened disk (transitioning to mostly spherical at large radii, mostly beyond the range of the velocity data used by C&T). So it's plausible that their derived mass distribution is an (imperfect) attempt to account for both the thin, disky, baryonic matter and the more spheroidal distribution of dark matter. (That's if you generously assume none of the various theoretical criticisms of their work are valid.)
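To make the "both components contribute to the observed velocities" point concrete, here is a minimal sketch of the standard decomposition, with purely illustrative parameter values (a razor-thin exponential disk via the Freeman formula plus an NFW halo, added in quadrature; this is the textbook recipe, not anything from C&T):

    # Standard picture: thin baryonic disk + rounder dark halo, both contributing
    # to the circular speed; the contributions add in quadrature.
    import numpy as np
    from scipy.special import i0, i1, k0, k1

    G = 4.30091e-6  # kpc (km/s)^2 / Msun

    def v2_exponential_disk(r, sigma0, r_d):
        """Freeman (1970) circular speed squared of a razor-thin exponential disk
        with central surface density sigma0 [Msun/kpc^2] and scale length r_d [kpc]."""
        y = r / (2.0 * r_d)
        return 4.0 * np.pi * G * sigma0 * r_d * y**2 * (i0(y) * k0(y) - i1(y) * k1(y))

    def v2_nfw_halo(r, rho_s, r_s):
        """Circular speed squared of an NFW halo with characteristic density
        rho_s [Msun/kpc^3] and scale radius r_s [kpc]."""
        x = r / r_s
        m_enc = 4.0 * np.pi * rho_s * r_s**3 * (np.log(1.0 + x) - x / (1.0 + x))
        return G * m_enc / r

    r = np.linspace(1.0, 30.0, 60)  # kpc
    # Illustrative parameters only, not a fit to any real galaxy:
    v_total = np.sqrt(v2_exponential_disk(r, 8e8, 3.0) + v2_nfw_halo(r, 1e7, 15.0))

A single fitted mass distribution that tries to reproduce v_total will naturally come out thicker and rounder than the baryonic disk alone, which is roughly what C&T's derived distribution looks like.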
Peter Erwin,
The RAR, which is basically a reformulation of MOND...
You are misinformed about the RAR; it is an empirical relationship:
https://arxiv.org/abs/1609.05917
RGT and Peter Erwin,
Both of your criticisms seem deliberately misleading. C&T are not attempting to model galactic disk(s). They are using GR to model galactic rotation curves using an idealized "uniformly rotating fluid without pressure and symmetric about its axis of rotation" with a continuous distribution. Nothing in the original or subsequent papers suggests they are attempting to produce a fine-grained model of the disk density profile.
The abstract is quite clear about the authors' intent:
"A galaxy is modeled as a stationary axially symmetric pressure-free fluid in general relativity... It is shown that the rotation curves for the Milky Way, NGC 3031, NGC 3198 and NGC 7331 are consistent with the mass density distributions of the visible matter concentrated in flattened disks."
F&P amounts to criticizing a method for estimating galactic rotation curves, for not accurately predicting the mass distribution in the local solar neighborhood. I don't believe you can use either the Keplerian or Newtonian methods, commonly used for deriving the expected galactic rotation curves, to estimate the local mass distribution either. Why this special requirement for the GR approach?
This is @ everyone who's commented on the Cooperstock idea ...
1) I think it's a pity that dlr and bud rap cited the first paper (astro-ph/0507619) rather than the 2006 one (astro-ph/0610370, CT06). The 2006 paper is, IMNSHO, far better.
2) I echo Peter Erwin (Kent 1987?!?) but go further: why choose only three of the 16 galaxies? There may be a good reason, but CT06 don't give any. And why not choose more recent data? It's not like there were ~no papers presenting galaxy rotation curves between 1987 and ~2004!
3) As I understand it, the CT06 model has 20 "free" parameter values, yet two of the four galaxy rotation curves have barely more than this number of data points!
4) I can't seem to find the source of the Milky Way data; can someone point me to it please? I'm quite curious because the Milky Way has a prominent bar, while the other three galaxies do not (CT06 do not attempt to model bars).
5) No "error bars", "confidence intervals", etc.
6) No mention of bulges! There are indeed a tiny handful of spiral galaxies which seem to lack bulges, but the four galaxies in CT06 certainly have bulges.
7) It seems ~no one has done any follow-up work, no papers on developing the model, none on matches with observation. It's coming on 15 years now. Do you, drl or bud rap, know of any significant subsequent work?
8) In particular, there is now quite a lot of data on rotation curves waaay beyond the historical (optical) edge, especially HI data.
9) From CT06: "it is our request to our astronomical colleagues to kindly provide us with the data for rotation curves in planes of different z values." This is now becoming available by the bucketful, for example from the various IFU surveys (e.g. MaNGA).
That'll do for now ... except that CT06 - at least in the arXiv version - refer to "B. Fuchs and S. Phelps" ;-)
@drl, bud rap, Peter Erwin (PE), RGT: FWIW, I agree with PE and RGT, the Fuchs&Phleps paper (arXiv:astro-ph/0604022, FP06) is pretty devastating, in terms of showing quite convincingly that what's presented in CT06 is not consistent with good observational data.
CT06 address FP06 thusly:
"Even if one were to assume a logical basis for making their comparison between local observed data and our globally derived very approximate data, it should be noted that the vertical distribution of observed stars in the local galactic disk has a sharp peak [30]."
"However, there will naturally be some variation in the location in z for these peaks. Due to the rapid decline in density, there will be large local variations in density and therefore the criticisms in [29 - this is FP06] about our distributions are further seen to be inappropriate."
I am quite puzzled by both.
In the first, I think CT06 may have misunderstood [30] ... where's the "sharp peak"?
In the second, AFAIK there are no such "large local variations in density" reported in the literature (at least, not up to ~2006), and CT06 do not reference any actual observations.
In any case, almost 14 years have passed, and a great wealth of high-quality, highly pertinent astronomical data is now available to test the models in CT06 (and refinements). Wouldn't it be a good idea for someone very familiar with CT06 to write an update? An update which includes tests of consistency with observational data?
@ Jean Tate
1) I think it's a pity that dlr and bud rap cited the first paper (astro-ph/0507619) rather than the 2006 one (astro-ph/0610370, CT06). The 2006 paper is, IMNSHO, far better.
Figure 3 of that version makes it really clear that their mass model doesn't resemble the baryon distribution of the Milky Way. That figure shows the mass distribution of a disk embedded within a massive, spheroidal halo -- much closer to the dark-matter scenario of a baryonic disk embedded within a rounder, dark-matter halo.
Really, I'm starting to get a strong impression that if Cooperstock & Tieu's approach is valid, then it just confirms the need for dark matter of some kind.
2) I echo Peter Erwin (Kent 1987?!?) but go further: why choose only three of the 16 galaxies? ...
Yes, it's odd that they don't provide any justification at all for using just those three galaxies.
4) I can't seem to find the source of the Milky Way data; can someone point me to it please? I'm quite curious because the Milky Way has a prominent bar, while the other three galaxies do not (CT06 do not attempt to model bars).
You're right, they don't seem to say anything about the source of the data they're fitting. This probably doesn't invalidate their results by itself, but it's a signature of a sloppy and careless approach toward data (and a failure to credit other researchers' work).
As for the issue of bars: I don't actually think this is relevant for this level of analysis. The Milky Way's bar extends to only 5 kpc in semi-major axis, so the effect of its non-axisymmetry is probably pretty minimal beyond 10 kpc or so. (Also, NGC 3031 is probably double-barred rather than unbarred -- see Gutierrez et al. 2011 [arXiv:1108.3662] -- but that's again probably not relevant at this level of analysis and fitting.) Most dark-matter-fitting models don't worry about bars, either.
You had me going there, Sabine! Before following Peter's link I was utterly baffled why Frisians in Dokkum, Netherlands, were being so definitive about the distribution of dark matter... :)
Reading -- I do not, please note, write "understanding" . . . the above comments on Dark Matter enables me to appreciate why so many lay people are skeptical about the conclusions ("settled science") drawn by scientists who analyze climate change.
For many years now, the interested lay person was assured that Dark Matter just HAD to exist. Now, mathematicians and physicists with awesome skills seem to be hedging their bets and citing a possible methodological error as the New Normal. (It is as if some of the most skilled people on earth suddenly chanted together, "My Bad!")
Was there a Big Bang? Is the universe expanding -- and is it doing so at an accelerating rate? Does Dark Matter exist? Does Dark Energy exist? Does Suzy exist? Are there convincing reasons to believe in String Theory, the Multiverse, the Grand Inflator (or is that "Inquisitor?") and are umpty-ump billions of identical "me"s typing this comment in as many additional rooms of a slightly different shade of chartreuse?
As JFK self-criticized after the Bay of Pigs fiasco, "All my life my father warned me not to trust the experts!"
If the 90% of Reality (Dark Matter and Dark Energy together) we lay persons were assured existed turns out to be phantasms summoned out of the Vasty Deep simply because someone forgot to carry the "two," then what other "settled science" (I'm looking at YOU, Mr. Obama!) ought we ignore?
If the future of our planet were threatened by the existence of Dark Matter, then it would definitely be better to work on the assumption that it exists.
Doubt and challenge by all means. And if it is eventually proved beyond reasonable doubt that we got our sums wrong, then we would all breathe an immense sigh of relief and hand out Nobel Prizes to the doubters.
But use doubt as a reason to do nothing? That would not be good science would it?
It would be an enormous mistake to confuse operational questions over the application of general relativity to the cosmos with weather and climate forecasting. The latter is constructed from year after year of detailed empirical data, combined with computer modeling, with ongoing refinement of the model based on actual data. There is little such input to GR. The problems are not remotely similar. Saying that GR is like fluid flow is a statement about the mathematical structure of the theories, not the sort of problem they describe. I agree with you that one spectacle after another, in which people go on those dreck shows hosted by Kaku et al. and make outrageous claims that are shown again and again to be wrong-headed, brings down confidence in science as a whole and encourages skeptics who have ulterior motives.
-drl
I note (and am a bit disappointed) that almost nobody here has posted comments about the actual topic of Sabine's article: how can the local equations of General Relativity be averaged at various scales, depending on the observations to which we try to match our models?
Does anybody here know the various existing approaches, which are summarized in the paper by Ellis et al. referenced by Sabine at the end of her article?
Don't you think (especially Sabine) that Buchert's method for averaging the scalars over an inhomogeneous comoving spatial domain (https://arxiv.org/abs/gr-qc/9906015) and/or Wiltshire's timescape model (which uses Buchert's equations) may at least help to resolve the current tension between the measurements of the Hubble constant based on observations of the late universe (Riess et al., etc.) on one side, and the value estimated from observations of the early universe (Planck) with the help of the LambdaCDM model on the other?
Both of these approaches are uniquely based on General Relativity and would need no "new physics" to explain a departure from the Friedmannian models in the late (and lumpy) universe.
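For readers who haven't seen them, here is a rough sketch of the key relations from the Buchert paper linked above, as I understand them; here ⟨·⟩_D denotes the spatial average over a comoving domain D, a_D the effective volume scale factor, θ the expansion rate and σ the shear of the dust (please check the paper for the precise statements):

    % Averaged Raychaudhuri equation and averaged Hamiltonian constraint (irrotational dust):
    3\,\frac{\ddot a_D}{a_D} = -4\pi G\,\langle\varrho\rangle_D + Q_D ,
    \qquad
    3\left(\frac{\dot a_D}{a_D}\right)^{2} = 8\pi G\,\langle\varrho\rangle_D
        - \tfrac{1}{2}\langle\mathcal{R}\rangle_D - \tfrac{1}{2} Q_D ,
    % with the kinematical backreaction term
    Q_D \equiv \tfrac{2}{3}\,\big\langle(\theta - \langle\theta\rangle_D)^{2}\big\rangle_D
        - 2\,\langle\sigma^{2}\rangle_D .

The point relevant to the Hubble-tension question is that the backreaction Q_D and the averaged spatial curvature ⟨R⟩_D enter as extra source terms that a strictly homogeneous Friedmann model does not have, so the effective expansion history of an averaged lumpy universe need not be exactly Friedmannian.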
My try-outs have been exponential expansions like a = (GM/r^2) e^(2GM/(r c^2)) for the acceleration needed, in a GR-based spherically symmetric gravitational field, to maintain a given radius from the central sink of the field.
That helps with the singularity problem (it vanishes) and with the treatment of the fluctuation energy of the gravitational field when balancing it against the dark-mass parameter...