Perhaps the first thought experiment came from James Clerk Maxwell and is known today as Maxwell's demon. Maxwell used his thought experiment to ask whether one could beat the second law of thermodynamics and build a perpetual motion machine, from which an infinite amount of energy could be extracted.
Yes, we know that this is not possible, but Maxwell said, suppose you have two boxes of gas, one of high temperature and one of low temperature. If you bring them into contact with each other, the temperatures will reach equilibrium at a common temperature somewhere in the middle. In that process of reaching the equilibrium temperature, the system becomes more mixed up and entropy increases. And while that happens – while the gas mixes up – you can extract energy from the system. It “does work” as physicists say. But once the temperatures have equalized and are the same throughout the gas, you can no longer extract energy from the system. Entropy has become maximal and that’s the end of the story.
Maxwell's demon is a little omniscient being that sits at the connection between the two boxes, where there is a little door. Each time a fast atom comes from the left, the demon lets it through. But if a fast atom comes from the right, the demon closes the door. This way the number of fast atoms on one side will increase, which means that the temperature on that side goes up again and the entropy of the whole system goes down.
It seems like thermodynamics is broken, because we all know that entropy cannot decrease, right? So what gives? Well, the demon needs to have information about the motion of the atoms, otherwise it does not know when to open the door. This means, essentially, that the demon is itself a reservoir of low entropy. If you combine demon and gas, the second law holds and all is well. The interesting thing about Maxwell's demon is that it tells us entropy is somehow the opposite of information: you can use information to decrease entropy. Indeed, a miniature version of Maxwell's demon has since been realized experimentally.
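The demon's sorting can be sketched in a toy simulation. Everything below is an illustrative assumption rather than a physical model: the particle "energies", the threshold for "fast", and the door protocol are all invented for the sketch.

```python
import random

random.seed(1)

# Two boxes of gas, each a list of particle kinetic energies drawn from
# the same distribution, so both boxes start at the same "temperature".
left = [random.gauss(0, 1) ** 2 for _ in range(10_000)]
right = [random.gauss(0, 1) ** 2 for _ in range(10_000)]

def temperature(box):
    # Mean kinetic energy as a stand-in for temperature.
    return sum(box) / len(box)

THRESHOLD = 1.0  # the demon's (arbitrary) notion of a "fast" particle

# The demon inspects a random particle on each side of the door:
# fast ones may pass left -> right, slow ones may pass right -> left,
# and nothing else gets through.
for _ in range(50_000):
    i = random.randrange(len(left))
    if left[i] > THRESHOLD:
        right.append(left.pop(i))
    j = random.randrange(len(right))
    if right[j] < THRESHOLD:
        left.append(right.pop(j))

# A temperature difference appears without doing any work on the gas;
# the demon paid for it with information about each particle's speed.
assert temperature(right) > temperature(left)
```

The sorting itself costs no energy in this toy model, which is exactly the point: the demon's knowledge of each particle's speed is what plays the role of the low-entropy reservoir.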
But let us come back to Einstein. Einstein's best-known thought experiment is that he imagined what would happen in an elevator that's being pulled up. Einstein argued that there is no measurement you can do inside the elevator to find out whether the elevator is at rest in a gravitational field or is being pulled up with constant acceleration. This became Einstein's "equivalence principle", according to which the effects of gravitation in a small region of space-time are the same as the effects of acceleration in the absence of gravity. Converted into mathematical equations, this principle becomes the basis of general relativity.
Einstein also liked to imagine what it would be like to chase after photons, which was hugely important for his development of special relativity, and he spent a lot of time thinking about what it really means to measure time and distances.
But perhaps the most influential of his thought experiments was one that he came up with to illustrate that quantum mechanics must be wrong. In this thought experiment, he explored one of the most peculiar effects of quantum mechanics: entanglement. He did this together with Boris Podolsky and Nathan Rosen, so today this is known as the Einstein-Podolsky-Rosen, or just EPR, experiment.
How does it work? Entangled particles have some measurable property, for example spin, that is correlated between the particles even though the value for each single particle is not determined as long as the particles have not been measured. If you have a pair of particles, you can know, for example, that if one particle has spin up, then the other one has spin down, or the other way round, but you may still not know which is which. The consequence is that if one of these particles is measured, the state of the other one seems to change – instantaneously.
Einstein, Podolsky and Rosen suggested this experiment because Einstein believed this instantaneous 'spooky' action at a distance is nonsense. You see, Einstein had a problem with it because it seems to conflict with the speed-of-light limit in Special Relativity. We know today that this is not the case: quantum mechanics does not conflict with Special Relativity, because no useful information can be sent between entangled particles. But Einstein didn't know that. Today, the EPR experiment is no longer a thought experiment. It can be done, has been done, and we now know beyond doubt that quantum entanglement is real.
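The spin correlations described above can be reproduced numerically. A word of caution about the sketch below: the sampler secretly shares Alice's outcome with Bob's side, so it is not a local model and says nothing about Bell's theorem; it merely generates outcomes with the statistics quantum mechanics predicts for a singlet pair.

```python
import math
import random

random.seed(0)

def singlet_measure(theta_a, theta_b):
    """Sample one pair of spin results (+1/-1) for a singlet state,
    with the two analyzers set at angles theta_a and theta_b.
    Quantum mechanics predicts P(outcomes agree) = sin^2((theta_a - theta_b)/2)."""
    a = random.choice([+1, -1])  # Alice's result is perfectly random
    p_same = math.sin((theta_a - theta_b) / 2) ** 2
    b = a if random.random() < p_same else -a
    return a, b

def correlation(theta_a, theta_b, n=200_000):
    # Empirical estimate of E(a, b) = <A * B>.
    return sum(a * b for a, b in
               (singlet_measure(theta_a, theta_b) for _ in range(n))) / n

# Aligned analyzers: outcomes are perfectly anti-correlated,
# as in the EPR setup; one spin up means the other is down.
assert correlation(0.0, 0.0) == -1.0
# At relative angle pi/3 the predicted correlation is -cos(pi/3) = -0.5,
# reproduced here up to sampling noise.
assert abs(correlation(0.0, math.pi / 3) + 0.5) < 0.01
```

Each single outcome is a fair coin flip, which is why no useful information can be extracted on either side alone: the correlation only shows up when the two measurement records are compared.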
A thought experiment that still gives headaches to theoretical physicists today is the black hole information loss paradox. General relativity and quantum field theory are both extremely well established theories, but if you combine them, you find that black holes will evaporate. We cannot measure this for real, because the temperature of the radiation is too low, but it is measurable in principle.
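To see just how low that temperature is, one can plug numbers into the standard Hawking temperature formula for a Schwarzschild black hole, T = ħc³/(8πGMk_B). The choice of a solar-mass black hole below is just an example:

```python
import math

# Physical constants, SI units (CODATA values)
hbar = 1.054_571_817e-34  # reduced Planck constant, J*s
c = 2.997_924_58e8        # speed of light, m/s
G = 6.674_30e-11          # Newton's constant, m^3/(kg*s^2)
k_B = 1.380_649e-23       # Boltzmann constant, J/K
M_SUN = 1.988_47e30       # solar mass, kg

def hawking_temperature(mass_kg):
    """Hawking temperature of a Schwarzschild black hole, in kelvin."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

# For a solar-mass black hole this comes out around 6e-8 K,
# utterly swamped by the 2.7 K cosmic microwave background.
T = hawking_temperature(M_SUN)
```

Since the temperature scales as 1/M, astrophysical black holes are even colder than this, which is why the evaporation is unmeasurable in practice.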
However, if you do the calculation, which was first done by Stephen Hawking, it seems that black hole evaporation is not reversible; it destroys information for good. This, however, cannot happen in quantum field theory, and so we face a logical inconsistency when combining quantum theory with general relativity. This cannot be how nature works, so we must be making a mistake. But which?
There are many proposed solutions to the black hole information loss problem. Most of my colleagues believe that the inconsistency comes from using general relativity in a regime where it should no longer be used and that we need a quantum theory of gravity to resolve the problem. So far, however, physicists have not found a solution, or at least not one they can all agree on.
So, yes, thought experiments are a technique of investigation that physicists have used in the past and continue to use today. But we should not forget that eventually we need real experiments to test our theories.
Much though I love JCM, I don’t think he was the first with a thought experiment. For example, Newton did one imagining firing a cannonball at greater and greater velocity from a mountain, until it started to orbit.
Yes, good point.
Thought experiments go back further. Giordano Bruno argued against the idea that the solar system was the only one. His reasoning was that if the stars were on some solid sphere and one shot an arrow outwards and hit it, the shell could only stop the arrow if it could elastically stretch, no matter how slightly, into some region beyond. He then argued that space goes on endlessly and the stars were much like the sun.
Bruno got to experience what it would be like to be a human barbecue, as he was burned at the stake for this. This was around the time of Galileo, who was upsetting apple carts as it was; Bruno's ideas went beyond the theological pale.
Newton did present various thought demonstrations in the Principia, but those were all within the comfort zone of well-understood logical implications of the theory. It would be worthwhile to have a specific name for a thought demonstration that is specifically intended to be the harshest possible stress test. The current situation is that various types are lumped under the same name: 'thought experiment'.
In retrospect: historians of science describe Galileo's account of dropping two different masses from a tower as a thought experiment, for the purpose of demonstrating that the Aristotelian supposition that heavier things fall faster leads to a self-contradiction. Connect two objects with a loose string. Does that suddenly constitute a single object that is twice as heavy as the constituent objects? The Aristotelian doctrine is put to a stress test, and the stress test reveals a self-contradiction.
Aristotle was also quite keen on thought experiments. He asked what kind of motion an object could manage in a void. Then he gave several arguments that these weren't possible. Hence there could be no such thing as a void.
It's the same reasoning that Newton reached for when he admitted that his notion of universal gravity as action at a distance was physically/philosophically dubious but he wasn't able to come up with a mechanism.
This is the problem that Einstein solved by showing that the transmitting medium was spacetime itself.
Aristotle lucked out in the case of gravity as gravitational mass is inertial mass. Had he been thinking of charged particles then he wouldn't have been wrong. He also did say that objects have a motion natural to themselves. He might not have identified it as uniform motion in a straight line. But he was on the right track.
I don't know why Aristotle gets so much stick these days - still. There are surely bigger problems in the world than Aristotelianism.
Sabine writes: "There are only two rules for thought experiments: (A) relevant is only what is measureable and (B) do not fool yourself. This is not as easy as it sounds."
As emphasized by Cleon Teunissen, this rule (A) is false for thought experiments called "paradoxes", which reveal a logical contradiction, as long as it is an actual paradox, i.e. if rule (B) is respected.
@ Sabine,
Einstein's "equivalence principle" is a successful heuristic which led Einstein to GR. But it is not true in GR, since the curvature can be measured locally!
Depends on what you mean by "local". I get back to this in next week's video.
It's more than a successful heuristic. It meant he identified gravitational and inertial mass, which previously had just been thought coincidental. That's already remarkable.
And there's Galileo, with his light and heavy mass connected by a string...
Wasn't Galileo's dropping balls of different masses a thought experiment?
@ Sabine,
You write: "General relativity and quantum field theory are both extremely well established theories...".
GR is both empirically well established and mathematically well defined. On the other hand, QFT is only defined by a non-convergent perturbation expansion. Physicists add up a few of its terms and get a miracle of agreement with experiment.
QFT is not defined by this expansion; it's just that we use this expansion in most cases to calculate observables.
The Lagrangian used to define a QFT is perfectly well-defined, and via the Euler–Lagrange equations it defines a well-defined classical field theory. But the Feynman integral used to define the QFT isn't mathematically well-defined.
@ Sabine,
How would you define QED?
No one debates that, but you seem to be missing the point. I said it's not defined by the expansion. Pick any of those.
@ Sabine,
All "axiomatic quantum field theories" are only known to have the free field as a model in 4D. It's possible that if we ever do construct a rigorous QED it might satisfy very different axioms. For example, the observables might not be Hermitian operators on a Hilbert space.
Yes, quite possibly. I still think you're missing my point though.
@ Prof. David Edwards
The point you are missing is that both QFT and GR are well established "effective" field theories within their respective range of validity. There are many non-perturbative extensions of QFT and its methods (more or less successfully proven) that target condensed matter applications, statistical physics of systems outside equilibrium, as well as fluid and plasma physics.
@ Ervin,
To my knowledge there are no mathematically well-defined, nontrivial 4D QFTs. If you know of one, I'd appreciate a link to its description.
@ Sabine,
Yes, theories are defined by their Lagrangians; this works fine for classical theories but not so well for QFTs. For QFTs like QED and the SM it only leads, so far, to a mess of heuristics that work miraculously well in practice!
@ Prof. David Edwards
It's quite obvious we are not on the same page.
By construction, effective field theory (EFT) is an operational approximation that evades the need to be mathematically well-defined in 4D. Effective Lagrangians are specifically built to approximate the "low"-energy phenomena from the "heavy" sector of the theory. In this sense, the Standard Model (SM) is a closed and self-consistent EFT up to (at least) the low TeV scale. It is described by the most general renormalizable Lagrangian consistent with Lorentz and gauge symmetries.
As a general rule, effective Lagrangians are built from local operators organized by dimension, with operators of high dimension being suppressed by powers of the heavy scale. EFT is considered a helpful tool for probing physics beyond the SM.
I'm not a scientist. Nonetheless, what if entangled particles are not entangled with each other but are mutually entangled with spacetime?
Prof Edwards,
Theories are not defined by their Lagrangians either. You first need to define what manifold you're working on, if any, what functions you have on it, what operators, what symmetries, what convergence properties, and so on and so forth. What I am saying is that it takes a lot of assumptions to define a theory and in practice physicists do not actually proceed by writing down definitions. Which is why, as Ervin says correctly, we have non-perturbative methods despite the fact that according to you QFT is either merely defined perturbatively or ill-defined (your comments are somewhat ambiguous in that regard).
@ Sabine,
I totally agree that physicists have a working informal definition of QFT that is reasonably satisfactory to theoretical physicists. What we have here is a two-cultures problem: mathematical physics vs. theoretical physics! From the point of view of mathematical physicists like Glimm and Jaffe, QED and the SM have not yet received an adequate "definition" or "construction".
@ Ervin,
Besides perturbation expansions, one also has finite lattice 'approximations', Euclidean techniques, and the super-classical limit 'approximations'. Physicists seem reasonably satisfied with this situation. They sometimes express the belief that eventually we'll have a fully mathematically well-defined SM, or at least some deeper theory, such as a non-perturbative string theory. I'm not holding my breath! In fact, I don't expect to live to see such a culmination. It's interesting to contemplate the opposite assumption; namely, that either because we're not clever enough or because it simply doesn't exist, we'll never have such mathematically well-defined theories. One would then have to settle for a variety of 'approximations' having some vague, informal relationships to one another, but without any central, rigorous theory! Most physicists don't seem to be overly concerned by this issue and have retreated to being satisfied with effective field theories all the way down. Furthermore, effective-field-theory philosophy is merely a cover for having in practice abandoned the ideal of unity in exchange for the practice of applied mathematics.
The black hole entanglement problem is a case where Hawking radiation is entangled with the black hole. In that sense we can say a quantum particle in this case is entangled with spacetime, as a black hole is a vacuum solution to GR. Where things get strange is that Hawking radiation emitted later is entangled with the black hole and with previously emitted Hawking radiation. This means the older Hawking radiation entangled previously with the BH is now entangled with Hawking radiation emitted later. A bipartite entanglement has been transformed into a tripartite entanglement. This is not something which happens by unitary evolution of quantum mechanics.
But just as a Hawking radiation particle is entangled with a black hole, the BH also adjusts its mass. In the semi-classical approach this is done with backreaction of the metric, which is a "by hand" adjustment. However, physically it is reasonable to think a quantum of gravitational wave is emitted. If so, then there is an additional quantum state involved.
This does not end the problem though, for at the end of the BH's evaporation the Hawking radiation is emitted at very high energy and these gravitons will become UV, or high energy. At this point further difficulties enter the picture, because the graviton becomes a nonlinear state that is not defined by the linear operator/state theory of QM.
@ Prof. Edwards
"Physicists seem reasonably satisfied with this situation."
I disagree. Few theorists are truly satisfied with the status quo, which relies exclusively on perturbative expansions and effective field approximations.
There are many ongoing efforts targeting beyond the Standard Model physics in both mainstream and unconventional research. It is unclear which path will ultimately be successful, but I remind you that plunging into the unknown is the trademark of any scientific endeavor.
@Prof Edwards:
3d Chern-Simons field theory has a quantisation that is rigorously defined through state-sum models. It matches what we find via path integrals in the perturbative sector, but more importantly it's non-perturbative.
This might sound like a bit of esoterica. What has Chern-Simons got to do with the real world? Well, it turns out that GR in 3d is basically Chern-Simons. It's a topological theory, so the metric is irrelevant. That's also why we can get a full quantisation so 'simply': the phase space is finite-dimensional, as there are no local degrees of freedom, only global ones. It does mean in some sense we have a successful quantisation of GR in 3d!
Since QFT basically depends on infinite dimensional calculus I don't really see how we can get mathematically rigorous QFT until mathematicians settle on just how the calculus in infinite dimensions should be done. This is going to take some time as the calculus ramifies there with many different options.
"But the Feynman Integral used to define QFT isn't mathematically well-defined."
It becomes well defined after regularization. And while that's done in an ad-hoc manner, the regularization is not simply a mathematical trick; it has a real physical origin.
There exists an unknown theory at a fundamental level, and your Lagrangian should be considered a low-energy effective field theory that comes with a regularization that would be unambiguously well defined if you knew the fundamental theory and could integrate out the short-distance degrees of freedom from that theory. The regularization is then defined by what you have integrated out.
That we don't know what the correct fundamental theory is and how you would then go about integrating out degrees of freedom to get to the field theory Lagrangian we're using doesn't mean that we can just pretend that this doesn't exist. It's reasonable to assume that it does exist, and therefore you have a Lagrangian that comes with an unknown regularization procedure. And because we usually deal with renormalizable theories (which is also very plausible because integrating out plus rescaling would naturally lead to non-renormalizable terms vanishing), the details of the regularization can be eliminated.
@ Saibal Mitra,
Send me links to your claims:
gauss346@gmail.com
This works in much the same way that 1 + 2 + 3 + 4 + ... = -1/12. The answer is strange, but the smoothed partial sums of this series define a quadratic function whose y-intercept is -1/12. This Ramanujan sum is related to zeta functions and their application in zeta-function regularization.
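One can check this numerically. The sketch below regulates the divergent sum with an exponential damping factor; the damping scale and cutoff are arbitrary choices made for illustration. The regulated sum behaves like 1/eps² - 1/12 + O(eps²), so subtracting the divergent piece exposes the famous -1/12:

```python
import math

def smoothed_sum(eps, n_max=10_000):
    """Exponentially smoothed version of 1 + 2 + 3 + ...:
    S(eps) = sum_n n * exp(-n * eps) = 1/eps**2 - 1/12 + O(eps**2)."""
    return sum(n * math.exp(-n * eps) for n in range(1, n_max + 1))

eps = 0.01
# Subtracting the divergent 1/eps**2 piece leaves the finite part,
# which converges to -1/12 as eps -> 0.
finite_part = smoothed_sum(eps) - 1 / eps**2
assert abs(finite_part + 1 / 12) < 1e-5
```

The divergent term depends on the regulator, but the -1/12 does not, which is the sense in which zeta-function regularization assigns a universal finite value to the sum.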
Many thanks to all contributors to this thread. It opened applications I never conceived of, and pointed me to references and POVs which are challenging everything I thought I knew. It is a poster child for why I enjoy this blog so much.
As an engineer, I've never understood the reverence for "thought experiments," which seem a perfectly natural part of thinking.
Engineers attempt to extend the range of the possible by techniques we call "What if ...," or hypothesizing boundary conditions, or assuming designs without initial regard to physical realizability.
When the impossible simply has to be overcome, better engineers typically try to turn "impossible" into "irrelevant," whatever they call the technique used.
I think mankind always has.
Best regards to all.
I think the reverence is for Einstein and it then spills over elsewhere on words like gedanken experiment.
If the engineering way of thinking excludes reverence for logical paradoxes, it is just not sufficiently equipped for theoretical research.
Jean Paul,
You're probably right. For sure, reverence for anything is seldom our strong point.
The following straightforward thought experiment is so simple and obvious that it escaped the attention of physics.
Thought Experiment:
1.Isolated system A (in absence of external forces e.g. in outer space) is powered internally (power and motor on board)
2.The internal parts of system A are linearly entangled (through a straight metallic thick thread) with each other
3.An internal action force applied to a part of system A leads to the development of a collinear reaction force (opposite to the action) in the rest of the system
4.Due to (3) the system cannot acquire a momentum (Newton's 3rd law of motion holds)
5.Isolated system B (in absence of external forces e.g. in outer space) is powered internally (power and motor on board)
6.Isolated system B is a real linear actuator (nothing to do with collinear forces)
7.The internal parts of system B are helically entangled (through a straight metallic leadscrew) with each other
8.A clockwise rotation of the screw, creates a counterclockwise (conservation of angular momentum) induced internal action force
9.Because of (8), the internal part starts to accelerate to the right
10.Because of (8), only unidirectional induced forces are allowed to develop
11.Because of (10) the induced internal reaction force will be also counterclockwise
12.Because of (11), Newton's 3rd law of motion breaks (doesn't hold) and the system starts to accelerate in the same direction as the internal part (9)
For more details on the above, please check my profile.
Could someone help debunk (if possible) the above thought experiment?
Hi, John,
The problems are with: conditions 7 and 8; the physical mounting of the power source for rotating the screw (e.g., a stepper motor); and the physical connection of the internal parts, or not, to the outer structure of B. In every configuration I can visualize, there will be no net angular acceleration.
In my sophomore year at MIT, I designed a perpetual motion machine. My instructor (Dudley Buck, a cryogenics innovator) was delighted. He encouraged me to block out the system with all force and loss equations, and patiently let me explain them to him. When I finally found the hidden (to me) friction loss (obvious from the start, to him), he gave me a pat on the back, a great big smile, and said
"nice try."
Kindest regards, Bert
Sorry, but this is far too confusing to comment on. What is meant by an entanglement here is unclear, in particular "linearly entangled (through a straight metallic thick thread) with each other." It is not possible to really comment.
@Korean... and @Crowell,
Please visit my Profile and then you will see.
@Crowell
Linearly entangled means: You have a system that consists of two parts that are connected with a metallic tube. A third part is enclosed in the above system and may slide over that tube. Essentially all these parts are linearly entangled by following the notion of collinear forces (see Newton's 3rd law).
Helically entangled, means: You have a system that consists of two parts that are connected with a metallic leadscrew (see screw helical threads). A third part is enclosed in the above system and cannot slide but evolve over the screw helical threads. Essentially all these parts are helically entangled (over the leadscrew) by following the notion of induced forces (nothing to do with Newton's 3rd law).
Induced force: The input force is perpendicular to the resulting force, where the latter evolves over the leadscrew. It means the input force and the resulting force are not collinear.
Additional Info:
Google the "screw (simple machine)" and select the link for wikipedia.
John,
I explained that force reaction diagrams, with particular attention to those involving the drive screw and its power source, rebut your verbal thought experiment conclusion.
As you asked for our help, I suggest you do as I suggest before asking me to do further reading. If you then put your flow chart online, I will critique it.
Good luck.
@Korean..
Unfortunately, I cannot expand on the technicalities because it will appear as promoting my own things (see Sabine's policy on this blog). So please check my Profile, then go and check the second image, which is all about the thought experiment.
John, I've read your vixra article. Your problem is that you use hypothetical explanations and experiments as proof.
I would extend Lawrence's "confusing" assessment to several of your definitions, but it may come across as offensive and doesn't seem necessary.
A "potentially missing" part of Newton's equations is not the same as a "missing part". The lack of reaction force to the source of your induced internal force really is the flaw in your thought experiment.
My best assessment. Sorry.
I am not completely sure whether Galileo actually dropped balls off the Tower of Pisa, though I am not sure what else one would do with that tower other than drop things off it. I have read, though I don't think this is established, that Galileo regularly dropped things, balls of different masses sometimes tied together etc., off the leaning tower. I read he took delight in doing this as the students were escorted to lectures by their teachers.
Galileo was if anything the intellectual James Dean of his age, and he loved to pull on the beards of those in authority. He got himself into a bit of trouble, and came within a hair's breadth of the same fate that awaited Bruno. Cardinal Bellarmine covered Galileo's tail from the worst the Curia could have dished out. Galileo seemed to be almost playing the martyr card.
Did Galileo actually "drop" things vertically?
I thought the preferred experimental method was rolling spherical objects down inclined chutes.
The vertical component was therefore slow enough to measure with timekeeping devices of that period, enabling measurement of acceleration due to gravity.
The inclined plane experiments were what Galileo did after he was put under house arrest. This has the effect of "reducing gravity" to g*sin(θ) for θ the angle the plane subtends with the floor. This is the basis for the air-track demo-experiments we do for freshmen students.
The role Galileo had with the Tower of Pisa is not really clear. He was a bit of a smart-aleck who enjoyed pulling on the beards of his contemporaries. I tend to suspect he saw the entire world around him as built on superstitious rot. Hey, it sounds a bit like the contemporary US of America! He only went so far in saying so. I think he realized the worldview of his time was BS when he pointed his little telescope at the Milky Way and saw it was made of millions of stars. A. C. Clarke used this mental gestalt switch in his 2001: A Space Odyssey, when Bowman enters the monolith and proclaims it is full of stars. I could well imagine that he had his fun dropping things off the tower.
Don't you get qualitatively different results from rolling various objects down a ramp than from dropping them freely? The split between angular momentum and linear momentum messes up the comparison of different size objects.
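The point about the angular/linear split can be made concrete. For a body rolling without slipping, a standard mechanics result gives the acceleration down the ramp as a = g sin(θ)/(1 + c), where c = I/(m r²) is a shape factor. The sketch below uses arbitrary values for g and the angle:

```python
import math

g = 9.81  # m/s^2, illustrative value

def ramp_acceleration(theta, c):
    """Acceleration down a ramp of angle theta for a body with moment
    of inertia I = c * m * r**2, rolling without slipping.
    c = 0 recovers frictionless sliding, i.e. the free-fall component."""
    return g * math.sin(theta) / (1 + c)

theta = math.radians(30)
sliding = ramp_acceleration(theta, 0)        # g*sin(theta)
sphere = ramp_acceleration(theta, 2 / 5)     # solid sphere
cylinder = ramp_acceleration(theta, 1 / 2)   # solid cylinder

# Rolling is always slower than sliding, and the factor differs by shape.
assert sliding > sphere > cylinder
```

So yes, rolling differs from free fall, but the shape factor c depends only on the mass distribution, not on mass or radius, so two spheres of different size and weight still roll down together, which is what Galileo's comparison needed.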
Pretty sure Galileo figured out "all bodies fall at the same rate" with a thought experiment where you consider a large body as made up of two smaller bodies glued together.
That's just off the top of my head... It's just so implausible that something as basic as a thought experiment was invented in the West in the last 400 years. Isn't Archimedes's method of exhaustion a type of thought experiment? Isn't Plato full of thought experiments?
Sabine, do you know anything about the supposed 'solution' to the black hole information paradox that was put out recently?
So it is thought that black holes thoughtfully exist, because thought experiments tell us so. I made a thoughtfully thought experiment about what theoretical physics assertions are thoughtfully being given to us in the 21st-century reference frame. Here we go:
1. Gravitational waves exist because they have been "seen" by means of biased templates in the LIGO "Observatory" and VIRGO (abducted) collaboration.
2. Dark matter exists, because we altogether believe in general relativity and don't want it to be debunked; so, since we altogether don't want that anomaly to exist forever and ever, we invented dark matter as the savior of our beloved GR theory. We solved the conundrum by definition.
3. Dark holes exist (sorry, I meant black holes), as expected, after a series of thoughtful experiments were performed and suitably arranged.
4. The speed of light in a vacuum is a universal constant, and its immutable value is by definition c = 299 792 458 m / s. Amen. (wait, are you guys convinced that c is even a real speed?). We altogether believe in GR forever and ever.
In a thought experiment, Einstein was trying to chase a light beam, and he found a solution; he solved the conundrum by definition, claiming the speed of light must be a constant in every reference frame. The problem is that in his thought experiment the light beam was treated as stuff that can have a real speed, but it isn't. You cannot chase an emitted photon before it arrives at its target. Why not? Because a photon is not a free particle but a virtual particle (like a quark, it has never been seen outside of its hadronic framework). There is no classical speed, or velocity, for a photon. The propagation of a photon is a quantum phenomenon, and Einstein's relativity can't say anything about it. That's the conundrum of every thought experiment: if you start with wrong premises, you end up with wrong conclusions. And the worst part of it is you always get what you already put into the hat (there's no rabbit-out-of-the-hat trick for the magician); nothing new or unknown will be disclosed.
I don't understand what you're trying to say. Photons are real, not virtual.
Hi Sabine, serendipity made me find your videos 3 days ago, and I've already developed a deep Platonic love for your rationality, creativity, clarity, broad knowledge... You're a real polymath! I would be even more delighted if you could help me with the following question: Besides you, I also admire Spinoza and Einstein, and therefore I hate to hear that Einstein was wrong in his EPR thought experiment. But you gave me hope in one of your videos on the Bell inequalities: superdeterminism. If I understand it correctly, if superdeterminism is true (which was believed by Spinoza and, I think, also by Einstein), then the Bell experiments prove nothing and the EPR paradox can be explained by hidden variables, which I think was Einstein's alternative to quantum theory. Am I correct in this?
Mark,
I don't know what Spinoza may have thought about superdeterminism, but, yes, if you have a superdeterministic theory then hidden variables are a fine explanation for the seeming randomness of quantum mechanics. There are a few toy models for that which work for EPR-type experiments, but so far no generally applicable theory.
Thanks for your answer Sabine. And I will be even more thankful when you publish your video about superdeterminism, because your article ‘Rethinking superdeterminism’ made me very curious but I can understand only half of it (or maybe much less 😊). Curious because I would like to know whether Spinoza would fully answer ‘I told you so’ after superdeterminism being explained to him. He would certainly agree with determinism in the sense of ‘no other possible worlds’ without denying ‘free choice’, and with the impossibility of fully independent events (‘all things are determining each other’ - as ‘vacuum’ is impossible).
You probably know Einstein said: “I believe in Spinoza’s God, who reveals himself in the lawful harmony of all that exists, not in a God who concerns himself with the fates and actions of human beings.”
I’ll come back to you after your video which I’m eagerly awaiting.
I find Spinoza's cosmology very interesting. He resolves the Cartesian split by stating that mind and matter are two essential modes of the world that are part of a whole but not reducible to each other. Which puts paid to Descartes and the physicalists in one go. But he also says that these two modes are not the only ones. There are more – an infinite number of unobservable 'hidden dimensions', which rather out-dimensions the extra dimensions of string theory (they were aspects of God/nature, kind of like Newton's sensorium but much vaster).
What I first read about Spinoza was kind of confusing, as he's often read as being a sceptic or rationalist. Now that I know a bit more, it just comes across as reading our own secular world into his. To me, he comes across as a neo-Platonist. He makes room for human freedom by denying physicalism. He's deterministic in matter but not in mind, where human freedom resides.
That's my tuppence on Spinoza.
“Einstein argued that there is no measurement that you can do inside the elevator to find out whether the elevator is in rest in a gravitational field or is being pulled up with constant acceleration. This became Einstein’s “equivalence principle ...”
Einstein was not right in this argument. Because if an observer in the elevator has a charged object with him, for instance an electron, this object will radiate at acceleration, but not radiate if at rest in a gravitational field. Also, relativistic dilation is different in a field and at acceleration.
antooneo,
" if an observer in the elevator has a charged object with him, for instance an electron, this object will radiate at acceleration"
The charge is stationary in the elevator's frame, so from the point of view of the observer inside the elevator, it does not radiate.
Andrei,
What do you mean by “elevator’s frame”? Normally “frame” does not mean an accelerated frame. So you mean that the observer is accelerated. And what would that cause?
If a charge radiates, it means that photons are emitted. Photons are particles, and those exist for any observer independent of his motion state. And as the assumed acceleration can start at speed = 0, the Doppler effect cannot obscure this photon either.
So the observer in the elevator should in any case be able to see the radiation, and so he can distinguish between acceleration and gravity. And this is – besides the other aspect mentioned – a violation of the strong equivalence principle.
antooneo,
Radiation is indeed observer-dependent, believe it or not. The question of "does a uniformly accelerating charge radiate?" has been debated for decades, just google it.
> If a charge radiates, it means that photons are emitted. Photons are particles, and those exist for any observer independent of his motion state.
That statement of yours is, incidentally, wrong in many situations. Nature is routinely weirder than we naively expect.
"Photons are particles, and those exist for any observer independent of his motion state."
This is wrong. The notion of a particle is observer-dependent. This is the reason for the Hawking effect. (And the Unruh effect likewise.)
@ Sabine,
Many applications and tests of quantum mechanics (QM) involve photons, and some require a basis of photon position eigenvectors. In spite of its potential for direct application to experiment, it has been concluded that there is no position observable or completeness relation for photons.
Waves, too, are dependent on the observer (de Broglie).
How is it argued that Hawking or Unruh radiation is something new arising from internal accelerations in the system, radiating outwards? If it is enough that the system consists of accelerations and the particle–wave balance is conserved...
With Hawking radiation, if the interacting system is on an astronomical scale, then a sort of evaporation radiation can be considered as real (but it can be quadrupolar/octupolar just via the gravitational change process). But with Unruh radiation, the emission–absorption back-reactions easily have lengths of only some Planckian scale – hence they are in balance and there is no radiation outside.
My suspicion is that the Hawking–Unruh radiation equations suffer from not considering spatial–temporal mass distribution changes. See emergent gravity.
The problem here is similar to a result I worked on with an old grad school chum last decade. We never published this. He pointed out that a charge in a freely falling frame has an electric field that, outside the small frame, is deformed. The question is whether this results in the emission of EM radiation. Clearly, for the local observer in a small enough frame there is no noticeable change in the electric field. This can in part be understood as due to the near-field description of the field. What about the field further out?
This can be thought of in the following way. The electric field does deform. However, the charge is accelerating not because of a force, but because it is being comoved by frame dragging radially into the central gravity field. A magnetic field arises in the Lorentz force description, and this involves a relative velocity between two points or test particles. Since the electric charge is being frame-dragged by the dynamics of space and not accelerated by some force, there is no changing magnetic field. This then would imply there is no emission of an EM field.
This does connect in ways with the Unruh-Hawking emission of bosons or photons. While there is no emission of radiation and the EP holds, things get a bit strange with event horizons. Just as an acceleration breaks Lorentz symmetry, a BH event horizon generates two topologically distinct regions with geodesics that cannot be extended into each other by diffeomorphisms. This is a bit of a game changer. This is a fascinating subject to think about.
I am quite confident that physicists are able to invent a theory which says that particles exist for one observer and do not for another one. But is there any need for this weird idea? In my feeling a physical world or theory is better which does not need this.
But to make a long story shorter: it is sufficient to have one case where an observer has a charged object with him and can see the radiation. I think we agree that one counterexample is sufficient to falsify a theory or a principle. In my PhD experiment I worked with bremsstrahlung, which is radiation from accelerated electrons, and I could use this radiation very reliably. Other particles, also those in accelerated motion, could react with these photons. And again: one case is sufficient to refute the strong equivalence principle.
And not to forget the other case for this principle: relativistic dilation is different in a gravitational field and at acceleration.
I realized I did not write the last paragraph about geodesics quite right. An exterior geodesic can of course enter the interior by infall. The event horizon is such that you cannot deform an exterior geodesic into the interior and then out. You cannot take a spatial bubble-type derivative of an exterior geodesic so that some small part of it extends into the interior. An exterior geodesic that extends into the interior is future-incomplete there and cannot be diffeomorphically mapped to the exterior. Similarly, an interior geodesic cannot be deformed so that a piece of it extends into the exterior.
I have only come across Unruh radiation by a very quick look in Wiki following a post or paper by Lawrence a year ago. I did not realise it was so paradoxical. If it really is paradoxical, or even only 'apparently' so, then that is fascinating.
Three thought experiments.
1. Let Alice accelerate in a small vacuum. She will contain charges which accelerate and emit photons. These photons will ping around the vacuum and raise the temperature. [I assume it is not so spooky that radiation will instantly be created at infinity in a sort of cosmic background radiation. Like a butterfly effect acting instantaneously.] That seems fine and not paradoxical.
2. Let Bob be in a small vacuum and no part of him accelerates. He will not emit photons due to accelerated charges. That again seems fine.
3. Put Alice and Bob in a small room and let Alice accelerate but not Bob.
Alice will heat the space and Bob will not heat the space via Unruh radiation. I suspect that the vacuum condition underpinning 1. and 2. will be violated but maybe only to a small extent by joining Alice with Bob in 3.
For some reason, apparently, Bob does not feel the Unruh heat generated by Alice??? If I put food in a microwave oven then switch it on I expect it to cook. It would be very odd if the food did not cook ...
Austin Fearnley
antooneo: What Sabine is talking about is the Unruh effect. An observer in an accelerated frame finds the vacuum is transformed from a pure vacuum in an inertial frame to a vacuum plus particles. These particles are in a thermal distribution at a temperature of about 1 K for every 2.5×10^{20} m/s². This is then not your standard acceleration, but quite extreme. There is also a particle horizon that defines the Rindler wedge and a causal restriction this observer has.
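[Editorial aside: the temperature scale quoted above follows from the standard Unruh formula T = ħa/(2πck_B). A minimal sketch, using CODATA constants; the function name is my own choice:]

```python
# Unruh temperature: T = hbar * a / (2 * pi * c * k_B)
import math

hbar = 1.054571817e-34  # reduced Planck constant, J s
c = 2.99792458e8        # speed of light, m/s
k_B = 1.380649e-23      # Boltzmann constant, J/K

def unruh_temperature(a):
    """Thermal temperature (K) seen by an observer with proper acceleration a (m/s^2)."""
    return hbar * a / (2 * math.pi * c * k_B)

# Acceleration needed to see a 1 K thermal bath:
a_for_1K = 2 * math.pi * c * k_B / hbar
print(f"a for T = 1 K: {a_for_1K:.2e} m/s^2")   # roughly 2.5e20 m/s^2
print(f"T at a = 1e20 m/s^2: {unruh_temperature(1e20):.2f} K")
```

Even at 10^20 m/s² the bath is well below 1 K, which is why the effect has not been directly observed for ordinary accelerations.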
Just curious: Wouldn't Unruh radiation simply be a very weak version of Hawking radiation, with a continuum relationship between the two concepts as acceleration increases?
Hawking radiation has spherical symmetry. Unruh radiation has a planar symmetry. Sounds innocent but makes a major difference for redshift and asymptotic limits.
Ah! So the Unruh radiation convergence would only be very narrowly defined, specifically towards the surface properties of an extremely large black hole, nominally infinite in size?
… and if Unruh radiation converges at the limit to the infinitely cold event horizon of an infinitely wide black hole, wouldn’t that suggest that it does not really exist?
You are conflating Unruh radiation with Hawking radiation. Both are due to event horizons, but with Unruh radiation the horizon occurs in an accelerated frame or Rindler wedge. Hawking radiation occurs with the event horizon of a black hole.
That it takes an infinitely large black hole to turn off Hawking radiation is related to the third law of thermodynamics. You can also turn off Hawking radiation and get a zero-temperature black hole at the extremal condition, where the angular momentum parameter is equal to the mass. This condition is likely blocked, though; physically there is probably some quantum condition, similar to a Bose-Einstein condensate, where the temperature is very small but not zero.
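[Editorial aside: the 1/M falloff behind "an infinitely large black hole is infinitely cold" can be checked with the standard Schwarzschild Hawking-temperature formula T = ħc³/(8πGMk_B). A quick sketch; the masses are illustrative:]

```python
# Hawking temperature of a Schwarzschild black hole: T = hbar c^3 / (8 pi G M k_B).
# T scales as 1/M, so "turning off" the radiation requires M -> infinity.
import math

hbar = 1.054571817e-34  # J s
c = 2.99792458e8        # m/s
G = 6.67430e-11         # m^3 kg^-1 s^-2
k_B = 1.380649e-23      # J/K
M_sun = 1.989e30        # kg, solar mass

def hawking_temperature(M):
    """Temperature (K) of a Schwarzschild black hole of mass M (kg)."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

for n_suns in (1, 1e6, 1e9):  # stellar to supermassive scales
    print(f"{n_suns:>9} solar masses: T = {hawking_temperature(n_suns * M_sun):.2e} K")
```

A solar-mass hole sits at tens of nanokelvin, far colder than the cosmic microwave background, and supermassive holes are colder still.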
Are there physicists who assume that it is QM that is wrong in its marriage with GR? What do those theories look like?
piotrw,
There are alternative authors; I think that all those who can be taken seriously are based in one way or another on the original approach of Lorentz. I do not know proposals in this direction which relate their arguments directly to QM. However, there are proposals which are, from their basic approach, not in conflict with QM.
Lorentz was friends with Einstein. He showed Einstein a logical conflict in his theory. Einstein conceded this conflict but never yielded a solution.
The original idea of Lorentz for relativity was not to assume a change of space and time, but to use the behavior of fields and the internal oscillation in matter instead of Einstein’s space-time. There is a lot of literature showing how this works for SR, not so much about GR. Alternative approaches to GR which I know use the variation of the speed of light c in a gravitational field together with this mentioned oscillation in matter. Einstein himself, in a paper of 1911, also started in this way, but he used an incorrect dependence of c on the gravitational field. This may have discouraged him, as he did not continue this way.
Hi piotrw,
Yes, there are systems that put GR as prior to QM.
One such is SRQM, which you can find by Googling "SRQM".
John
@ piotrw,
In GR there is no gravitational force, only curved spacetime. So it is probably enough to settle for a QFT in a curved spacetime. This is all one needs empirically for the Big Bang theory! (For an excellent survey of this theory see: "Quantum field theory on curved spacetimes: axiomatic framework and examples", Klaus Fredenhagen, Kasia Rejzner, on arXiv.)
"Einstein’s greatest legacy is not General Relativity, it’s not the photoelectric effect, and it’s not slices of his brain. It’s a word: Gedankenexperiment ... Einstein argued that there is no measurement that you can do inside the elevator to find out whether the elevator is in rest in a gravitational field or is being pulled up with constant acceleration."
ReplyDeleteThen Einstein hadn't created anything. Because he copied the equivalence principle from Newton.
"COROLLARY VI.
If bodies, any how moved among themselves, are urged in the direction of parallel lines by equal accelerative forces, they will all continue to move among themselves, after the same manner as if they had been urged by no such forces. For these forces acting equally (with respect to the quantities of the bodies to be moved), and in the direction of parallel lines, will (by Law II) move all the bodies equally (as to velocity), and therefore will never produce any change in the positions or motions of the bodies among themselves." The Mathematical Principles of Natural Philosophy (1846) by Isaac Newton
Sabine noted that “… thought experiments are a technique of investigation … But we should not forget that eventually we need real experiments to test our theories.”
Lately I’ve been busy tracking down papers on Stern-Gerlach (SG), spin-based qubits, and erasability. I was shocked to discover that one of my all-time favorite Feynman thought experiments, one deeply relevant to these topics, has never been tested!
Feynman’s idea was to build a variant SG that remerges the quantized spin beams into a single wave function that is both coherent and spatially localized. To see Feynman’s “… imagined modification of a Stern-Gerlach apparatus”, web-search for Feynman Lectures Vol. III Chapter 5 Section 5-1, then scroll down or search-in-page for Fig. 5–3.(a) .
But is this really doable? For example, the great Bohm’s view was that such a merger would result only in a mix of reduced atoms with classical spin states. In fact, he thought that the magnetic field of any SG would immediately reduce the wave function into atoms with well-defined transverse momenta. Feynman was aware of such views and explicitly disagreed with them. In Lectures Vol. III Chapter 5 Section 5-4, just after the fourth abstract Stern-Gerlach figure, Feynman notes that ‘Some people would say that … we have “lost the information” about the previous state … because we have “disturbed” the atoms when we separated them … But that is not true. The past information is not lost by the separation … but by the blocking masks that are put in …’.
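[Editorial aside: Feynman's claim that the split itself loses nothing, and only the masks destroy information, can be illustrated with a toy linear-algebra sketch of a spin-1/2 state. This is my own illustration, not from the Lectures:]

```python
# A spin-1/2 state as (amplitude_up, amplitude_down) in the S_z basis.
# Splitting into S_z branches and coherently recombining restores the state;
# blocking one branch with a "mask" destroys the transverse-spin information.
import math

psi = (1 / math.sqrt(2), 1 / math.sqrt(2))  # spin pointing along +x

# Split into the two S_z branches (what an idealized Stern-Gerlach does).
up_branch = (psi[0], 0.0)
down_branch = (0.0, psi[1])

# Remerge with no masks: the coherent sum is the original state again.
remerged = (up_branch[0] + down_branch[0], up_branch[1] + down_branch[1])
print(remerged == psi)  # True: the split alone loses nothing

# Mask the down branch and renormalize: overlap-squared with |+x> drops to ~0.5.
norm = math.hypot(*up_branch)
masked = (up_branch[0] / norm, up_branch[1] / norm)
overlap_sq = (masked[0] * psi[0] + masked[1] * psi[1]) ** 2
print(overlap_sq)  # ~0.5: the mask, not the split, erased the information
```

Of course this toy says nothing about whether a real apparatus can remerge the beams without decohering them, which is exactly the open experimental question.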
Feynman was likely extrapolating remerger coherence from the Rabi results he describes in the next section. But more importantly, Feynman’s variational QED model seems to have given him a good eye for discerning the difference between quantum states “interacting” with instruments versus “leaving information” with instruments. In his QED interpretation of SG he simply did not see anything irreversible going on until the wave function impacted the masks.
For whatever it’s worth, my PAVIS or “dark function” model of QM strongly agrees with the Feynman interpretation. That’s not really a surprise, since PAVIS was inspired in part by how Feynman handled information. The transverse momenta that bothered Bohm so much become a fully erasable virtual momentum pair that remains fully quantum unless and until it encounters a classical information vector, such as the masks or an energetic photon. PAVIS also requires that the entire history of particles be created upon this collapse. That’s easier than it sounds, since quantum functions cannot have classical details embedded in their wave dynamics without creating paradoxes. Or more succinctly, you cannot have your wave and eat it too.
The history creation aspect of Stern-Gerlach bothered Schwinger greatly:
‘It is as though the atoms emerging from the oven have already sensed the direction of the field of the magnet and have lined up accordingly. Of course, if you believe that, there’s nothing I can do for you. No, we must accept this outcome as an irreducible fact of life and learn to live with it!’ (Schwinger 2001, ‘Quantum mechanics: symbolism of atomic measurements’, p. 30)
I am genuinely surprised this bothered Schwinger. He had to know from his own quantum field theory that no such history could pre-exist in a quantum mechanical system, although it is more obscure in his approach than Feynman’s. It is only when you insist on sticking particles into quantum equilibrium regions for which no historical records exist that such seeming paradoxes even come up. His adamancy does point out how hard it can be not to assess quantum systems using classical thought patterns, even for folks as brilliant as Schwinger.
---
Bottom line: I believe it would be deeply insightful for someone to build an actual experimental test of Feynman’s remerging variant of Stern-Gerlach. There even exists a modern miniaturized design for it in the 2012 paper “Quantum Eraser Using A Modified Stern-Gerlach Setup” by Tabish Qureshi and Zini Rahman.
Terry, Amazing that our work has been running in parallel without any private communication!
I too have been considering Feynman figure 5.3c. Well I did not have that particular figure in mind but rather a box in a diagram (that I cannot find now) which he used to aggregate two opposite spin beams. Same effect as Fig 5.3c, I believe.
My approach to this was because of needing a distribution of hidden variables (vectors representing spin directions) in Bell experiment computer simulations, carried out particle-at-a-time. The original beam in Fig 5.3c would IMO have been random-on-a-sphere with vectors having all directions in 3D equally sampled. In my hidden variable model, discussed in the recent FQXi essay, the + beam would not have all directions equally covered. The Malus Law formula allows that distribution of directions to be known. Likewise the - beam would be asymmetric. Bringing them together in fig 5.3c would give an aggregate that was not spherically symmetric in that not all directions in 3D space would be equally sampled. But it would be symmetric in a lesser sense that for every vector arrow pointing in any direction (say) north there would be a counterbalancing vector pointing south. This lesser symmetry would still ensure that any subsequent S-G measurement would result in a 50-50 outcome.
So although I certainly agree that masks result in losing the history, I do not agree that a fig 5.3c arrangement has no effect on the history of a beam. The beam is changed in its distribution of hidden variables. But I do not see any way of detecting this change by a further S-G measurement of that beam.
I am unclear what Schwinger meant, but one way of getting the Bell result of -cos theta is to have every particle spin measured by Bob pre-set (on leaving the oven) to be in the direction of Alice's detector setting. And vice versa for Alice's measurements. That does seem odd unless one makes other assumptions.
Austin Fearnley
Austin,
The first figure in Feynman Lectures Vol III Ch 6 Sec 6-2 is a good candidate for the one you described.
I’ll look up your FQXi essay, thanks! I’ve been reluctant to go back to this year’s FQXi essays because I put mine in unedited at midnight, and I’ve been too embarrassed to look at it ever since. Because that is where I first clearly and publicly introduced the idea of the dark wave function, the negative-image of MWI, I should edit and post a cleaner copy someday.
I should note that on one issue we have very different approaches: PAVIS is as opposite to pilot wave and hidden variable views as is possible.
Instead of particles, I only allow quantum equilibria. These are wave packets that cannot get smaller due to a lack of sufficient spatial resolution capacity (lack of mass-energy), but also usually resist getting larger due to something I call adiabatic bandwidth. Energy carries information, and it is the bidirectional flow of information encoded into equilibrium adiabatic energy exchanges that keeps thermal matter simple, solid, and classical. Aharonov’s and Gao’s protected wave functions touch on this issue, but they focus on the energy aspect and do not directly address information flows. Alas, without the adiabatic bandwidth concept you cannot develop a complete theory of protected wave functions.
Entities above the various quantum equilibrium points reside in classical time and follow information-dominated causal physics. This is the domain of “world lines” in general relativity. However, in PAVIS world lines are finite in diameter, fuzzy, and only partially deterministic of the future. Thus in PAVIS general relativity is both approximate and emergent, not primal. Ahem. But PAVIS does provide a less infinity-plagued approach to resolving relationships between quantum theory and both relativities. SR becomes a deep set of PAVIS rules, and GR becomes the large-scale topology of the information fabric, which I like to call the Boltzmann fabric out of respect for that great and brilliant man.
Below the PAVIS quantum equilibrium points, which include atoms and nucleons, there are no particles and no histories. There is only “rule chemistry” in which everything is entangled and must eventually sum to null. That includes the universe as a whole, which in PAVIS has an antiverse partner that is hurtling the opposite direction in time. We remain fully entangled with it.
I am surprised how powerful PAVIS can be analytically.
For example, in PAVIS a Stern-Gerlach field adds a momentum-pair creation rule to the wave function of the atom. This does not instantiate the atom, but it does cause its wave function to expand transversely to encompass the new futures implied by the rule. For spin ½, impacting one lobe onto a mask absorbs, scrambles, and distributes its half-momentum into the fully classical instrument along some classically precise xyz angle. By conservation this forces the other entangled half of the pair to pick up all of the remaining rules and thus become a “real” wave packet, an observed atom. Delightfully, this event also creates a history for the particle, making it look as though the particle had been there all along, with exactly the spin orientation needed.
Schwinger was intensely annoyed by this decided-from-the-start history. Bohm in contrast used it as “proof” that there had been a particle there all along, and that the wave function had collapsed early. Only Feynman said “yes it’s weird, but if you just follow the rules it works”. Point to Feynman. Ironically, the very fact that such particles seem to have perfect and impossible trajectories from start to finish is proof that they stayed quantum, since only quantum rules can create entire histories — not just particles — when detection finally occurs.
Feynman was right: The wave function remains coherent, and remerger is possible.
Now someone out there just needs to test it.
Terry
The figure in Sec 6-2 of Feynman Vol. 3 is not identical to what I remember, but near enough. I calculate that the split-then-recombine beam apparatus does make the output beam different from the input beam. But how to test that when the output beam and input beams both lead (IMO) to 50-50 results on further S-G measurements? An update of my contest essay is online somewhere, so look for that one instead if you wish for more information on my calculations.
I will look into your interesting references to adiabatic bandwidth and protected wavefunctions. I sympathise with the ideas of bounded from below and above. You need a lot of energy to probe small - agreed. And for a large-scale limiting process, in my dark energy simulation (using negative mass) there is a dark matter process which keeps galaxies fairly stable in size as they accelerate via dark energy. Bidirectional flow seems interesting too as my model has for want of a better word 'backwards causation' in it. But only within antiparticles, and with no backward causation involved w.r.t. macro processes. In common sense, I am averse to re-writing history at the macro level, but if antiparticles have internal backwards causation then I should look further. Decided-from-the-start history is a different matter, and if it is decided from the start then why does it need re-writing?
I cannot think how to make an S-G test to detect the difference between an antiparticle "changing in spin from + to - in reverse time" and "changing from - to + in the universe's arrow of time". There would be a difference in the input and output beams but not testable as every test measurement would confound the results.
Austin Fearnley
Well then I think string theory is an 'oversubscribed' thought experiment :-)
ReplyDeleteZeno of Elea a Greek pre-Socratic philosopher may have been the first in recorded history to do thought experiments. His stuff anticipates calculus and some key physics principles by 2 thousand years.
@Hardin,
I couldn't resist when I saw the name Zeno on this blog, so I thought I'd drop a message. Here are some interesting thought experiments:
1.Achilles and the tortoise
2.Arrow paradox
(Wikipedia: https://en.wikipedia.org/wiki/Zeno%27s_paradoxes)
Could someone share with us what is the lesson we take from these paradoxes (thought experiments)? Hint: It is quite obvious.
It's not actually obvious how to solve the problem of motion posed by Zeno. The 'obvious' solution I'm guessing you are alluding to is summing an infinite series of increasingly smaller distances or times in such a way that the sum converges.
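[Editorial aside: the convergent-series resolution mentioned above can be sketched in a few lines; the partial sums of 1/2 + 1/4 + 1/8 + … approach 1, so Zeno's infinitely many steps cover a finite distance:]

```python
# Partial sums of Zeno's halves: 1/2 + 1/4 + ... + 1/2^N for N = 50.
partial = 0.0
for n in range(1, 51):
    partial += 0.5 ** n
print(partial)  # converges toward 1.0: a finite total distance
```

The analogous series for the times taken also converges, which is why the runner finishes in finite time despite the infinity of sub-steps.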
This solution was already referred to by Aristotle, who called it an 'adequate' solution, but he also remarked that it didn't get to the heart of the question which Zeno was posing. After dismissing Zeno's own position that the paradox illustrated that motion itself was illusory – the Parmenidean position – Aristotle himself took up a position halfway between them. His solution revolved around his notions of potential and actual motion and of potential and actual reality.
It's this notion of motion and change that Heisenberg referred to in his later philosophising on the question of motion raised by QM. He's also been referred to in the Belgian school of Quantum Logic, for example by Piron. As well as by da Costa and others who are looking to rebuild QM over paraconsistent logics - particularly in South America. It seems they do things differently there.
In Zeno's paradoxes, reality works up to a scale. When it comes to infinitely small space and time, motion is inhibited as a mathematical result.
Now, using the law of the excluded middle, we note the following: since motion is evident at all scales of the Universe, and the math works for non-infinitely-small space and time, spacetime should be fundamentally quantized, otherwise motion would never occur (at all levels/scales of the Universe, from the quantum to the macro-world).
This is what I have always maintained about today's modern physics having discarded logic in favor of abstract maths (nothing to do with reality/physics). Abstract maths that are not limited by physical laws are useless.
By using the above example, many unresolved problems in physics could find a solution (even quantum gravity, unified field theory, etc., without multidimensional useless assumptions).
@John
The 'multidimensional useless assumptions' are actually already useful in the standard model. They are modelled as principal bundles, and this means that they have fibres – this is where the 'extra' dimensions are. Though I tend to prefer the notion of internal dimensions.
This by the way, has nothing to do with the theory being quantum. It's already important in classical field theory.
Actually, the notion is even useful in continuum mechanics – look up the work of the Cosserat brothers: they added internal dimensions to elastic media to provide new internal degrees of freedom. This is what the technology of principal bundles does.
About a thought experiment:
There was once a thought experiment which was intended to refute Einstein’s denial of an ether. It caused some discussion at the time, but in the end this experiment was understood not to refute Einstein. I mean the Sagnac experiment.
Because if this experiment is viewed from a lab position, it is in full agreement with SR. And if viewed from a co-moving observer, the results are then different, but as it is a circular motion it is not covered by SR. This experiment was later realized, so it was no longer a mere thought experiment.
However, it has an aspect which was never taken into account and which is in fact a real thought experiment. This means that the diameter of the Sagnac circuit is extended to infinity. In this case the situation of a co-moving observer becomes identical with the situation of an observer moving straight, and in this case the conflict with Einstein’s denial of an absolute frame becomes obvious.
This is a real thought experiment in so far as an infinite diameter cannot be realized, but it can be imagined, and the result is a physical one.
antooneo wrote:
>This means that the diameter of the Sagnac circuit is extended to infinity. In this case the situation of a co-moving observer becomes identical with the situation of an observer moving straight, and in this case the conflict with Einstein’s denial of an absolute frame becomes obvious.
When the diameter is extended to infinity, the circumference becomes a straight line, the beams never meet again, and the experiment cannot be done. No paradox.
For any finite diameter, the answer is known and there is no paradox.
antooneo, in all kindness, don't you ever wonder why all of us physicists who know perfectly good and well how to do the calculation do not share your conclusion?
You are not presenting a thought experiment that, even in principle, as a matter of logic, could ever be carried out.
The idea of a thought experiment is that, at least in principle if not in practice, we could actually do it!
You are employing ideas of limit and logical argument that almost no physicists share.
I know we will never convince you to think like most physicists, but don't you think it might be worthwhile for you to try to understand how most physicists do think and why?
What makes you think that circular motion is not covered by SR? Einstein wanted to explain aberration (astronomy). In astronomy, distances are angles (to be multiplied by an unknown radius) based on circular motion (the Earth around its axis, the planets around the Sun, and so on). Constant circular motion is also described by linear equations, most simply by using Euler's complex e-function, e.g. e^(ivt) = cos(vt) + i sin(vt), which is a constant circular motion on the unit circle (with radius 1).
It is a common misunderstanding that special relativity is only about constant velocity. It is not. Special relativity is about motion in *flat* space-time. This motion can well be accelerated. In fact if you cannot use special relativity to describe accelerated motion in flat space, then the whole equivalence principle makes no sense.
DeleteDave,
I should perhaps explain better what a transition to, e.g., infinity means.
The Sagnac experiment is a circular motion. The speed of the light beam with respect to this circular motion is c+v or c-v, where v is the speed of the circumference / edge of the Sagnac circuit. This speed is independent of the diameter of the circuit. For a straight motion, on the other hand, the speed of the light beam is according to Einstein always c. If the diameter of the circuit is now extended by an increasing amount, this difference between Sagnac and Einstein remains invariantly +/-v. If we now imagine increasing the diameter of the device to the diameter of your galaxy, then the difference between the circular motion and a straight motion will no longer be detectable by physical means. But the speed difference between the beam in this device and in a straight motion will remain v. That is the problem for Einstein’s view.
And why is the straight motion according to Einstein always c? That is simply because in a one-way measurement one needs two clocks, one at each end. The clocks have to be synchronized, and Einstein has given a prescription for it. The prescription presumes that c is constant in any inertial frame. So this measurement following Einstein cannot have any other result than nominal c, as this is circular reasoning in a very basic way.
To the personal part: I regularly give talks about relativity at conferences, and mostly professors of relativity are participating. They normally do not have counterarguments, but a standard argument is “But Einstein has stated … “ Is that convincing? Physics should not be religion.
Gerd Termathe,
My argument was about the constancy of the speed of light. In SR Einstein did not treat circular motion. This was criticized by others, for instance by Lorentz. Einstein conceded that there is a weakness in his theory, but he did not draw the consequences.
Sabine Hossenfelder,
The origin of the different understanding of relativity between Einstein and Lorentz was Einstein’s assumption that the speed of light is ontologically the same in any inertial frame. If we follow Lorentz and assume that the constancy of c is only a measurement phenomenon, then there is no need for this change of space and time or even a curvature. We then stay in the “Copernican world” (Reichenbach).
Maybe we can describe accelerated motion in flat space (even though Ernst Mach doubted that). But if the acceleration is not along a straight motion, then it is not accessible to SR. Lorentz explained this to Einstein in 1916, and Einstein conceded it without drawing consequences.
And the (strong!) equivalence principle is anyway invalid as it has been discussed here before.
antooneo wrote:
>If we now imagine increasing the diameter of the device to the diameter of your galaxy, then the difference between the circular motion and a straight motion will no longer be detectable by physical means.
Okay, you have clarified what you mean.
But, in fact, in going around the circumference of the galaxy, the light beam is not in inertial motion. SR tells you how to do the calculation, and we all agree that it gives the right answer.
I realize that the curvature of the path is small and cannot be easily measured over, say, a year. But it will take hundreds of thousands of years for the light beam to circle the galaxy, and, during that long time, it will be pretty obvious that the light beam is not going in a straight line.
It is as if you steadily add one microgram of salt to a jar every day, day after day. Sure, in a week it will be hard to detect the change in mass. But, do this for more than a hundred thousand years, and it adds up, to more than an ounce in fact.
Small changes over a very long time can add up to significant amounts, whether it is salt or a light ray circling the galaxy.
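The salt arithmetic is easy to verify; a quick Python sketch (assuming daily additions and the roughly 28.35 g avoirdupois ounce):

```python
MICROGRAM = 1e-6            # grams
GRAMS_PER_OUNCE = 28.35     # avoirdupois ounce, approximately

# "more than a hundred thousand years" of adding one microgram per day
days = 365.25 * 100_000
total_grams = days * 1 * MICROGRAM
total_ounces = total_grams / GRAMS_PER_OUNCE

assert total_grams > 36     # about 36.5 g accumulated
assert total_ounces > 1     # indeed "more than an ounce"
```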
No paradox at all. Just common sense.
antooneo also wrote:
>And why is the straight motion according to Einstein always c? That is simply because in a one-way measurement one needs two clocks, one at each end. The clocks have to be synchronized, and Einstein has given a prescription for it. The prescription presumes that c is constant in any inertial frame. So this measurement following Einstein cannot have any other result than nominal c, as this is circular reasoning in a very basic way.
antooneo, we have explained to you that there are other ways to synchronize clocks, that have been used in practice: the light-ray synchronization is just the easiest method to explain to students.
antooneo also wrote:
>To the personal part: I regularly give talks about relativity at conferences, and mostly professors of relativity are participating. They normally do not have counterarguments, but a standard argument is “But Einstein has stated … “ Is that convincing? Physics should not be religion.
I assure you that they are simply being kind to you: they are too polite to say what they really think of you, just as I and others here have been.
Your arguments do not rise to the point that they deserve to be taken seriously by professionals, and so they don't. But they are too polite to say this really bluntly and to go on and on about it.
That's how most of us scientists treat fundamentalists most of the time, you know. Occasionally, we really get into it with them here on the Web (I have, on several occasions). But, in the real world, it seems better to just say, “Well, everyone is entitled to his opinion!” and not waste time in obviously pointless debates.
I have engaged you and other (I'll be polite) "eccentrics" in this forum more than most of the other scientists here have, partly because I am interested in how to educate people ignorant of science and partly because I am interested in psychopathology. But I too, in the real world, tend to avoid confrontation.
Again: you are showing the general behavior of eccentrics in that you are unwilling to assume that you are probably wrong and to try to understand what your mistake is. But even people at the level of Einstein, Feynman, and Steve Weinberg make serious mistakes (I know – I took two years of courses from Feynman and a year-long course from Weinberg).
Anyone – even Einstein, Feynman, or Weinberg – who thinks he has found a serious error in established math or physics should assume he is probably wrong. Because he probably is. Not certainly, but, statistically speaking, probably.
But, you and the other eccentrics who post here do not do that.
Even geniuses make mistakes more often than they discover serious errors in established math or physics.
But you will never take that fact seriously.
Antooneo: I am not sure why you say the strong equivalence principle is false. Several neutron star systems have been examined and the strong EP has been found to hold to 2.6×10^{−6}. These results come from the gravitational pull within the triple star system PSR J0337+1715, consisting of two white dwarfs and a neutron star [arXiv:2004.07187]. The neutron star, in a tight orbit with one of the white dwarfs, is perturbed by the second in a more distant orbit with lower frequency. The result is that the strong EP so far holds.
Lorentz introduced his contraction as a workaround for the null result of the Michelson-Morley experiment. Lorentz argued the aether in space induced this contraction. Einstein argued the speed of light was invariant and the coordinates of space and time transform so that there is this effect. Of course, in more generality there are Terrell rotations. The Einstein view is more reasonable, for it casts off the baggage of a preferred frame in the world.
A rotating frame differs from a linearly accelerated frame by having a conserved angular momentum, while the accelerated frame has increasing momentum. The metric for a rotating frame with z the axis of rotation is
ds^2 = -(1- ω^2r^2/c^2)[cdt - Ω(r)dφ]^2 + dr^2 + [r^2/(1 - ω^2r^2/c^2)]dφ^2 + dz^2
for Ω(r) = (r^2ω/c)/(1 - ω^2r^2/c^2). If you set ds = 0 you can find there is a split horizon similar to the Rindler horizon in the linearly accelerated case. Years ago I thought this had something to do with QCD and confinement. Usually when particles are confined in a small region their energy increases. However, I argued there would be this Unruh-like effect that would reduce the energy to near zero, so quarks are largely at rest. I thought this might have some impact on Regge trajectories. My effort though stalled and I dropped it.
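The horizon condition mentioned here can be illustrated numerically: for a co-rotating worldline the prefactor (1 - ω²r²/c²) of the metric above changes sign at r = c/ω, where the rim speed would reach c. A minimal Python sketch (the function name is my own):

```python
c = 299_792_458.0                       # speed of light, m/s

def lapse_squared(r, omega):
    """Coefficient (1 - omega^2 r^2 / c^2) from the rotating-frame metric.
    It vanishes where the co-rotating rim speed would reach c."""
    return 1.0 - (omega * r / c) ** 2

omega = 1.0                             # rad/s, illustrative
r_horizon = c / omega                   # horizon radius: rim speed equals c

assert abs(lapse_squared(r_horizon, omega)) < 1e-12
assert lapse_squared(0.5 * r_horizon, omega) > 0   # inside: co-rotation is timelike
assert lapse_squared(2.0 * r_horizon, omega) < 0   # outside: co-rotation impossible
```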
Lawrence Crowell:
You are surely correct that there are many physical processes where no conflict with the strong equivalence principle is visible. But in the cases which I have mentioned there is a conflict. Einstein made the very strong statement that acceleration and gravity are physically identical and that only our everyday understanding tells us that there is a difference. And this very strict understanding is Einstein’s basis for his general relativity. If it is now a fact that the strong equivalence principle is not generally valid, then Einstein’s GRT does not have a basis.
Regarding the Michelson-Morley experiment, Lorentz successfully explained the result by the contraction of the MM set-up. But in contrast to the opinion of Einstein, the consideration of Lorentz was neither an ad-hoc assumption (Einstein 1905) nor an assumption about the ether. Oliver Heaviside had found out that according to Maxwell’s theory fields contract when in motion. And if fields contract in motion, objects also have to contract in motion, so this assumption of Lorentz followed from existing physical knowledge. It seems that Einstein did not have this information.
Dave,
Sorry, but you have again not grasped the essential point of the Sagnac experiment. The essential point is that its deviation from a straight motion according to Einstein is a constant difference of “v”, irrespective of the diameter. And also irrespective of how many million pieces of the path are added on. And in a thought experiment it does not matter if the evaluation takes a million years. So difficult?
The speed of a light signal in a Sagnac circuit deviates from Einstein by a constant amount (in case of a constant peripheral rotation). And that is - by symmetry - valid for any small piece of this circuit. So if we take a little piece of this circuit, which is as straight as we want to have it, and we compare it to a straight piece according to Einstein, then there is always the difference.
I have discussed this case, and particularly the problem of Einstein’s synchronization rule, with one of our leading professors of relativity. He has conceded that Einstein’s clock synchronization leads to circular reasoning. So a one-way measurement of the speed of light does not yield any information about the constancy of the speed of light. At least this very simple piece of logic is also understandable to the mainstream. That’s a promising first step. But I doubt that you will get it, as your intention seems only to be to conserve the mainstream irrespective of logical problems.
You refer to known physicists who have made errors. I have to mention again that Einstein conceded to Lorentz that there is a logical conflict in his denial of an absolute frame. This is not only the problem that even a person like Einstein may be in error. It is the essential basic step of Einstein’s relativity, the start in a wrong direction. And to mention again the philosopher Hans Reichenbach, who cooperated with Einstein for a time: Einstein acts in the spirit of Ptolemy whereas Lorentz acts in the spirit of Copernicus. And I (“eccentric”) follow the latter.
@ antooneo,
The basis of GR is the assumption that SpaceTime has the structure of a 4D pseudo-Riemannian manifold.
Prof. David Edwards,
I think that it is just the other way around. Nobody would introduce a 4D Riemannian manifold as a replacement for Euclidean geometry without necessity. Einstein’s intention was to realize GR mathematically on the basis of his equivalence principle. And in compliance with this principle there was no other way than introducing Riemannian geometry.
It was very analogous for special relativity. Here Einstein wanted to realize his conviction of a constant speed of light in any inertial frame; and he had, for instance, to fulfill a summation of speeds fitting his requirement. This task could not be solved without introducing Minkowski geometry.
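The summation rule in question is the relativistic velocity-addition formula w = (u + v)/(1 + uv/c²); a quick check in Python (units with c = 1, illustrative values) that it really does transform c into c:

```python
def add_velocities(u, v, c=1.0):
    """Relativistic velocity addition: w = (u + v) / (1 + u*v/c^2)."""
    return (u + v) / (1.0 + u * v / c**2)

# Composing any subluminal speed with c returns exactly c:
for u in (0.0, 0.3, 0.9, 0.999):
    assert add_velocities(u, 1.0) == 1.0

# Two subluminal speeds never compose to more than c:
assert add_velocities(0.9, 0.9) < 1.0
```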
@ antooneo,
There's a difference between the original heuristics that led to a successful theory and how we describe it after we know that it's successful! The big move was actually Minkowski's:
"The views of space and time which I wish to lay before you have sprung from the soil of experimental physics, and therein lies their strength. They are radical. Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality."
He should have immediately suggested the generalization to 4D curved pseudo-Riemannian geometry!
antooneo wrote to me:
>I have discussed this case, and particularly the problem of Einstein’s synchronization rule, with one of our leading professors of relativity. He has conceded that Einstein’s clock synchronization leads to circular reasoning.
Who?
In truth, antooneo, none of us cares. Light-ray synchronization is just an easy way to explain it to students. In the real world, we have used other methods and found them all to be equivalent.
antooneo also wrote:
>I have to mention again that Einstein has conceded to Lorentz that there is a logical conflict in his denial of an absolute frame.
How on earth can we get through to you that no one cares what Einstein or Lorentz said or thought. We care about what current experimental results show.
And they do not support your position.
antooneo also wrote:
>The essential point is that its deviation from a straight motion according to Einstein is a constant difference of “v”, irrespective of the diameter. And also irrespective of how many million pieces of the path are added on. And in a thought experiment it does not matter if the evaluation takes a million years. So difficult?
Not difficult at all. You have not shown mathematically that there is any problem.
And, in fact, you know there is not any problem: you have emphasized again and again that your Lorentzian relativity gives the same observational results as special relativity.
QED.
You are making a fool of yourself. You keep citing unnamed physicists and supposed discussions between Einstein and Lorentz that you never bother to document.
But even if you did document them, no one cares.
You yourself have stressed again and again that any actual experiment agrees between special relativity and Lorentzian relativity.
So, you have conceded the only point that matters.
Why do you do this? Why do you keep arguing there is an observational problem when you yourself long ago conceded that there is not? What game are you playing? Is this all just some sort of very, very sick joke?
Frankly, I think it is.
And speaking of testing, here’s my first PAVIS experimental prediction:
If it is true that Stern-Gerlach (say for spin ½) works by adding one “momentum-pair excitation rule” to each atomic wave function that passes through it;
Then the excitation process will transfer twice as much energy to the wave function as will be found in the atom when it is finally detected.
The other half of the transferred energy will end up in the blocking mask… that’s right, in the blocking mask.
Both the initial double-size energy transfer to the atom and the single-unit transfer to the mask will be fully classical events, and thus detectable by using sufficiently sensitive experimental setups.
I am not aware of any name for this rather novel mode of real-energy transfer, so I’ll invent one: a vacuum phonon. That’s not entirely accurate of course, since the “vacuum” must initially be occupied by a quite real wave function. But once the momentum has been transferred into the blocking mask, all of the other properties (rules) of the atomic bundle will default to some other spatially distant lobe, where they will form a localized atom. Thus after the momentum has been transferred, there will be nothing left behind but empty space: a vacuum phonon.
The concept of a vacuum phonon is… well, a rather fascinating possibility, yes? Note that despite its similarity to phonons it is not a quasiparticle, since it can be conveyed from a single fundamental particle in the vacuum. Also, it has quantum numbers that do not correspond to any other fundamental particle. Put those two features together and, oddly enough, it classifies as an addition to the Standard Model zoo, one that is very low-energy and accessible to almost any laboratory, even if it is hard to detect.
Furthermore, vacuum phonons would forever give the lie to the idea that blocking masks “do nothing” when they create or destroy wave functions in quantum interference experiments. In the PAVIS interpretation all paths are used, just with severely asymmetric bundling rules that result in one path getting “almost nothing”, just a tiny blip of momentum, and some other path getting “almost everything”, such as the full atom minus that blip of energy. But both paths would get real-energy particles that are both detectable and manipulatable. Fascinating…
If either or both of vacuum phonons and double-momentum quantized energy excitations of atoms in Stern-Gerlach can be experimentally validated, it would provide a powerful incentive to reinterpret pretty much all of quantum mechanics in terms of rules, as opposed to particles. It would also show the relevance of at least some aspects of the PAVIS model. Finally, it would profoundly change the tenor of quantum interference experiments. Blocking masks would never again be viewed as “inert” components. Instead, they would become active, sensitive, and customizable components of the interference process. This would surely enable new directions for manipulation and testing in a wide range of quantum experiments.
And finally, as a minor side effect, it would add a new particle, the vacuum phonon, to the Standard Model… and do so at a really, really good price.
Since I put the vacuum phonon “wild hypothesis” out while thinking in real time, here is an important update with some interesting experimental implications:
At least for the case of a spin 1 atom, say a silver atom, with spin 0 along the z axis (which I’m pretty sure is identical to saying it has a specific, superclassical spin in the xy plane), the specific type of rule excitation that the SG adds to the silver atom wave function is a virtual photon pair rule. Each virtual photon is circularly polarized and oriented up or down along the z axis, which when mixed (rule chemistry) with the atomic bundle imparts both momentum and +1z or -1z spin to the two SG lobes. Since in rule chemistry the photon pair gets bundled (“absorbed”) by the equally quantum atomic rule bundle, the virtual photons only modify the evolution of the atomic bundle. Thus the virtual photons do not radiate at light speed, at least for as long as they remain in the bundle.
Here’s the critical predictive part: if the silver atom is detected at a particular location and time in the upper +1z lobe, then the lower z lobe should both dissipate and “emit from empty space” a -1 spin circularly polarized photon headed in the z-down direction. The energy and frequency of this photon will be defined by the transverse momentum of the atom in the +z direction.
Thus it will look exactly as if the atom had been excited sufficiently by the SG to emit a downward photon, which in turn gives it both upward momentum and z axis positive spin. The only difference, really, is that the photon will be emitted from a region of space macroscopically distant from where the atom was found.
So alas, after accounting for the spin pairing that is also part of the problem, it looks like the PAVIS rule excitation hypothesis does not for a silver atom produce a spin-0 pure-momentum particle that is not in the Standard Model. But it does postulate emission of a photon from a nominally empty region of space in which there is no atom to emit it. If there is a name for this prediction of photon emission from a region of space where the atom is not, I am unaware of it.
... one apt name might be emissive ventriloquism, or "photon casting": The emission of a photon from a lobe of an atomic wave function that is distant in space from the lobe in which the atom is eventually detected.
This interpretation sounds less radical, since it is at least akin to known peculiar wave function separations such as Sin-Itiro Tomonaga's theory of spin-charge separation in condensed matter.
However, please make no mistake: If you entertain as plausible my silly idea that atomic wave functions can be raised to excited energy states and remain coherent, via linear addition of "virtual pair rules" whose properties other than energy sum to zero; then you must also entertain as plausible a clear and experimentally disprovable violation of standard quantum interpretation.
The violation is this: If you block or mask one path of a coherent atomic wave function, then for every case in which that path does not detect an atom, a sufficiently sensitive set of detectors should instead observe a transverse photon whose momentum precisely cancels the transverse momentum needed to shift the atom to the path where it was found.
A real experiment, originating at a particular point at a particular moment, will unfold histories from that point and that moment on into each and every direction. I am afraid that thought experiments don't do that and therefore lead to the conclusion that time is an illusion.
My thought experiment is: does gravity become repulsive at a certain distance?
We know this to be the case. The vacuum has a zero point energy or density of energy that is everywhere. The result of this is that gravitation is repulsive. The universe is accelerating outwards by being in a sense repelled by gravitation.
@Sarnowski:
As Lawrence Crowell points out, the vacuum energy makes gravity repulsive. There's an interesting history behind this. It's due to the introduction of an additional term in Einstein's field equation for gravity which is generally referred to as the cosmological constant, Lambda. Really it ought to be called the cosmological force, as we don't refer to gravity by G, the gravitational constant!
It's this cosmological force that's responsible for the repulsion. Einstein introduced it to get a static universe, because he thought that's what the universe looked like. This meant Lambda had a small positive value, tuned so its repulsion exactly balanced the attraction of matter. But then Hubble showed that the universe was expanding. So he binned it. And most cosmologists since then have ignored it - well, until recently. Because it was shown about two decades ago that not only was the universe expanding, its expansion was accelerating. The simplest explanation then is to reintroduce the cosmological force, again with a small positive Lambda.
So not only did Einstein show that gravity was an effect of curvature, he also discovered a 'fifth' force to add alongside gravity, electromagnetism, and the weak and strong forces.
But there's more interesting history about repulsive gravity that I learnt about from one of Paul Davies' books.
Roger Boscovich, a Croatian physicist of the 18th century (and also a Jesuit priest and diplomat), suggested that gravity became repulsive at very small distances. This actually follows if one takes Newton's law of gravity seriously. This is because gravity is inversely proportional to the square of the distance separating two masses. This means that the strength of gravity increases without limit as the separation distance approaches zero, as it must if two masses are to touch each other.
The only way to avoid this gravitational singularity is if they never touch. But this means there needs to be a countering repulsive force at very small distances. Thus Boscovich suggested that gravity became repulsive at very small distances!
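The divergence Boscovich worried about is just the 1/r² in Newton's law; a tiny Python sketch with illustrative unit masses:

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def newton_force(m1, m2, r):
    """Newtonian attraction; grows without bound as the separation r -> 0."""
    return G * m1 * m2 / r**2

# Halving the separation quadruples the force, with no limit:
f1 = newton_force(1.0, 1.0, 1e-3)
f2 = newton_force(1.0, 1.0, 5e-4)
assert abs(f2 / f1 - 4.0) < 1e-12
```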
We now call them interatomic forces and we know them to be due to electromagnetism. But of course, if one thinks of a single unified force, and we call that gravity, then this 'unified' gravity, as Boscovich pointed out, does become repulsive at small distances. This might seem a little contrived, but I don't think Boscovich was wrong, and it was a remarkable argument, remarkable enough that his works were studied by the likes of Faraday and Kelvin. And he suggested these forces were there more or less by a thought experiment, well before they were actually established by experiment.
One final point. This argument by Boscovich isn't new. There is a very similar argument by Aristotle to show that two atoms, conceived as hard and impermeable substances, couldn't touch. He used the argument to critique the theory of Democritean atoms. Given that Boscovich was a Jesuit, and that he was interested in physics, I'd be curious whether he was influenced by this argument from Aristotle's Physics/Metaphysics.
My thought experiment is that the constant is not constant, but follows a curve that approaches a constant for a while, and there would also be other variables of smaller magnitudes.
@Lawrence Crowell said: "We know this to be the case. The vacuum has a zero point energy or density of energy that is everywhere. The result of this is that gravitation is repulsive. The universe is accelerating outwards by being in a sense repelled by gravitation."
I'm embarrassed to say that this is something that has befuddled me, like, forever. How is it that forms of positive energy density like baryonic matter and electromagnetic energy cause contraction of the metric, while Lambda, also possessing positive energy density (albeit minuscule on a per-cubic-centimeter scale), causes the metric to expand?
I'm thinking in comparison to a hypothetical Alcubierre warp, where the energy density is negative behind the ship causing metric expansion, while the energy density is positive in front of the ship causing metric contraction.
I'm clearly missing something in the chain of logic.
I worked out how it is that a positive vacuum energy can induce accelerated expansion using just Newtonian mechanics. I put a link below to Stack Exchange where I worked this out. It is a bit odd, but if there is a constant vacuum energy the result is this repulsive gravity. I leave that as the main answer, for I do not want to repeat typing the math here; I have to get on with the day!
One might ask how it is that just plain vanilla Newtonian physics can capture this. It is not the entirety of the physics, particularly if the spatial manifold is a closed sphere or a hyperboloid. It may also be whispering something to us about the observable universe being on a holographic screen.
Spacetimes with a negative vacuum energy, such as an anti-de Sitter spacetime with Λ < 0, admit closed timelike curves. The case above, with the FLRW or de Sitter-like spacetime with Λ > 0, does not do that. The main difference is that sort of topology.
https://physics.stackexchange.com/questions/257476/how-did-the-universe-shift-from-dark-matter-dominated-to-dark-energy-dominate/257542#257542
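One standard shortcut, not necessarily the derivation linked above, is the Newtonian limit in which the "active" gravitational density is ρ + 3p/c²; vacuum energy with equation of state p = -ρc² then sources outward acceleration. A minimal Python sketch with rough illustrative numbers:

```python
import math

G = 6.674e-11              # m^3 kg^-1 s^-2
c = 299_792_458.0          # m/s

def radial_acceleration(rho, p, r):
    """Acceleration of a test particle at radius r inside a uniform medium,
    using the active gravitational density rho + 3p/c^2 (Newtonian limit of GR).
    Negative return value = inward pull, positive = outward push."""
    rho_active = rho + 3.0 * p / c**2
    return -(4.0 * math.pi / 3.0) * G * rho_active * r

rho_vac = 6e-27            # roughly the observed dark-energy density, kg/m^3
r = 1e22                   # some large radius, m

# Ordinary pressureless matter of the same density pulls inward:
assert radial_acceleration(rho_vac, 0.0, r) < 0
# Vacuum energy with p = -rho*c^2 pushes outward (rho_active = -2*rho):
assert radial_acceleration(rho_vac, -rho_vac * c**2, r) > 0
```

This is why the same positive energy density can attract or repel: the pressure term enters the source with three times the weight of the density, and for vacuum energy it is large and negative.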
@Lawrence Crowell:
Thanks for referring me to your write-up on this issue over at Stack Exchange. I can see immediately that I have to put my thinking cap on, as it's far from trivial. But with hurricane Isaias behind us here in New England, it'll be easier to think without worrying about trees coming down on my house from the terrific winds.
@Lawrence Crowell:
Just read your explanation, and noticed directly below it another explanation by “Thriveth”, called “The short version”, just 6 sentences in length. Assuming his explanation captures the essence of what you wrote, it was far easier for me to understand, but it raised questions in my mind, which will likely be answered when I fully grasp the more technical explanation you provided. Also, I completely forgot that early last week I started watching a Youtube video on Dark Energy by DrPhysicsA. The video is just over an hour in length. DrPhysicsA’s wonderfully crisp English accent and perfect enunciation make it exceptionally easy to follow his presentation. Well, for sure, that’s on my bucket list for watching.
Also, returning home from a 4-day visit with relatives this past weekend, I came across another awesomely good Youtube physics author - Mark Newman, from Israel, with a 6-lecture course on the Fourier transform. I watched the entire course over several evenings. I was long aware of Fourier’s basic premise – that any waveform, like a sawtooth wave, square wave, etc., can be built from a series of fundamental sine waves of different frequency, amplitude, and phase. But Mark begins with the very basics, working the student through the origin of the base of the natural logarithm “e”, the need for the invention of imaginary numbers, Euler’s Identity, etc. His way of presenting is absolutely fantastic, with amazing animations.
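Fourier's premise mentioned here is easy to demonstrate; a short Python sketch (my own toy example) building a square wave out of odd sine harmonics:

```python
import math

def square_wave_partial(t, n_terms):
    """Fourier partial sum for a unit square wave:
    (4/pi) * sum over odd k of sin(k*t)/k."""
    return (4.0 / math.pi) * sum(
        math.sin(k * t) / k for k in range(1, 2 * n_terms, 2)
    )

# With enough harmonics the sum approaches +1 on (0, pi) and -1 on (-pi, 0):
assert abs(square_wave_partial(math.pi / 2, 500) - 1.0) < 1e-2
assert abs(square_wave_partial(-math.pi / 2, 500) + 1.0) < 1e-2
```

Each added harmonic sharpens the corners, which is exactly the "building waveforms from sines" idea the course covers.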
Without doubt, such online courses can provide an excellent foundation for today’s older students, who have forgotten much of what they learned in High School and College, to indulge in their own Gedankenexperiments, once they understand the present boundary of what is known of the Universe’s workings.
@Lawrence Crowell:
This is just a quick note, as I have to clean up the debris outside from last night’s tropical storm. The videos by Mark Newman really opened my eyes to the incredible advances in mathematics and physics achieved by the likes of Leonhard Euler, the Bernoulli brothers, Joseph Fourier, Isaac Newton, Carl Gauss, and a host of others too many to name. This incredible burst of pan-European creativity spanning centuries is something I wish had been emphasized more when I attended college back in the late 60’s and early 70’s, as it really gives a human dimension to the evolution of science’s foundations. Reading dry mathematical/physics formulas minus the context within which they arose can lead to confusion in a young student’s mind, as it all just seems to have popped out of nowhere.
Anyway, it is just so wonderful to have online courses, and blogs like Sabine’s, that go into great depth on the history and development of science and mathematics and explain complex ideas to a wide audience. This dissemination of knowledge via the medium of the internet to the masses is bound to impact future advances in the understanding of nature.
The explanation you cite below mine on Stack Exchange just tells you why ordinary matter dilutes away and eventually vacuum energy or dark energy dominates. It does not tell you why this is repulsive.
If you are interested in lectures on first- and second-year graduate-school mathematics, there are Daniel Chan's Adventures in Advanced Mathematics. He gives real whiteboard lectures on these topics, and is very clear. I have found these to be really good reviews of this material, which in physics we do not breathe in as deeply and constantly as mathematicians do.
@Lawrence Crowell:
Thanks for the links. I don't have time to think about the expansion at the moment, as I'm just leaving for a 4-day visit and have to beat rush-hour traffic. So I won't be able to reply till probably Monday. But, if I'm not mistaken, this repulsive effect occurs even without Dark Energy. Gotta run.
Lawrence Crowell:
I’m back home again after a somewhat hectic 4 days. One of my brothers had a scheduled hernia operation, so my other brother and I took care of the long-distance driving of two vehicles and stayed at his house for 4 days to assist him during the recuperation period.
My intent, before the 4-day diversion, was to understand how “repulsive” gravity (Lambda) arises naturally in the mathematical treatment that you wrote. I still need to study that more. But a question I have is: is the FLRW metric a precondition for this to happen? The reason I ask is that this morning I came across a theory I had never heard of before, which dispenses altogether with the FLRW metric. It’s called “Timescape Cosmology”, originated by David Wiltshire in New Zealand. Assuming it’s OK, I’ve linked the article below from Physics World, titled “The Dark Energy Deniers”, dated 19 June 2018. The basic idea is that the rate of passage of time in the great voids would be about 35% faster than at the edges, where the galaxy filaments/walls are located. The greater time that has then passed in the voids produces the illusion of an accelerating Universe.
Wiltshire’s model, according to the article, has major trouble fitting the CMB spectrum peaks. But he then invokes a “backreaction” mechanism to, I assume, add enough oomph to make the model fit the CMB power spectrum. Funny thing: I had an idea a few years back with elements in common with his, minus the time factor. I suddenly realized this morning that the mechanism behind this (albeit amateur) idea could obviate the need for backreaction. But, as per the rules, I won’t go into it.
Meantime, as I have Peter Collier’s “A Most Incomprehensible Thing,” whose subtitle calls it “a very gentle introduction to the mathematics of relativity,” I can bone up on my math deficiencies to better comprehend your Stack Exchange explanation for the origin of Lambda. (I’m wondering if basic HTML code will work to display Greek letters, as I think I’ve seen those displayed in the comment section.)
https://physicsworld.com/a/the-dark-energy-deniers/
The FLRW metric is not a prerequisite, nor is the de Sitter metric. Both have this type of expansion. My derivation is purely Newtonian, in fact. It is not general relativity, and yet it captures the main features of this exponentially expanding cosmology.
The biggest departure is that general relativity predicts a term -k/a^2, which becomes negligible as the expansion grows and a → ∞. For k = 1 the spatial surface is a sphere that expands exponentially, for k = 0 it is flat R^3, and for k = -1 it is a hyperbolic saddle shape.
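For reference, the -k/a^2 term sits in the standard Friedmann equation of textbook GR cosmology (sketched here for orientation; this is not the Newtonian derivation mentioned above):

```latex
H^2 = \left(\frac{\dot a}{a}\right)^2
    = \frac{8\pi G}{3}\,\rho \;-\; \frac{k}{a^2} \;+\; \frac{\Lambda}{3}
```

As a → ∞, the matter density ρ ∝ a^{-3} and the curvature term k/a^2 both dilute away, leaving H^2 ≈ Λ/3 and hence exponential (de Sitter-like) expansion.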
I can't comment on Wiltshire's model. It strikes me as probably not right.
Depending on the circumstances, masses absorb and/or emit energy.
Prof. David Edwards
For the history, we should look at Einstein’s original paper of 1905. There he deduced the equations which yield the velocity-addition rule, so that any combination of speeds transforms c into c. Minkowski built an elegant mathematical formalism from it, and of course he was proud of it. Einstein, on the other hand, did not like it in the beginning, but later accepted it.
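That velocity-addition rule from the 1905 paper is w = (u + v)/(1 + uv/c²); a minimal Python sketch (function and variable names are illustrative) shows that combining c with any sub-light speed returns c:

```python
# Relativistic addition of collinear velocities, as in Einstein's 1905 paper:
# w = (u + v) / (1 + u*v/c^2). Combining c with anything yields c again.
C = 299_792_458.0  # speed of light in m/s

def add_velocities(u: float, v: float, c: float = C) -> float:
    """Combine two collinear velocities relativistically."""
    return (u + v) / (1.0 + u * v / (c * c))

w1 = add_velocities(C, 0.5 * C)        # c combined with c/2 gives c (up to float rounding)
w2 = add_velocities(0.5 * C, 0.5 * C)  # c/2 plus c/2 gives 0.8 c, not c
```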
For GR the history is also clear, as Einstein wrote a lot about the development of this theory. His starting point was the equivalence principle, which he judged to be a great insight. But then he had to find a mathematical formalism for it. That was not so easy for him, and it was done in cooperation with David Hilbert. After they came to a result (i.e., Riemannian geometry), there was a controversy between the two about who had made the essential contribution.
So, in both cases the mathematical system had to be adapted to comply with the theory.
Dave wrote:
>> How on earth can we get through to you that no one cares what Einstein or Lorentz said or thought. We care about what current experimental results show.
>> And they do not support your position.
Is it not essential that Einstein and Lorentz both conceded that Einstein’s relativity is in conflict with reality? Do you know anyone better qualified to judge this? – And this is exactly my position – in case you have understood it.
There are two means to measure the speed of light: the one-way measurement and the two-way measurement. The one-way measurement needs two synchronized clocks. Einstein presented his synchronization rule and was aware of its weakness. (“It is an assumption, but as there are no logical conflicts visible, it may be good.”) Yes, there are logical conflicts, for instance the Sagnac experiment.
And then there is the two-way experiment (like Michelson–Morley). That one I have already explained as uninformative by reference to Maxwell’s theory; I will not repeat it.
The math for the Sagnac experiment? There is no space here to present it, but it is very simple school math, so everyone can do it easily.
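For reference, the standard textbook Sagnac result (quoted here for orientation, not the commenter’s own derivation) is Δt = 4AΩ/c² for light counter-propagating around a rotating loop; a minimal Python sketch with illustrative names:

```python
# Standard Sagnac formula: two beams traverse a closed loop of enclosed
# area A (m^2) in opposite directions while the loop rotates at angular
# velocity omega (rad/s); their arrival times differ by dt = 4*A*omega/c^2.
C = 299_792_458.0  # speed of light in m/s

def sagnac_delay(area_m2: float, omega_rad_s: float, c: float = C) -> float:
    """Time difference between co- and counter-rotating beams."""
    return 4.0 * area_m2 * omega_rad_s / (c * c)

# A 1 m^2 ring turning at 1 rad/s: about 4.45e-17 s,
# tiny, but resolvable interferometrically.
dt = sagnac_delay(1.0, 1.0)
```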
>> And, in fact, you know there is not any problem: you have emphasized again and again that your Lorentzian relativity gives the same observational results as special relativity.
Very fine, so why do we not go this way? Lorentz works with Euclidean geometry, so Lorentzian relativity can be taught at school. – And this is, by the way, also the case for GR: in the version deduced from Lorentz, it too can be taught at school. Isn’t that great? It is as easy as the transition from Ptolemy to Copernicus, just as Hans Reichenbach said.
>> You are making a fool of yourself. You keep citing unnamed physicists and supposed discussions between Einstein and Lorentz that you never bother to document.
>> But even if you did document them, no one cares.
What are you telling me here? I have repeatedly offered you the source of Einstein’s letter etc., including a facsimile of his handwriting. You did not react. It seems that you do not really want information.
And no one cares? I refer to a website about an alternative theory of GR (gravitation) which was accessed about 80,000 times last year, and similarly in the many years before. My professor once said of my intentions: the people want understandable theories. Remember: Copernicus gave us an understandable theory. There is a way.