I learned yesterday from Markus Risse, for example, that the Auger Collaboration has a paper in the making that fits the penetration depth data which, it had earlier been claimed, could be explained neither with protons nor with heavier ions nor compositions thereof. It turns out the data can be fitted with a composition of protons and ions after all, though we'll have to wait for the paper to learn how well this works.
Today I just want to pick up an amusing remark by Holger Müller from Berkeley, who gave the first talk on Monday, about his experiments in atom interferometry. He jokingly introduced the "Craziness Factor" of a model, arguing that a preferred frame, and the violations of Lorentz invariance it induces, have a small craziness factor.
Naturally, this led me to wonder what terms contribute to the craziness factor. Here's what came to my mind:

+ additional assumptions not present in the Standard Model and General Relativity. Bonus: if these assumptions are unnecessary
+ principles and assumptions of the Standard Model and General Relativity dropped. Bonus: without noticing
+ problems ignored. Bonus: problems given a name
+ approach has previously been tried. Bonus: and abandoned, multiple times
+ additional parameters. Bonus: parameters with unnatural values, much larger or smaller than one, without any motivation
+ model does not describe the real world (Euclidean, 2 dimensions, without fermions, etc). Bonus: Failure to mention this.
+ each time the model is referred to as "speculative," "radical" or "provocative". Bonus: by the person who proposed it.
+ model has been amended to agree with new data. Bonus: multiple times.

- problems addressed. Bonus: Not only the author worries about these problems.
- relations learned, insights gained. Bonus: If these are new relations or insights, rather than reproductions of findings from other approaches.
- simplifications over the standard approach. Bonus: If it's an operational, not a formal simplification.
- data matched. Bonus: Additional predictions made.
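For fun, the tally above can be written as a toy scoring function. The unit weights and the doubling for a bonus are my own illustrative assumptions; the post itself assigns no numerical values:

```python
# Toy version of the craziness tally above. The weights are assumptions:
# each "+" item adds 1, each "-" item subtracts 1, and a fulfilled
# bonus condition doubles that item's contribution.

def craziness_score(items):
    """items: list of (sign, bonus) pairs, sign = +1 (crazy) or -1 (sane)."""
    return sum(sign * (2 if bonus else 1) for sign, bonus in items)

# Hypothetical model: unnecessary extra assumptions (+, with bonus),
# extra parameters (+), but it does match the data (-).
print(craziness_score([(+1, True), (+1, False), (-1, False)]))  # 2
```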
40 comments:
What is an "operational simplification"?
I meant something that typically combines two concepts into one. Say, dark matter and dark energy are actually the same thing, looked at differently, or something like that.
As always, your take on this kind of thing is more sensibly restrained than can be found in other places. I think your final paragraph gives only part of the reason why such lists live dangerously (although I agree that a theory has to come to seem not crazy for it to be useful for engineering). They also don't engage much with the details of what makes a theory "not crazy enough to be true".
As an aside, I'd prefer to distinguish between theories and models, insofar as Theories typically have relatively few parameters, but Models of a Theory generally have many parameters.
Bolyai dinged Euclid, Maxwell dinged Newton, Stern dinged Dirac, Yang and Lee dinged particle physics, Chandrasekhar dinged Eddington, and general relativity required the first GPS satellite. One must look. Photon vacuum symmetries' theory furiously fails toward fermionic matter. GET CRAZY! Directly test spacetime symmetries toward fermionic matter with enantiomorphic crystallographic space groups.
Theory demands all swans are white; black swans are symmetry breakings. Observation says default black swans; white swans are deletion mutations. Stop whining; sequence the genomes.
(Bee, do you have an app to render the security test readable?)
Hi Bee,
I'm wondering if I watch Perimeter's videos closely if I will be able to spot the exact moment when you reply on your laptop or iPad to a commenter here. That would be interesting. You should do it when the camera is looking at you and smile and look up at that time. Of course since we are all introverts who comment here you don't have to since we would probably all look away from our computer screens to avoid eye contact. Ha Ha!
very best,
Eric
Of course always interesting to see this type of QG conference coverage.
Is Tvo being employed in the development of the lectures for later archival material referencing?
Best,
Your tallying system seems awfully familiar. :)
Where's John Baez when you need him? :)
Hi Bee,
I think this craziness factor has greater implication for the world in general than it does for theoretical physics :)
Best,
Phil
Not to be critical at all, because it is important that these things are discussed, but I think it's wrong to define "crazy" as being "different to the standard model" as you did in your list, since this suppresses progress. The sole criterion for any model should be "does it efficiently predict nature". In my view any parochial notions of craziness should be left out of it. Probably the universe IS crazy by our standards: look at quantum mechanics :)
Mike: The point of the 'craziness factor' was this. A craziness of zero doesn't get us anywhere because we'll just repeat what we already know. But too much craziness isn't good either because then it becomes increasingly implausible we'll find something new. So there's some intermediate craziness level that is necessary for progress. Needless to say, people widely disagree on how much is too much ;o) Best,
B.
Point taken, thanks. My point was that models should be judged solely on the closeness to the data, not on the closeness to the expectations of the human mind. So I was just suggesting that craziness indices are looking in the wrong place. Thanks for clarifying. Interesting post..
Mike, Closeness to the data is nice, but that criterion alone does not rule out epicyclic models. Ptolemaic models would not be ruled out just by closeness to data. The move to Copernican models, however, and even more the subsequent move to Keplerian models, were perhaps necessary precursors to Newtonian models, which surely seem more explanatory, which is generally held to be a good. One also looks for other merits, particularly tractability, which is essential for engineering uses of a theory.
At this point, one could imagine and hope that we might go straight from Lagrangian QFT, which is looking increasingly epicyclic, to a more explanatory model; however, it is perhaps also worthwhile to look for a different form of epicyclic model that might act as a halfway house to a more explanatory model, or might illuminate Lagrangian QFT differently enough to allow a more explanatory structure to be found from the Lagrangian QFT starting point. I suggest that different researchers should follow their own lights on this particular question.
I take it that one key is to look for characterizations of how the modeling range of any new class of models differs from the modeling range of Lagrangian QFT that are as detailed as possible (not an easy task for any theory that is as capable as Lagrangian QFT, particularly since the latter is not mathematically well-defined), which points towards experimental tests without necessarily giving something as explicit as a prediction.
Peter, I agree: simplicity is good. I did say in my first comment "..does it efficiently predict nature".
I think another important quality for a theory is that it uses quantities that can be directly observed, an old example is: defining motion relative to other masses, rather than absolute spaces (Mach suggested this). Partly for this reason I am wary of extra dimensions, dark matter and suchlike.
Right, single words nicely chosen do cover a multitude of possibilities.
Your comment points to something that (perhaps over)much concerns me, that when we say that we "directly observe" a quantity, we always implement a measurement that is not in fact direct, but includes such corrections as we feel justifiable, as when we use a diffraction grating that is slightly imperfect, and certainly finite, to measure wavelength, say. We characterize such a grating by shining a calibrated source through it, say, that we have measured using other imperfect measurement devices; in a few decades, however, that characterization will be seen as imperfect.
This is fine, characterization of experimental apparatus is a back and forth of improvements of preparations and measurements, but I think it leaves open the possibility of qualitatively different characterizations both of preparations and of measurements (or, specifically, how we represent how a given preparation or measurement differs from a given ideal).
My take on the fundamentals of this is to observe that QM is bilinear, that measurement statistics have to determine both measurement operators and preparation operators, which makes the overall experimental procedure an iterative procedure of improved characterization of measurement devices, of preparation devices, ... . Well, this seems worthwhile to think about to me, but it's not clear whether it can be made into worthwhile mathematics.
@Peter What is wrong with Lagrangian QFT? The Standard Model is an example of a theory that can be obtained in this way and it works extremely well. Its case is particularly strengthened by the recent discovery of the Higgs. So why the hell are you complaining about Lagrangian QFT?
By the way, Mike is exactly right and this strange craziness factor looks in the wrong direction. If nature IS more complicated than some people (such as advocates of this craziness factor, for example) would like it to be, an honest scientist has to accept it without complaining. It is as it is, period!
Nemo, indeed, many physicists are happy with the mathematics of the renormalization group; others, including myself, are not happy enough. It's a common enough reservation amongst mathematical physicists that I'm uncertain why so much brimstone should attach to it, although you are welcome, as far as I'm concerned, to express your thought that renormalization is entirely acceptable as mathematics.
Indeed, nature can be as complicated as it likes, and we'd better like it, but our descriptions of nature have changed over the years and perhaps, or perhaps not, may change some more.
Bee, I apologize that I've drifted OT.
Nemo: The question is where to look for new effects. We can't do everything we would like to, so the procedure is that we look where it seems most promising. But what means most promising? It shouldn't be too conservative, then there's nothing to be found. And it shouldn't be too crazy either. Best,
B.
Peter, I'd better not reply in too much detail to your interesting point about direct measurement, I guess it is off topic here. Suffice it to say that I'm interested in whether things are measurable in principle and how that affects physics.
I'm certain that things would have been much closer to optimal if von Neumann had lived into the 1980s.
Nice conference. I am slowly going through the conference talks one by one. By the way, PIRSA is still not properly updated for talks from this conference.
Hopefully you or one of the LOC/SOC can get this fixed.
If you click on the conference/school tab for 2012
from http://pirsa.org/
this workshop doesn't show up.
You can only see the talks if you go to the weekly updates.
But it shows all talks as part of a collection from a 2007 conference
http://pirsa.org/C07024
Hopefully these things will be fixed.
The question is how long you should keep falling in the rabbit hole?
Is there a land of wonders out there?
What frustrates me is that we don't have a clue.
That bothers me...
Some answers to Giotis.
When you are tired of falling down the same old rabbit hole you can stop anytime you want.
There is an infinite cosmos of wonders out there.
We do have a clue. If you look at nature without untested assumptions obscuring your vision, then it is self-evident that nature is an infinite discrete self-similar fractal.
There is no other paradigm or theory that comes remotely close to this discrete conformally invariant cosmological paradigm in terms of exquisiteness (Sagan's choice of words) or potential for unifying ALL of physics.
But you must be willing to consider new ideas and new assumptions. And you must be ready and willing to be a student again.
Robert L. Oldershaw
Discrete Scale Relativity
Hi Robert,
I'm not so sure the fractals will progress infinitely. For example, if we take the current cosmological constant as the current vacuum energy density, it is a tiny, tiny figure, especially compared to the energy density at which the fundamental particles formed.
If one assumes each larger fractal particle consists of particles made previously, then at some point the acceleration of the universe will come to a stop. This can be inferred from the fact that the acceleration of the universe is getting steadily smaller and the particles are already up to galactic and supergalactic size. I think the main lesson to be learned from this is that the energy of the universe is finite and that dark matter and dark energy are inversely related.
If one postulates that at the start it was all energy and no matter AND the two are inversely related, the universe will end up as all matter at the point the acceleration ends. It is anyone's guess what will happen next. Once you start depending on infinite levels of fractals occurring, you are subtly getting sucked into the hyperbole that people always seem to get sucked into.
Hi Eric,
Briefly, Discrete Scale Relativity leads to revised values for the vacuum energy density (cosmological constant), Planck scale, etc.
There are many models for the observed acceleration, including that it is due to cosmological inhomogeneity, modified GR, that it might be only apparent, etc.
Moving quickly to the bottom line, if your reasoning is based on incompletely tested assumptions underlying the old paradigm, you may be led astray in your conclusions, such as a "finite" cosmos.
We should study the morphology, kinematics and dynamics of observable systems.
We should not be using vague and poorly-tested theoretical ideas as our main reasoning tool.
Study nature, not books.
RLO
Discrete Scale Relativity
@ Nemo:
What is wrong with Lagrangian QFT? The Standard Model is an example of a theory that can be obtained in this way and it works extremely well. Its case is particularly strengthened by the recent discovery of the Higgs. So why the hell are you complaining about Lagrangian QFT?
Just a short note: QFT and the Standard Model suffer from a problem with respect to obtaining a correct semiclassical limit. It is the situation where the number of inequivalent Feynman diagrams eventually outgrows the suppression by powers of the coupling constant (in QED this starts happening around the 137th order of perturbation theory). The result is that the very deep quantum effects contribute more to the resulting amplitude than the classical effect does. So there is a problem in recovering the correct classical limit of a given QFT (Standard Model included).
People generally assume that quantum gravity corrections will step in long before the precision of 137th order of perturbation theory, but this doesn't solve the problem, but just moves it over to the quantum gravity domain, where it still needs to be solved.
So a QFT is an effective description at best, and it cannot be considered fundamental. It is ill-defined if taken completely seriously.
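The growth Marko describes can be illustrated with a rough numeric sketch. Modeling the n-th order contribution as alpha^n times n! is a standard caricature of the factorial growth in the number of diagrams, not an actual QED computation:

```python
import math

# Caricature of a divergent asymptotic series: take the n-th term to be
# alpha**n * n!. The ratio of successive terms is alpha * n, so the terms
# shrink while n < 1/alpha and start growing again near n ~ 1/alpha,
# which is about 137 for QED's fine-structure constant.
alpha = 1 / 137.0

term = 1.0
n = 0
while True:
    n += 1
    nxt = term * alpha * n  # t_n = t_{n-1} * alpha * n
    if nxt > term:          # terms begin to grow: the series diverges
        break
    term = nxt

print(n)  # turnaround order, approximately 1/alpha
```

The smallest term, near the turnaround, is roughly exp(-1/alpha); truncating the series there is the best one can do with such an asymptotic expansion.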
HTH, :)
Marko
This is not craziness but a solely rational approach of the community of mainstream physicists, who are forced to generate jobs for themselves; actually solving the physical problems would terminate those jobs.
So the community generates substitute problems for itself and calls them "crazy" to pretend they're more original and out-of-the-box than they actually are, in an effort to attract the attention of other researchers (a tabloidism), while quietly ignoring all non-crazy solutions which could lead to a premature end of research. So various people travel from Germany over Sweden to Canada or Australia while pretending to do important work, and everyone's happy with his job as long as the money keeps flowing.
Hi Robert,
Usually major advances in physics, like Newtonian gravity to GR, involve more accurate predictions over a larger regime while minimizing the fundamental changes in physics principles that enable those advances. In GR it involved non-simultaneity of time (from SR) along with the unknowability of whether you are accelerating or in a gravitational field. But even with that unknowability you can still know if you were in one or the other. That is the paradox of the twins, in which the one that accelerated and then returns to his twin is much younger than his twin.
So even in GR there is a hidden principle of time dilation occurring in an accelerating universe, but it cannot be directly observed because everything is accelerating evenly. There is no stationary twin. If you think through this paradox it really does mean that the vacuum energy is finite. It is reduced in energy around the accelerated object because energy density and the flow of time are dual and inverse. It leads automatically to a finite universe.
I think of an infinite universe as not being much different from a multiverse. It seems to me like saying you can always have what you want because the energy is
infinite. Remember, if the energy was infinite time wouldn't dilate if you are accelerated. Energy is required to accelerate an object and if the energy was infinite the flow of time would not change in that process.
" It [energy] is reduced in energy around the accelerated object because energy density and the flow of time are dual and inverse."
Make it that they relate directly, not inversely. The higher the vacuum energy density, the faster the flow of time. I had been thinking about what I said before about dark energy and dark matter being inversely related.
An infinite fractal model cannot be properly evaluated on the basis of non-fractal 20th century kitchen physics.
Moreover, an infinite cosmos is certainly compatible with General Relativity.
But what is the point in discussing anything with a true-believer or a true-denier?
If your world is exceedingly finite, fine! But that may have nothing to do with the Universe.
Well, I'll always go with finiteness over the alternative. Self-delusion is an inherent human characteristic that the idea of an infinite universe plays into. It plays into all the foibles humans are heir to, because there can then be no ultimate constraint on any theory of physics. Finiteness keeps us all honest, because a change in one thing will always result in a change somewhere else. It allows for correlation between data sets which would otherwise be impossible. Without that you are not creating 21st-century physics, you are just making it up as you go along.
You are entitled to believe whatever you choose to believe in, but I do not think your arguments against an infinite Universe are scientifically valid. Those physicists who have spent a lifetime studying cosmology generally would not rule out infinite models based on current knowledge.
http://physicsdatabase.com/2012/06/24/hiddeninplainsightthesimplelinkbetweenrelativityandquantummechanics/
What are the odds of that coming out precisely in May 2012...
P.S. It's about the superposition of velocities...
Holy Shit Shawn (like the alliteration?), it's only 99 cents on my Kindle. I guess I'll buy it, as I feel I now can't afford not to.
"then it is selfevident that nature is an infinite discrete selfsimilar fractal."
Robert, Nature is a bitch and doesn't give a fuck, especially about fractals...
Hey Eric,
Say hi to Sarah Palin for me. Anyway, just save your cash: it's nothing new. I'm sure you knew that though.
 Shawn