Saturday, August 08, 2020

Really Big Experiments That Physicists Dream Of

This week, I have something for your intellectual entertainment; I want to tell you about some really big experiments that physicists dream of.

Before I get to the futuristic ideas that physicists have, let me for reference first tell you about the currently biggest experiment in operation, the Large Hadron Collider, or LHC for short. Well, actually the LHC is currently on pause for an upgrade, but it is scheduled to be running again in May 2021. The LHC accelerates protons in a circular tunnel that is 27 kilometers long. Accelerating the protons requires powerful magnets that, to function properly, have to be cooled to only a few degrees above absolute zero. With this, the LHC reaches collision energies of about 14 tera-electronvolts, or TeV.

Unless you are a particle physicist, this unit of energy probably does not tell you much. It helps to know that the collision energy is, roughly speaking, inversely proportional to the distances you can test. So, with higher collision energies, you can test smaller structures. That’s why particle physicists build bigger colliders. The fourteen TeV that the LHC produces correspond to about ten to the minus nineteen meters. For comparison, the typical size of an atom is ten to the minus ten meters, and a proton roughly has a size of ten to the minus fifteen meters. So, the LHC tests structures about ten thousand times smaller than the diameter of a proton.
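If you want to check this conversion yourself, here is a quick back-of-the-envelope sketch in Python. Note that the naive formula gives a few times ten to the minus twenty meters for the full 14 TeV; the ten to the minus nineteen meters quoted above reflects that each colliding proton constituent carries only a fraction of the beam energy, so treat all of this as order-of-magnitude only.

```python
# Back-of-the-envelope conversion: probed length scale ~ hbar*c / E.
HBAR_C_EV_M = 197.327e6 * 1e-15  # hbar*c = 197.327 MeV*fm, expressed in eV*m

def probed_length_m(energy_ev):
    """Smallest length scale (meters) roughly testable at a given collision energy (eV)."""
    return HBAR_C_EV_M / energy_ev

print(f"LHC, 14 TeV:  ~{probed_length_m(14e12):.1e} m")
print(f"FCC, 100 TeV: ~{probed_length_m(100e12):.1e} m")
```

Higher energy, shorter distance: this inverse relation is the entire rationale for building bigger machines.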

As you may have read in the news recently, CERN announced that particle physicists want a bigger collider. The new machine, called the “Future Circular Collider”, is supposed to have a tunnel that’s one-hundred kilometers long, and it should ultimately reach one-hundred TeV collision energy, so that’s about seven times as much as what the LHC can do. What do they want to do with the bigger collider? That’s a very good question, thanks for asking. They want to measure more precisely some properties of some particles. What is the use, given that these particles live some microseconds at the outside? Nothing, really, but it keeps particle physicists employed.

The former Chief Scientific Advisor of the British government, Prof Sir David King, commented on the new collider plans in a BBC interview: “We have to draw a line somewhere, otherwise we end up with a collider that is so large that it goes around the equator. And if it doesn't end there perhaps there will be a request for one that goes to the Moon and back.”

Particle physicists don’t currently have plans for an accelerator around the equator, but some of them have proposed we could place a collider with a one-thousand-nine-hundred-kilometer circumference in the Gulf of Mexico. What for? Well, you could reach higher collision energies.

However, even particle physicists agree that a collider the size of the Milky Way is not realistic. That’s because, as the particle physicist James Beacham explained in an interview with Gizmodo, unfortunately even interstellar space, with a temperature of about 3 degrees above absolute zero, is still too warm for the magnets. This means you’d need a huge amount of helium to cool the magnets. And where would you get this?

But even a collider around the equator would be a technological challenge. Not only because of the differences in altitude, but also because the diameter of the Earth pulses with a period of about 21 minutes. That’s one of the fundamental vibrational modes of the Earth and, by the way, more evidence that the Earth is not flat. The fundamental vibrational modes get constantly excited by earthquakes. But, as a lot of physicists have noticed, this is a problem which you would not have --- on the moon. The moon has very little seismic activity, and there’s also no life crawling around on it, so, except for the occasional asteroid impact, it’s very quiet there.

Better still, the moon has no atmosphere that could cloud up the view of the night sky. That is why physicists have long dreamed of putting a radio telescope on the far side of the moon. Such a telescope would be exciting because it could measure signals from the “dark ages” of the early universe. This period has so far been studied very little due to lack of data.

The dark ages begin after the emission of the cosmic microwave background but before the formation of the first stars, and they could tell us much about both, the behavior of normal matter and that of dark matter.

The dark ages, luckily, were not entirely dark, just very, very dim. That’s because back then the universe was filled mostly with hydrogen atoms. If these bump into each other, they can emit light at a very specific wavelength, 21 centimeters. This wavelength then stretches with the expansion of the universe and should be measurable today with radio telescopes. Physicists call this “21 centimeter astronomy”, and a few telescopes are already looking for this signal from the dark ages. But the signal is very weak and hard to measure. Putting a telescope on the moon would certainly help.
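For the curious, here is roughly where the dark-age signal lands today. The redshift range of about 30 to 100 for the dark ages is my illustrative assumption, not a sharp boundary:

```python
# Redshifted 21 cm hydrogen line: observed wavelength = rest wavelength * (1 + z).
REST_WAVELENGTH_M = 0.211  # 21.1 cm hyperfine transition of neutral hydrogen
C = 299792458.0            # speed of light, m/s

def observed(z):
    """Return (wavelength in meters, frequency in Hz) of the 21 cm line from redshift z."""
    wavelength = REST_WAVELENGTH_M * (1 + z)
    return wavelength, C / wavelength

for z in (30, 100):
    wavelength, frequency = observed(z)
    print(f"z = {z:3d}: {wavelength:5.1f} m, {frequency / 1e6:4.1f} MHz")
```

The signal comes out at a few tens of megahertz and below, exactly where Earth’s ionosphere and man-made radio noise are worst, which is part of why the far side of the moon is so attractive.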

This is not the only experiment that physicists would like to put on the moon, if we’d just let them. Just in February this year, for example, a proposal appeared to put a neutrino source on the moon and send a beam of neutrinos from there to earth. This would allow physicists to better study what happens to neutrinos as they travel. This information could be interesting because we know that neutrinos can “oscillate” between different types as they travel – for example an electron-neutrino can oscillate into a muon-neutrino – but there are some oddities in the existing measurements that could mean we are missing something.

And only a few weeks ago, some physicists proposed to put a gravitational wave interferometer on the moon, though the idea was originally proposed in the 1990s. Again the reason is that the moon is far less noisy than our densely populated and seismically active planet. The downside is, well, there are no people on the moon to actually build the machine.

That’s why I am more excited about another proposal that was put forward some years ago by two physicists from Harvard University. These guys suggested that to better measure gravitational waves, we could leave a trail of atomic clocks behind us on our annual path around the sun. When a gravitational wave passes through the solar system, the time that it takes signals to travel between the atomic clocks and earth slightly changes. The cool thing about it is that this would allow physicists to detect gravitational waves with much longer wavelengths than what is possible with interferometers on earth or on the moon. Gravitational waves with such long wavelengths should be created in the collisions of supermassive black holes and therefore could tell us something about what goes on in galactic cores.
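To see why a solar-system-sized detector reaches much longer wavelengths, note that a detector responds best to waves with wavelengths comparable to its baseline. This sketch just converts a baseline of about two astronomical units into a frequency, and is order-of-magnitude only:

```python
# Frequency of a gravitational wave whose wavelength equals the detector baseline.
C = 299792458.0      # speed of light, m/s
AU = 1.495978707e11  # astronomical unit, m

def corner_frequency_hz(baseline_m):
    """Frequency (Hz) of a gravitational wave with wavelength equal to the baseline."""
    return C / baseline_m

print(f"2 AU baseline: waves down to ~{corner_frequency_hz(2 * AU):.0e} Hz")
```

That is the millihertz band, orders of magnitude below what any kilometer-scale instrument on the ground can reach, and it is where the mergers of supermassive black holes mentioned above radiate.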

These experiments have in common that they would be great to have, if you are a physicist. They also have in common that they are big. And since they are big, they are expensive, which means chances are slim any of those will ever become reality. Unfortunately, ignoring economic reality is common for physicists. Instead of thinking about ways to make experiments smaller, easier to handle, and cheaper to produce, their vision is to do the same thing again, just bigger. But, well, bigger isn’t always better.


  1. Several issues. First, it is not just that it keeps particle physicists employed; others are interested in the results of pure science as well. Second, of course one has a finite amount of money and has to decide how best to spend it, but my guess is that describing why one's own ideas are better would be more productive than essentially saying that the others are caught in a bigger-is-better trap and can't escape. Third, we don't know if a bigger collider would discover anything new, but at least it will allow more precise measurements of known physics. Historically, this has happened before, with PETRA at DESY. However, the main point of doing such experiments is because we don't know what they will find; if we did, why do them? As Robert Pirsig said, the TV scientist who sighs "Our experiment is a failure; we didn't find what we expected" is suffering mainly from a bad script-writer. Yes, you might see these as pie-in-the-sky toys for boys or whatever, with little if any practical use, but your own research is seen in the same way by well over 99 per cent of the population. (As my late history teacher used to say, just an observation, not a judgement.)

1. You are missing the point. I am objecting to the "let's just make it bigger" philosophy that too many physicists have signed up to. To me it mainly signals a lack of willingness to think. I have addressed the "let's just look" argument many, many times before: You can perfectly well "just look" with experiments that make more financial sense, so it's not an argument for big and expensive experiments in particular.

2. I mostly agree, but one reason to discuss such ideas is a psychological one. If one is feeling pessimistic about progress in fundamental physics, such ideas let one think "Well, even if we don't make any theoretical breakthroughs, and even if we don't think of any clever small experiments, eventually technological progress will allow us to build these really big experiments and then we will learn something". A sort of floor for the worst case scenario. (Of course, this ignores the possibility of economic/technological progress stopping or even human extinction).

      [Full disclosure: I have also designed a "let's just make it bigger" collider - arXiv:1704.04469. But that wasn't entirely serious...]

      By the way, there is something funny about Beacham's numbers - a Planck energy circular collider the size of Neptune's orbit would need a magnetic field of roughly a million Tesla. We have no idea how to build such strong magnets, and even if it's possible, there is no reason to assume they would need to be cooled to liquid helium temperatures. But that's a detail.
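      That million-Tesla figure is easy to sanity-check with the standard bending-field rule of thumb, B[T] ≈ p[GeV/c] / (0.3 r[m]); the Neptune-orbit radius below is an approximation:

```python
# Magnetic field needed to bend a Planck-energy particle around Neptune's orbit,
# using the rule of thumb B[T] = p[GeV/c] / (0.3 * r[m]).
E_PLANCK_GEV = 1.22e19  # Planck energy in GeV
R_NEPTUNE_M = 4.5e12    # radius of Neptune's orbit in meters (approx. 30 AU)

B = E_PLANCK_GEV / (0.3 * R_NEPTUNE_M)
print(f"Required bending field: ~{B:.0e} T")
```

      So the order of magnitude checks out, even if, as said, the cooling requirements are a separate question.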

3. Phillip Helbig, 5:22 AM, August 08, 2020

      " However, the main point of doing such experiments is because we don't know what they will find; if we did, why do them?"

      Jon Butterworth wrote in one of his "Guardian" articles:
      " The LHC can probe structures about a hundred-million times smaller than an atom. The scale at which problems with quantum gravity definitely become unavoidable is about 10^17 ... smaller still. "

      If you want to take these odds, you pay for it. Some of my taxes have already been spent on your education and research, and yet you don't even understand modus ponens after several decades.

      The entitlement is staggering. You are completely incompetent in your field, and yet you want more money. You are not even honest.

      When are you going to withdraw your unjustified claim that there is empirical evidence that the universe is fine-tuned, and when are you going to update your review of "A Fortunate Universe" and point out it's a pack of lies, schoolboy errors and mental delusions?

4. Phillip Helbig, 5:22 AM, August 08, 2020

      "Yes, you might see these as pie-in-the-sky toys for boys or whatever, with little if any practical use, but your own research is seen in the same way by well over 99 per cent of the population."

      You seem to be getting increasingly bitter towards Dr. H., but it was you who wasted the opportunity you were given to write an objective review of "Lost in Math" and instead make scientifically unjustified, nonsensical claims about fine-tuning. If you write such nonsense, it is bound to be pointed out to you. So take responsibility for your own mistakes like an adult and quit the pathetic digs.

      It doesn't matter that 99% of the population don't understand Physics, the question is which research and investments are worthwhile and which are not. People like Peter Woit and Dr. H. are right to point out where theoretical physics is going in the wrong direction. Ed Witten is of course a genius - he has been paid very nicely with taxpayers' dollars his whole life for studying a theory that can never be disproved. The new clergy.

    5. This comment has been removed by the author.

    6. "The entitlement is staggering. You are completely incompetent in your field, and yet you want more money. You are not even honest."

      I cannot believe that Sabine Hossenfelder allows such a disgraceful comment by Steven Evans (of no known competence) to stand. The comments on this blog are descending into the muckyard of internet bullying.

    7. ppjm,

      As I have said a few times before, I don't read most of the comments. For the biggest part I just assume commenters behave like reasonable people.


      Could you PLEASE stop throwing around insults like toddlers throw noodles, thank you.

  2. *** "Unfortunately, ignoring economic reality is common for physicists"

    ---> It really looks like that. The flip side of the coin is that ignoring physical reality is common for economists, and even more so (thence the myths of perpetual growth, the "dematerialization of the economy" (sic), peak oil being irresponsibly ignored, similar thing for global warming, and so on).

    *** "Instead of thinking about ways to make experiments smaller, easier to handle, and cheaper to produce, their vision is to do the same thing again, just bigger. But, well, bigger isn’t always better".

    ---> In fact, an interesting and synthetic piece of advice that I've read somewhere (I've not been particularly good at applying it, though) is that BIG IS BAD. Very frequently it is, indeed.

    I too think that the sheer willingness to keep on simply using a bigger sledgehammer (almost literally, for little particles are smashed in the end and as a goal) simply stems from intellectual laziness. Exploring and inventing is difficult. But doing the same again (just larger) is comparatively very easy.

One correlation that some authors have noted is that liberal and/or democratic societies tend to miniaturize things and to make them more efficient. Dictatorships in turn have a penchant for megalomania, to which inefficiency is usually also coupled: let's think of Soviet structures and technology as an example, and even how that culture pervaded other societies (industrial facilities in the defunct DDR, for instance, were notoriously inefficient, I believe I learnt somewhere).

    [. . .]

  3. [. . .]

    One particular example of "small is beautiful" in action is the culture of Japan. The Japanese are famous for their dexterity and attention to detail. They are literally "a culture of detail" and are very concerned with smallness, subtlety, gracefulness and ineffability. "Only what's invisible is Japanese", said Yukio Mishima. I read a lot about their culture in the past and the spell it cast on me is still there.

    My feeling now is that they're so good at all that because they cultivate stillness of mind and concentration. The goal of education there is not "self-development" (as is the case in the West), but "self-control". And they do not seek Transcendence, but Immanence (what are things essentially and in themselves?).

    Somehow they use their "energy" (psychically speaking) better than we do, and in no way allow vulgar things from the outside to disturb their inside. Deep down they're very "religious" people. The religious person is the one that realizes that everything is bound or connected (that's what "religio" initially meant in Latin: bond, connection, liaison): the trembling of a blade of grass here has an effect even on the most distant star. The Japanese are ok with silence, and even with not thinking; very much like the Koreans also, but quite in contrast at that with the much louder and even noisy Chinese. These are broad generalizations, ok, but there's substance to them.

    This is bringing to mind two interrelated ideas that I learnt in the recent past. One is the concept of VIA NEGATIVA (Latin expression) in the sense of "refraining from doing something", as explained in N. N. Taleb's work "ANTIFRAGILE". The other one (itself arguably a grand exercise of Via Negativa) is the work "HARE BRAIN, TORTOISE MIND: WHY INTELLIGENCE INCREASES WHEN YOU THINK LESS", by Guy Claxton. Surely all these things are related. Claxton's book contains a lot of food for thought regarding the connection between calmness and scientific creativity. A businessman or a bureaucrat is not in the best position to "create" anything; quite the opposite. That colossus of technology called Nikola Tesla also thought that the secret of invention was "living alone".

Let's look at the USA nowadays by comparison ... They seem to think that "More"/"Bigger" always means "Better". More money spent on whatever. More bombs with more megatons ( = "better" army). Hollywood hogwashes featuring more spectacular explosions and more bizarre special effects plus intended sensory overload ( = "better" movies); etc. And meanwhile intellectual levels are in free fall with no end in sight. Very very sad, especially because Europe might soon follow their trail. It really feels like there is a conspiracy to blow up the very mechanism of attention/concentration in the West. No wonder that the count of ADHD cases is skyrocketing in those countries, and for very understandable reasons.

4. First a minor quibble. A fictional collider circling a galaxy would not have a temperature of 3 K. Interstellar space is somewhere around 20 K, and at the perimeter of a galaxy it is probably 10 K or so.

To ask what is beyond the horizon is not just a matter of scientific curiosity but is something inherent in us. I think there should be efforts along these lines, but in one sense just expanding what we know how to do may not be the best approach. If one told an indigenous person that we could communicate vast distances, they might imagine people beating out a message on some enormous drum. It would then seem we could work out the science and engineering to accelerate particles into the 100 or maybe even 1000 TeV range of energy with a machine no larger than the LHC.

    The question to my mind is whether elementary particles are light weight relative to the Planck mass-energy or whether there exists a mass-energy ladder of particles beyond that. There might also be sphaleron physics in this domain. To absolutely abandon this seems unfortunate.

The plan to put atomic clocks in a chaser orbit with the Earth is not too different from the eLISA system. These laser interferometric spacecraft are on orbits with the same eccentricity and periapsis but different longitudes of ascending node and inclinations, so they form a triangle that rotates behind the Earth. A gravitational wave passing through this region is detected by the interferometer on a geodesic shielded from solar wind and other perturbations by the craft.

    Putting radio telescopes on the moon is not that big a project. A landing vehicle could unfold the telescope and these could be deployed in a small array with optical communication by laser ranging. That is not a terribly difficult thing to arrange. At least I do not see it anywhere near the complexity of Elon Musk’s rather insane idea of sending people to Mars.

    1. LAWRENCE said "(...) Interstellar space is somewhere around 20K (...)"


      That's a lot. I used to think it was something like 2.7 K

      Did you really mean 20 K and not perhaps 2 K?

If it were 20 K, then it'd be a mere 15 K or so below the temperature on the surface of Pluto (admittedly a cold place, but still). It doesn't feel plausible.

2. To reach 2.7 K you have to go out into intergalactic space, and better yet into the voids between galaxy clusters, so that the main contributor of radiation is the CMB. Within a galaxy, stars contribute a bit of EM radiation and there is a net irradiance. The figure 20 K sticks in my mind. If you were on the perimeter of a galaxy, this temperature would be lower.

3. Pop science and the media like to pretend nature is simple. She's not! As Lawrence pointed out, 2.7 K is the CMB temperature - deep intergalactic space. Interstellar temperatures (i.e. within galaxies) vary widely according to the local environment, and 20 K is not unusual. See for example "The thermal state of molecular clouds in the Galactic Center: evidence for non-photon-driven heating", arXiv:1211.7142. The authors report a temperature range between 50 K and 100 K for the central molecular zone of our Galaxy.
      For comparison, the New Horizons mission measured Pluto's temperature as about 72 K; NASA gives a lowest value of c. 33 K.

  5. @Quantum Bit,
Please, no essays (I sometimes make this mistake too); you can get your message across in fewer sentences. Otherwise it makes the moderator's job extremely difficult (for nothing, actually), and no one will be able to follow. This results in your message being ignored, and it discourages everyone else who is about to leave a comment.

  6. I don't think the Internet has helped at all with the incidence of ADHD. It's another symptom of affluenza that the economist Galbraith described so well.

    1. The rise of ADHD and the progressive reduction of the span of attention of the average person may be put down to:

      *** Use and abuse of INFORMATION TECHNOLOGIES (it is said the elite do not allow their children to use their cell phones for more than 30 minutes a day).

      *** TOO MANY INPUTS or stimuli (sensory overload), which partly stems from what's stated in the previous point: advertisements also play a role. Our culture is ever more imbalanced towards VISUAL stimuli; at least more than was the case in the past.

      *** The culture of PROGRAMMED OBSOLESCENCE and ephemeral consumer articles: everything is designed for rapid usage and subsequent disposal, because this way the banal wheel of consumerism keeps on turning.

*** Some weird PARADIGMS that are being officially enforced in the education system: ideas such as "effort", "concentration", "repetition [drilling]" and others are increasingly becoming taboo as per the newest, supposedly 'innovative' pedagogical methods; at least in the West. Asians are still aware of the power of controlled Attention.

      *** Pesticides and other CHEMICALS in the food and in the stuff around us: bromine compounds for instance continually give off this nasty gas (Br2); it's in many plastics and in the colourant additives of diverse commonplace objects. The effects in the middle/long run are endocrine disruption, cancer and (likely) ADHD-related pathologies. Beware of cheap stridently coloured stuff (typically clothing but not only, made in China, India, ...) heaped up in small and badly ventilated rooms in which people may even sleep, unaware of this danger.


      Furthermore, ADHD may be currently being overdiagnosed owing to a variety of reasons: it's like a wildcard with which to justify many things in certain contexts.

    2. Excuse me, but this is a forum about physics. What are, ahem, marginal ideas about ADHD doing here?

3. Without [concentrated] ATTENTION (not an especially valued commodity in the West these days), there's no possible way one can achieve new insights or make discoveries. At least for me (as for many cultures in Asia; EuroAmerica is a different place) it's more valuable than gold.

      Remember Newton's stated "method" for doing what he did, supposedly "by keeping the ideas/problems in his mind all the time", etc.

      A big part of the current predicament of the so called Big Science is intellectual laziness creeping in all the time while little is done to bail it out.

Gigantic megalomaniac structures may be necessary, granted. But they may also be a way to "advance" without really taking the pains to explore new avenues. The BIG THING route may easily beget groupthink and self-serving "political" institutions that by their very nature cannot be in the best interest of science.

      And soon if not already, when Artificial Intelligence and in particular Deep Learning are applied to crunch huge masses of data, we may end up with a host of mathematical models that fit said data very well, but lack "meaning" or a verbal interpretation that our chimpanzee brains might understand. Of course, it's not preordained that we should be able to understand an ultimate theory or model of Everything. The opposite situation seems far more likely.

With the astronomical measurements available in the 16th century (Tycho Brahe's) and a humble Excel spreadsheet, it would be a breeze to produce Kepler's laws, but not necessarily (or at all) the conceptual* framework of Newtonian mechanics.

7. It takes a giant sledge-hammer to break up some very small and hard nuts! I'm guessing that the energy required to break them is due to the binding energy of said particles. Is that right?

I think LIGO is a successful small physics experiment. I also think it's going to augur a new age in gravitational astronomy, of which we've only seen the early hints.

I also like the Japanese-built solar-sail-powered spacecraft that sailed to Venus in about six months a few years ago. Another one I like is the work explaining how bumblebees can fly even though standard aerodynamics says they can't (I learnt this from a Doctor Who novelisation when I was a teenager). Apparently it's because bumblebees generate vortices on the downstroke of their wings, which gives them three times the lift that aerodynamic textbooks say they should have.

8. Are there any thoughts about putting a gravitational-wave interferometer in space? I would think space (somewhere out there) would be even quieter than the moon and, perhaps, easier for construction.

    1. And there is even a Wikipedia page for "fine-tuned universe".

      "If the values of any of certain free parameters in contemporary physical theories had differed only slightly from those observed, the evolution of the Universe would have proceeded very differently "

      Do you see the "If" there? Can you pronounce it? Do you understand what "if" means?

2. Phillip Helbig, 3:46 PM, August 14, 2020

      I'm OCD because I care about evidence for claims? Imagine if Martin Rees cared about evidence, he would have saved himself 40 years wittering on about fine-tuning the multiverse; and Ed Witten would have saved himself 30 years wittering on about strings; Geraint Lewis and Brian Schmidt would have saved their reputations; and you wouldn't be wasting your time writing a paper with an evidence-free claim.

      All comes down to evidence.

    3. Watch a three-day LISA symposium (space-based gravitational-wave interferometer) online for free:

4. Phillip Helbig, 11:01 AM, September 01, 2020

      We can move Physics knowledge forwards via new discoveries or by education.

      Do you now admit there is *zero* evidence for universal fine-tuning and that it is therefore pure speculation? (Since you can't provide any supporting empirical evidence.)

      Let's do this. Let's improve Physics knowledge in the world by 1 more person understanding that Physical theories have to be supported by empirical evidence to be considered true. You can't just make nonsense up. Do you get this, yet?

  9. As an uneducated non-scientist, all I really know about the LHC is that it's given us the mass of the Higgs boson. The existence of the particle itself was already confidently predicted by theory, so what we got for the $15 billion was basically the number 125-point-something.

    But is this the whole story, or just a good sound bite to use when ridiculing Big Science? What has the LHC really delivered in terms of contributions to physics? What's been learned from it, besides that number?

10. In the current paradigm it seems that scientists are trying to work out what is. A theory, or, better word, model, is a simplification of reality that is useful for predicting things. I think it is time to leave "what is" now and realize that reality can be described in useful terms from several different perspectives. My prediction is that as matter and energy were unified, information will be the next step. It's all just information being processed. And as such we will never reach the processing of bits and bytes, because we exist only in the information realm.

  11. In a previous comment I mentioned the book "ANTIFRAGILE", written by Dr. Nassim Nicholas Taleb. I read it very thoroughly in October of 2016. That work contains lots of ideas that are pertinent regarding the issue at hand.

    One of them is the VIA NEGATIVA concept on which I elaborated to some extent above.

    Another pertinent idea is the so-called AGENCY PROBLEM. It arises when somebody is paid to produce something that benefits you (or perhaps society as a whole) while simultaneously the benefit of the Agent itself (thence the phrase "agency problem") is the fact of being paid, irrespective* of whether the job was well executed or not.

    This explains the existence of people who try to sell us junk all the time; the successful sub-species that we might label as self-serving politicians (no good to society, but they have a job and the associated high-wages); the existence and advertising of medical procedures that cost a fortune but are of no benefit to the patient and can even harm him/her; etc.

    This AGENCY PROBLEM might* be at play regarding this matter of ever larger particle accelerators.

    According to N. N. Taleb, the Romans for instance, pragmatic people as they were, had an arsenal of rules of thumb with which to curb the effects of the Agency Problem (a universal ailment that has befallen all places and eras).

    Imagine for example a Roman architect or civil engineer in charge of building a bridge. She might build a shoddy bridge that will collapse in two or three years, but by then the builder will be far away and/or unaccountable. The Roman solution was that the engineer or architect had to live under that bridge with her family for a period of time, after completion of the project. This would ensure real care upon carrying out the project to start with, instead of just going through the motions merely with a monetary reward in mind.

  12. A book relevant to the issue at hand:


    This seems to be a pithy work about the perceived pitiful predicament of modern Physics.

    Some of the REVIEWS (of which an excerpt below) beg reading and re-reading:

    REVIEW #1: [authored by sbdy who uses as a nick "Renegade Physicist", and EXCERPTED and cut shorter here]

    Super-Egos and Modern Theology

    This is the book that has needed to be written for many years, as modern physics has gone off the rails since 1960 with its dangerous amalgam of mathematical fantasy and experimental gigantism. Ever since the physicists made their Mephistophelean deal with the US military, they have conned the public with their unending list of weird particles; eventually, in July 2012 CERN announced their ultimate sleight of hand – the magical Higgs particle! Unsurprisingly, the Nobel Committee awarded their prize for physics in the following year to bless this hugely expensive ($9 billion) enterprise.

    Unzicker is a science journalist that has been tracking these ‘atom-smashing’ projects for several years and is almost unique in challenging the researchers both directly and in print. As a result, he is now able to share his deep criticisms with the general-public, some of whom have been long captivated by this modern saga. He does a magnificent, ‘no-holds’ hatchet job on this merry band of 10,000 bandits led by famous academics with super-sized egos (...)all well rewarded with Nobel prizes and huge salaries) while promoting abstractions that can best be understood as modern theology. The whole process illustrates consensus construction (‘group-think’) through sociological pressures. This project is one of today’s major secret scandals.

    Far too many careers in physics, over the last 50 years, have been constructed around these endeavours, so that few professional physicists dare join Unzicker and risk their own careers by publicly criticizing what has become the orthodox mainstream of academic physics. As one who decided many years ago that modern physics had just become a “math game” and resigned from professional physics to pursue real world opportunities, I have no hesitation in adding my informed support to Unzicker’s attacks on “baloney”.

    I share Unzicker’s respect for the ‘giants’ of quantum mechanics (Dirac, Einstein, Bohr, Pauli, de Broglie, etc.) who moved our investigations down to the atomic level in the 1920s and 1930s. This reductionist program went off the rails when the search for smaller and smaller components of matter was pushed below the nuclear level while many unsolved problems still remained at the atomic level. In fact, the whole research program needs to be reversed: science needs to start investigating the synthetic challenge of how larger and larger aggregations of matter arise in nature (...) Indeed, contrary to modern mythology, quantum mechanics itself is still riven with massive problems of interpretation (such as ‘waves’ and ‘spin’); the truth is that all we have are “mathematical recipes” for calculating simple results in trivial situations.

    [. . .]

    1. So, the Higgs doesn't exist? Buddy, you can believe anything you want, but you're not going to convince anybody of that with a non-technical pile of verbiage like this.

    2. Unzicker is a manifestation of this digital hypermanic age we are in. Oh and yeah, he is wrong.

      Peter Higgs advanced the idea of the Higgs field as a way of curing a problem in QFT. A particle with a mass has a dispersion, seen in a Green’s function

      G(k,k’) = (1/4π)(k^2 - k’^2 - m^2 - iε)^{-1}

      where the mass gives a different propagation for different wavelengths 2π/k. This results in a divergence at high energy. So Higgs used Ginzburg-Landau potential theory to show that this mass at high energy can be taken up by a scalar field, so that the gauge field is massless at high energy. This has been highly successful, experimentally verified, and is a centerpiece of modern QFT.
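      For reference, the standard momentum-space (Feynman) propagator for a free scalar field of mass m, which the formula above paraphrases, is conventionally written as:

```latex
% Momentum-space Feynman propagator for a free scalar field of mass m
G(k) = \frac{1}{k^{2} - m^{2} + i\varepsilon}
```

      The pole at k^2 = m^2 encodes the mass-dependent propagation, and the iε prescription fixes how the integration contour passes around it.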

      Unzicker is promoting zombie science, and we see this in anti-big-bang stuff, Arp cosmology, the electric-universe theory; the zombies get stinkier and more rotten, and it leads into flat Earth. I think one problem is that every major global upheaval comes in the wake of a technological change in communications and media. The early 20th century saw the telephone, radio and cinema, and before the American and French Revolutions there were changes in printing technology for pulp publishing and newspapers. Goebbels framed Nazi ideology around the use of radio and film, and we seem to be seeing similar alt-reality ideas and extremist politics rising in much the same way with the internet and computing with CGI abilities. In this time, we have this phenomenon of zombie narratives, either conspiracy stories or pseudo-science, where, in line with zombies, you can shoot them but they are hard to kill, and even if you do, others come forth. It is not hard to shoot ducks in a carnival shooting gallery, but they pop right back up. It appears that with big changes in media technology humans have a harder grasp on what is demonstrated, factual, and truthful in contrast to what is fictional and rubbish.

      Sorry to get on my soapbox about this again, yet with Covid this has come to play out in a sickening way, where in my country we have totally screwed this up. Further, this really is just a test model for global energy-entropy issues such as climate. We may deny ourselves into oblivion.

    3. Peter Higgs is actually on record for saying that the way he did research back then simply wouldn't be possible in today's climate of publish or perish.

      I also think global warming has a great deal more to do with neoliberal politics and market fundamentalism than technology. It's one of the biggest market failures of all time.

    4. Lawrence,

      Re “It appears that with big changes in media technology humans have a harder grasp on what is demonstrated, factual, and truthful in contrast to what is fictional and rubbish”:

      Yes, and the most untruthful, “fictional and rubbish” idea going round is the demonstrably false idea that computers/ AIs could be conscious. A lot of science people, who should know better, are propagating fake news about computers. Perhaps their minds have been too influenced by science fiction.

      So, it's not as “them and us” as you might like to think.

    5. I do not see the idea AI can be made conscious as something driving society into this crescendo of group-thought mania. I am agnostic on the prospect for conscious AI. I see this issue as more of a question that at this time is unanswered.

    6. Lawrence,

      The point I’m making is that you are essentially no different to these people in society you accuse of “group-thought mania”:

      1. These people should be able to understand climate change or coronavirus issues, but apparently they can’t. Similarly, you should be able to figure out why computers/ AIs can never be conscious (the reason being that symbols of information are not identical to conscious information), but apparently you can't.

      2. These people might consider that climate change is a “question that at this time is unanswered”, just like you consider that consciousness in computers/ AIs is a “question that at this time is unanswered”.

      3. You accuse these people of “group-thought mania”, but the demonstrably false idea that computers/ AIs could be conscious seems to be an example of “group-thought mania”, particularly in some groups of men.

    7. I disagree. Whether or not CO_2 production is resulting in climate heating is decided. The effect is real. Whether AI can be made into a conscious entity is an open question, even though you strongly think it cannot.

    8. Lawrence,

      Obviously, our CO2 production is resulting in climate heating. But the Chinese Room argument is 100% watertight. There is no “open question” about AI, there are only quite a lot of stupid people who don’t understand what symbols are.

      You are no different to those people in society you accuse of “group-thought mania”. You don’t understand the Chinese Room argument about computers/ AI, just like they don’t understand the arguments when it comes to climate or COVID-19.

    9. Lorraine,

      You are not doing yourself a favor when you accuse other people of not understanding things they clearly understand better than you do.

    10. Daniel Dennett gave an argument to refute Searle’s Chinese Room argument. If I recall, Dennett argues that if Searle were correct, then people and animals would all be zombies. The Chinese room is just a model for any sort of system, whether it be electrons in silicon crystals or action potentials on neural axons and dendrites.

      I would argue that it is unclear what is meant by attempting to simulate a mind. In fact, I find the Chinese Room suffers from a problem that computers in general suffer from. The input-output of computers is extremely limited compared to that of a brain. One’s fingertips alone have 100,000 tactile neurons, whereas a computer has keyboards, monitors and mice; the standard PC-level machine only has one of each. A brain, and an animal possessing such a brain, is a highly open system, and most AI systems are much more closed. Can this be simulated? A standard computer has a single cache system of operations, and while with appropriate stack operations multi-tasking is possible, this is not the same as a brain performing multiple functions. Then there are parallel computations, but that still requires a permuter that brings it back to the von Neumann system. Can neural networks then fit the bill? It is all unclear. There is no straightforward definition of what it means to simulate a mind in any system.

      The AI argument has in its favor the argument that there is nothing particularly special about one form of matter or another. Organic carbon-based matter has nothing more “mindful” or “psy” than any other forms of matter. Now organic molecules have a vast range of complex forms, which might make some difference. However, how complexity gives rise to mind is unclear. Yet obviously biological systems manage to generate this thing called consciousness. There is nothing magical about organic matter.

      We are faced with a big open question. A part of this is that we do not have an objective understanding of what consciousness is. Our main understanding is subjective. We have a lot of neurophysiological data on correlations subjects report with real-time PET scans of brains. As such, physiologists have lots of data on what is correlated with consciousness, but we do not have a clear definition of just what consciousness is. Without a clear definition it makes limited sense to even talk about simulating consciousness, or the inability to do so.


    11. Sabine,

      When it comes to symbols and symbolic representation, and how computers work, Lawrence clearly doesn’t understand better than I do.

      Computers/AIs can never be conscious because they process “binary digits”, a human devised system of symbols which only mean something from the point of view of human beings. Voltage is a category of physical information which is lawfully related to other categories of physical information. But with binary digits, the circuit designers decide that a certain range of voltage numbers will mean 0 or false, and another range of voltage numbers will mean 1 or true. I.e. binary digits are not a proper category of physical information which is lawfully related to other categories of physical information. From the point of view of the universe, as opposed to the point of view of human beings, binary digits don’t exist, so they can never be a basis for consciousness.
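      Lorraine's point that the voltage-to-bit mapping is a human convention can be made concrete in a short sketch; the threshold levels below are hypothetical, TTL-like values chosen purely for illustration:

```python
# Illustrative sketch: interpreting a voltage as a binary digit is a
# designer's convention. The band limits here are made-up, TTL-like levels.

def voltage_to_bit(volts):
    """Map a voltage to 0, 1, or None (undefined band), per one convention."""
    if volts <= 0.8:      # "low" band -> logical 0
        return 0
    if volts >= 2.0:      # "high" band -> logical 1
        return 1
    return None           # forbidden zone: the convention assigns no meaning

print(voltage_to_bit(0.2))   # 0
print(voltage_to_bit(3.3))   # 1
print(voltage_to_bit(1.4))   # None
```

      A different design team could invert the convention (active-low logic) without changing any physics, which is exactly the arbitrariness being argued about here.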

    12. I first saw Searle's Chinese Room Argument many years ago, as conveyed by Dr. Penrose in his book "The Emperor's New Mind". I had great respect for Dr. Penrose, but his book failed to convince me of its thesis, as did the SCRA. I reasoned then that the individual brain cells of a Chinese speaker do not understand Chinese either, and Searle was playing that role (as a part of a whole apparatus) in the SCRA. Whether or not he understood Chinese didn't matter as to whether the whole apparatus understood Chinese.

      Refreshing my understanding of the SCRA on Wikipedia, I see that Searle tried to combat that argument by stating that he was not a part of the apparatus but in effect the whole, doing syntactical operations whose results he did not understand. (Meanwhile he has conceded the brain is a biological machine, so it is possible for machines to think, but not by purely syntactical means; there must be some other operations brain cells are performing.)

      It seems to me that operation is neural networks, which are not syntactical in the sense Dr. Searle used. He was referring to recipes (such as an applications programmer writes to perform a simple task), sets of instructions which he could carry out in his head or with paper and pencil and a calculator. There is one and only one way it is practical for a single human to perform the operations of a neural network large enough and trained enough to decipher Chinese and respond to general questions based on life experiences in that language: by learning Chinese (which will be accomplished by training his own brain neurons). So by insisting on his definition of syntactical instructions he was assuming what he needed to prove.

      By the way, Dr. Mitchell's book "Artificial Intelligence" starts with a basic review of neurons and states that they are connected to and receive chemical and electrical signals from other neurons, and if the inputs reach a certain threshold a neuron "fires", sending a signal to its connections. Below that threshold it does not fire. According to her, it is a binary function (fired or not fired, true or false) which can be, and is, represented in digital computers as 1 or 0. So you have the equivalent of binary digits reading 1 or 0 in your head all the time, as do all Chinese speakers and other language speakers.
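      The fired/not-fired picture JimV describes corresponds to the classic McCulloch-Pitts threshold unit; here is a minimal sketch, with made-up weights and threshold purely for illustration:

```python
# Minimal McCulloch-Pitts style neuron: fires (1) iff the weighted sum of
# its inputs reaches a threshold, otherwise stays silent (0).

def neuron_fires(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Hypothetical weights/threshold, for illustration only.
w = [0.5, 0.9, 0.7]
print(neuron_fires([1, 0, 1], w, threshold=1.0))  # 1 (0.5 + 0.7 = 1.2 >= 1.0)
print(neuron_fires([0, 1, 0], w, threshold=1.0))  # 0 (0.9 < 1.0)
```

      The output is the binary "1 or 0" that the comment says digital computers represent directly.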

    13. Lawrence,

      I consider that Daniel Dennett, and maybe a lot of other philosophers, doesn’t use logic: he uses spaghetti-logic. I think it’s a big mess.

      The Chinese room is most definitely not “a model for any sort of system”. The Chinese room argument is about symbol use. If one scribbles a letter of the alphabet on a piece of paper using a biro, the paper and the biro ink are materials that are subject to laws of nature, but the scribbled symbol is merely a shape or pattern created by human beings which is not subject to laws of nature. Symbols are shapes and patterns, and other such things like binary digits, which are not subject to laws of nature because they are not reducible to proper categories of information such as you would find in the laws of nature. So voltage is a category of information, but individual, or groups of, binary digits are not.

      So, the reason computers/AIs can never be conscious has nothing to do with consciousness per se. Symbols like individual binary digits, groups of binary digits, and written and spoken words, are not “known to the universe” (so to speak): they are only known to human beings.

    14. @JimV: Your statement that, by that logic, even the Chinese speaker would not understand Chinese, because in the end their neurons are doing the same thing, is similar to Dennett’s argument.

      @Lorraine: Read Jim above. That the Chinese Room involves the use of symbols is not that different from what the brain does. It in effect uses action potentials to parse them into various symbols. Our perception of colors is in a sense symbolic. A green-wavelength photon at 520nm has nothing particularly green about it. Yet certain cone cells in the retina have rhodopsin that changes its shape upon interacting with a photon of that wavelength, which opens a channel gate, which in turn sets off an action potential. That is analogous to a binary signal or bit that is processed through the thalamus and then an ocular column in the occipital lobe of the brain. In the end there are symbol-like processes and events going on, which we perceive as a signal for green.

    15. The problem with the Chinese Room is that it begs the question. It's not an argument at all.

    16. "Symbols like individual binary digits, groups of binary digits, and written and spoken words, are not “known to the universe” (so to speak): they are only known to human beings."-Lorraine Ford

      Please ask yourself how such symbols are "known to human beings" but not to feral children (see Wikipedia). My hypothesis is that the neural networks in human brains found correlations between recognized patterns, starting about 200,000 years ago when homo sapiens emerged (possibly inheriting some of these symbols from semi-human precursors) and gradually developed and added to them over time, culminating in the wheel-and-axle concept about 6000 years ago, and proceeding rapidly from there. (Most of our technology, including computer hard drives, makes some use of that concept.)

      Under that hypothesis, which does not contradict any known facts of which I am aware, vast neural networks in digital computers could inherit such symbols from us and go on to develop their own, as children inherit them from their parents and schooling. Or they could be kept ignorant and restricted to step-by-step tasks which do not allow them to learn things for themselves, as neural networks can. (I do not make any moral judgement about those alternatives; or even about those who deny computers' potential, as the potential of human slaves was once denied. I don't think I have the standing to do so, nor the inclination. As Dr. Searle would acknowledge, we are all biological machines, with fallible biological machines for brains.)


    17. You wrote

      "(...) But the Chinese Room argument is 100% watertight. There is no “open question” about AI, there are only quite a lot of stupid people who don’t understand what symbols are (...)"

      Which left me slack-jawed in awe. We must have different standards of logical rigor, or perhaps different intuitions about what is right and what isn't.

      I read a debate about the Chinese-room argument many years ago. It was published in the local edition of Scientific American corresponding to March of 1990. On the one hand there was this philosopher John Searle, whose article had the title "Is the mind a computer programme?". In it Searle supposedly proved that computers will never be sentient / possess mental states.

      And then there was another article. Its authors were Paul & Patricia Churchland, I think I remember. Some years down the line I would learn about Artificial Neural Networks for the first time thanks to a beautiful, wonderful book of theirs: "A Neurocomputational Perspective: The Nature of Mind and the Structure of Science". Even the cover picture is a boon, and very inspirational.

    18. My view is that Searle's argument is fallacious in many different ways. Perhaps even specious; certainly equivocating. Even back then (I was a teen), Searle's whole line of reasoning came through to me as very pedantic and simple-minded. I felt that in his dubious exposé he was substituting lofty language for real, genuine understanding of the phenomenon under consideration and its subtleties, which he couldn't grasp at all.

      But then the Churchlands in their article gave an appropriate response and meticulously dotted the I's and crossed the T's for Searle (probably to no avail anyway).

      I can't reproduce the whole article (the Churchlands'), much less from memory, but I remember they mimicked the Chinese-room argument in order to "prove", à la Searle, that light is not an electromagnetic wave. They did a perfect job of imitating what Richard Feynman in his books at some moment scornfully calls "salon philosophers" or "coffeehouse philosophers" ...

    19. Imagine (the Churchlands' argument goes) that you are in a dark room holding a magnet in your hand. Then you suddenly start shaking the magnet back and forth, as frenziedly and violently as you can. While you exert yourself doing this, you are clearly aware that the room keeps on looking pitch-black. You can see nothing and hear nothing, except perhaps your own panting while you shake the magnet in the air.

      Therefore, once this experiment is over, we can philosophically conclude that the mere motion of magnets, light does not produce. Or maybe better yet, in more high-key, philosophical terms:

      "The mechanical phenomenon of a Magnet's oscillatory motion is bereft of luminal powers"

      Then and for extra philosophical effect add as a scholium that dark rooms (Chinese or otherwise) in which magnets wiggle keep on being ... dark.

    20. OF COURSE, we know that the problem is just that you cannot physically wiggle (shake, accelerate back and forth) the magnet fast enough.

      IF (imaginarily) you could manually wiggle the magnet around 400 (American) trillion times per second, i.e. with a frequency of circa 4*10^14 Hz, then your eyes would see ordinary light emanating from where the magnet is moving. At first a dim light, but then ever more intense as the frequency of the wiggling rises.
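      The "circa 4*10^14 Hz" figure checks out against f = c/λ; a quick back-of-envelope computation with the usual visible-light wavelengths:

```python
# Frequency of light as a function of wavelength: f = c / wavelength.
c = 299_792_458  # speed of light in m/s

for label, wavelength_nm in [("deep red, ~750 nm", 750), ("green, ~520 nm", 520)]:
    f_hz = c / (wavelength_nm * 1e-9)
    print(f"{label}: {f_hz:.2e} Hz")
# deep red, ~750 nm: 4.00e+14 Hz  -- the "400 trillion per second" figure
# green, ~520 nm: 5.77e+14 Hz
```

      So the long-wavelength edge of the visible band is indeed roughly 4*10^14 oscillations per second.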

      By the end of the article the Churchlands said that the Chinese-room argument proves nothing: it might as well be used to prove that the brain of a native speaker of English does not understand English. The truth is that "one single neuron of our brain does not understand it, but the whole brain does".

      We might fashion another similar experiment with stones ... If electrostatic effects are ruled out, then two pebbles do not seem to attract each other, hence the whole idea of gravitation must be amiss -> same fallacy as before.

    21. Your symbol-related objections are feeble or even irrelevant. Imagine a smallish drone equipped with pattern-recognition technology and that is able to spot mosquitos on the fly and zap them with an inbuilt laser.

      We might elaborate at length about the software "animating" the device, how it was specifically coded and in which computer language, what was the syntax and which were the commands of that programming language, etc.

      But then there is at least another viewpoint: somebody catches the drone and takes it apart ("dissects" it, in more biological parlance). Upon examining the drone's innards this snooper lists all the different physical parts of the drone, figures out how they work and which physical magnitudes are at play (voltages, currents, perhaps temperatures, and certainly impinging photons, etc.). Eventually, this new agent reverse-engineers the drone and may build new copies/clones, or even IMPROVE on the original ...

      This could be done without manipulating symbols at all, much the way how Nature copies genomes, executes genetic transcription, etc. without having to "think" ATCG GCTA etc.

      The whole symbol thing is spurious. Just a psychological device: tools for our weak intelligences, mental crutches for us humans to cut through the maze of possibilities, because otherwise we would not be able to "handle" the design details. IN PRINCIPLE it could be done without them, but it would be akin to a team of people having to build a car WITHOUT USING LANGUAGE.

      I think in central Europe they still differentiate between Geisteswissenschaft and Naturwissenschaft. But this type of nuance seems to be restricted to the ways of thought of people in the German cultural orbit, and will very likely be lost on people from the Americas or even countries near Germany, in spite of its possible relevance.

    22. Whatever the case, the symbol objection in the Chinese-room argument is a mirage. A tricky device. A fallacy. It proves nothing. It's us who see the symbols (if at all, depending on our culture and background). Nature or the Universe don't care about symbolism. Both in a brain and in a computer you have "voltages" (essentially Elektronendruck, or electronic pressure -> we might as well put it down to repulsion forces between individual electrons) changing rapidly, and currents flowing here and there. So what? Of course there are many differences between a computer and a human brain. Mainly and perhaps decisively very complex chemistry going on. As to how that produces mental states, it's anyone's guess at the moment.

      True that in our synapses, more than electrical currents, the signals are of an electrochemical nature: the "spikes" are waves of ion concentration gradients dashing hither and thither at a few tens of metres per second at most. And those spikes are not neat (all or nothing) as in ANN. They have a time history or profile, they're slightly different every time, and things are not as clear-cut (and simple) as in ANN models. There are also voltages (herds of electrons or ions collectively exerting electronic pressure/voltage/Elektronendruck), of course.

      But then, how do you transition from the basic essential ingredients (ions, voltages or perhaps repulsion forces over a distance or ... , concentrations, this and that molecule floating in the cerebral milieu, etc.) to the SUBJECTIVE STATES that only you know for sure that you are experiencing???

      With his methodology, Searle could as well conclude that not even his own mind exists: it can't have sentience because the building blocks are bla bla bla. That wouldn't even be solipsism. Perhaps we'd need to coin the word nullipsism for the philosophical world, once Searle effectively nullifies himself out of existence by means of his own argument, applied to his own central nervous system, both brain and brainstem.

      The problem that Searle and other people with his frame of mind have is a very limited imagination and even less "empathy" towards things natural. Specifically, they command a very narrow definition of what a "machine" or a "computer" is and how it can be physically embodied.

      The great JOHN(Janos) VON NEUMANN may have presciently summed up this issue when he said:

      "You insist that there is something a machine cannot do. If you tell me precisely what it is a machine cannot do, then I can always make a machine which will do just that".

    23. Lawrence,

      As I suspected, you haven’t got the faintest clue how computers work. In particular: 1) you are not aware that binary digits are not the same as voltages i.e. you don’t know what binary digits are; and 2) you don’t know what symbols are.

      Apparently you feel you can tell me all about “what the brain does”. And apparently the brain uses “symbols” (not that you actually know what symbols are); and apparently something “analogous to a binary signal or bit … is processed through the thalamus…” (not that you actually know what binary digits are).

      Your analysis of the situation is just not good enough.

      (By the way, if you want to know how computers work, don’t ask a philosopher like Dennett.)

    24. Quantum Bit,
      It would be nice if your sheer volume of verbiage actually had some content of substance.

    25. Flip-flops are a pair of cross-coupled logic gates, which hold an on or off state via voltages and currents.

      Anyway, I think you are too certain about your stance here. I am not saying AI can or can't be conscious. I just do not think we know enough to make a judgment on this.
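      The flip-flop's "remembering" of an on/off state can be sketched in software as a cross-coupled NOR (SR) latch; this toy model simply iterates the feedback loop until it settles:

```python
# Toy SR latch built from two cross-coupled NOR gates. Q and not-Q feed
# back into each other; a few iterations settle the loop to a stable state.

def sr_latch(s, r, q=0, nq=1):
    for _ in range(4):  # iterate the feedback until it stabilizes
        q, nq = int(not (r or nq)), int(not (s or q))
    return q

q = sr_latch(s=1, r=0)                  # set
print(q)                                # 1
q = sr_latch(s=0, r=0, q=q, nq=1 - q)   # hold: the latch remembers the 1
print(q)                                # 1
q = sr_latch(s=0, r=1, q=q, nq=1 - q)   # reset
print(q)                                # 0
```

      The physics is all voltages; the labels "set", "reset", 0 and 1 are, as the thread keeps debating, a layer of human interpretation on top.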

    26. P.S.
      See also “How is binary converted to electrical signals?” at

    27. Please avoid posting links. I didn't approve your other comment because I don't recognize the website's name and have no time to check it. Also, please stop insulting other commenters.

    28. Lorraine,

      I find your fixation with symbols and bits puerile:

      YOU SAY: "(...) Computers/AIs can never be conscious because they process “binary digits”, a human devised system of symbols which only mean something from the point of view of human beings. Voltage is a category of physical information (...)which is lawfully related to other categories of physical information. But with binary digits, the circuit designers decide that a certain range of voltage numbers will mean (...) false, and another (...) true (...) binary digits are not a proper category of physical information (...). From the point of view of the universe, as opposed to the point of view of human beings, binary digits don’t exist, so they can never be a basis for consciousness (...)"

      BUT ...

      a) Information is an abstract concept and can be embodied in many different ways.

      And crucially it seems you don't see that:

      b) Upon engineering something we use ideas such as information, and in fact mathematics* in general, as a type of mental scaffolding that facilitates things and creates clarity (in our minds), but that can be dispensed with altogether once the project has been finished. EVERYTHING obeys the laws of Nature, by definition, and there's no way out of that.

      We do not know what consciousness is, how it arises, or whether it can be measured somehow. For what it's worth, my view (a conjecture) is that sentience, consciousness, or the Qualia (for me it's the same thing) needs a special material embodiment or substrate in order to appear. I feel brain chemistry is essential, but I do not know why or how.

      Even if you could set up a tinkertoy simulation of a superbrain, more complex than our brains and spanning the whole Universe, I don't think it would be conscious. Meanwhile, dogs, cats and mice have a 6-layered cortex (exactly as we do); typically 2 mm thick, but with a smallish surface for mice, somewhat larger for cats and dogs, and close to that of 4 extended table napkins for humans. That's the reason it looks crumpled up with grooves and convolutions: it must fit within the limited space inside the skull.

    29. Before the arrival of Christianity in Europe, people, or pagans in general, were more prone to practicing Gnosis: a direct apprehension or knowledge of things. We've lost that.

      The word "hormone" is a Hellenism which meant something like "setting in motion" or also "waking up", which accords well with the idea of chemistry somehow creating consciousness.

      Another idea is that in many tongues which belong to even different language families, the word for "soul/spirit/consciousness/..." and the word for "air" is the same, or closely related:

      In Latin "spiritus" originally meant air or gas (in fact, 'gas' is a relatively new word, forged via Flemish from the Greek 'Chaos'), and only later did it take on the meaning of "spirit" in the modern sense. Consider the series of words inspiration, expiration, conspiration, aspiration, perspiration, etc. (i.e. the spirit goes in, goes out, gets together with, looks upwards, traverses, etc.).

      Russians have "vósdukh" for 'air', but "dush" (second root in vósdukh, plus a consonantic change/smoothing) for spirit/soul.

      In the Semitic languages you have "ruh" or "ruah" (in both Arabic and Hebrew, I think).

      In Chinese you have "qi" (pronounced chee) both as the key root for air/gas, and also for "soul/spirit". Same thing with the Japanese "ki", although it's almost surely a borrowed and nipponized Chinese root the same way how English borrowed lots of French words in the past (Japan is the England of Asia).

      Same for "Pneuma" in Greek or "Atman" in Sanskrit, the latter frequent in yoga vocabulary and meaning "spirit" (it's related to the German verb 'atmen' and to the Greek atm- of 'atmosphere').

      Consider the English expression "God bless you", wherein the verb is related to 'blasen' in German.


      What to make of all this? Is it a general anthropological phenomenon similar to the phlogiston theories of the past or is there something different going on, perhaps related to direct apprehension via Gnosis?

    30. Quantum Bit,

      Symbols are the special things that were developed by human beings many thousands of years ago. These physical symbols meant something to human beings: drawings of animals, special markings on clay tablets, and special grunts and sounds. Today we have (e.g.) written and spoken languages, mathematical symbols, and “binary digits”. Just like drawings of animals, physical symbols represent something else to the human mind than what the physical symbol itself actually is: this was a big-time invention. These physical symbols are special things used by human beings, especially for communication: there are no genuine “mental symbols”, as I will try to explain.

      So what exists? I would say that there is: 1) matter/mind (particles, atoms, molecules, living things); and 2) (what we would symbolically represent as) logical and lawful relationships (= categories of information like mass) and numbers. Obviously, piles of sand, pieces of paper, and biro ink are matter that is subject to these lawful relationships, but without the integrated structure necessary for a mind.

      But there is another thing that doesn’t genuinely exist in the sense of 1 and 2 above; it is a thing that is merely perceived to exist: man-made physical symbols. Man-made physical symbols are shapes and patterns (like letters and numbers, and patterns in sound waves), where the medium (paper, biro ink, sound waves) is a physical thing that is subject to lawful relationships, but the shape and pattern is an aspect of a lawful physical outcome that is NOT itself subject to lawful relationships. Unlike all other aspects of physical matter, there are no necessary lawful consequences for “shape and pattern”: “shape and pattern” is something that is (sometimes) discerned by the human mind, but never by “the universe”.

      Do “mental symbols” exist? No, because the matter/mind (particles, molecules, cells) within the brains of living things exists in lawful and logical relationships. Genuine physical symbols are “shapes and patterns” external to the living thing that don’t exist in lawful and logical relationships.

    31. Lorraine Ford: Of course "mental symbols" exist, the only way you recognize a shape as special is by a physically specific organization of neurons. Displace those same neurons, so they are not close enough to influence each other, and your ability ends.

      The only thing it means to "recognize a shape" is for that collection of neurons, once it has internally recognized the shape, to externally signal other collections of neurons, in parallel, to indicate that "symbol" is present, and the associated generalized properties of it are therefore present, and thus anything related to those specific properties is more likely to be present as well.

      There is nothing special about the presence of the letter "e" that does not also apply to the presence of "a button", "a wheel", a "handle", "a face".

      All of them are processed in the same general way, and it is the recognition of thousands of such symbols, simultaneously and processed in parallel by the brain, that settles it into a "most likely" explanation of everything it is sensing, and thus what it can expect to happen next, and in general the sense of "presence" in a place.

      What do you even mean by "lawful" relationships? Is that just your invented qualifier to exclude the kinds of relationships you wish to dismiss, because they don't fit your theory? That is just another form of cherry-picking your evidence.

    32. Dr Castaldo,

      A button is a button is a button, a wheel is a wheel is a wheel. But physical symbols represent something else to the human mind than what the physical symbol itself actually is. Naturally enough, the perception of physical symbols is an extension of the perception of ordinary objects.

      But don’t confuse and conflate:

      1. Integrated and law-of-nature lawfully and logically interacting matter within the brain/ body; with

      2A. Written and spoken words (i.e. squiggles and shapes on paper, and patterns in sound waves, which we interpret to mean something) and
      2B. Binary digits in computers (where there is a disconnect between the genuine voltages and the binary digits we interpret them to be).

      I.e. don’t confuse and conflate:

      1. The interpreter; with
      2. The interpreted.

    33. Lorraine, let me try a few more sentences. Not quite so many as Quantum Bit, but perhaps enough to catch hold a little better.

      The way in which I mean that the Chinese Room Argument begs the question rather than arguing anything at all is this: if one takes the question to be, "Is consciousness or mind purely functional?", then the Chinese Room presents a (rather poor) particular model of how rather crude internal states might function, and then argues that because those internal states don't feel like mind or consciousness, we should not call that functioning consciousness. But the task you and Searle face is to show that a functional understanding is incomplete. I would only follow you inside the room and start asking questions of the operator and their lookup table if I accepted the premise. There's absolutely no *argument* being presented here as to why I should follow you inside the room (let alone an argument as to why a computer of integrated circuits would necessarily be more like a lookup file than a human nervous system - but that argument is for later).

    34. DougOnBlogger,

      What if you walked into the Chinese room and "start[ed] asking questions of the operator and their lookup table"? What if you handed the operator a Chinese-to-English code book? And what if the operator accidentally dropped the code book down the toilet; and what if the operator's mother forgot to pack his sandwiches for lunch?

      But the Chinese room argument is not about piling on a load of red herrings. I would think that the Chinese room argument is not about consciousness at all: it’s about what information can be known, and who or what knows it.

    35. Could you expand on that? I'm curious how you would phrase what it is trying to argue? My sense is that it is trying to argue about the inadequacy of a functional and behavioral understanding of mind (specifically as argued for in the Turing Test), and it is a bad argument because it is assuming that there is an arrangement of internal states which 'wouldn't count' as 'knowledge' (if you prefer that to mind), and this is precisely what the functionalists do not already grant.

      It feels like when religious apologists try to argue for the existence of a god based on things in the bible. It may be very compelling for you if you imagine a room in such a way, but you haven't even *begun* to make a convincing case for why this room models anything.

    36. DougOnBlogger,

      I’ll copy part of my reply to PhysicistDave on “Do we need a Theory of Everything?” in July, which puts the Chinese Room argument in a slightly different way:

      Searle’s Chinese Room argument is that a computer processing symbols can’t decipher what the symbols represent, just like non-English-speaking Chinese clerks processing English language symbols can’t decipher what the symbols represent.

      So assuming that a computer could identify appropriate sets of its own voltages, the computer next needs to decipher what the voltages represent. Something like: higher voltage, higher voltage, lower voltage, higher voltage… might represent the word “tree” in the English language, where the letters of the word have been re-represented as binary digits (i.e. voltages) according to some man-made convention and code. The computer doesn’t know English; doesn’t know the convention or code; doesn’t know that individual higher (or lower) voltages are part of a code; and doesn’t have any lived experience of trees anyway. Also, a computer is not able to spend time or energy identifying and deciphering sets of symbols: this is not the procedure that the computer was set up to do. In other words, Searle is correct: a computer can’t decipher what the symbols represent.
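      Purely as an illustration of the man-made convention just described (assuming the common ASCII code, which is one convention among many), here is how the word "tree" becomes binary digits:

```python
# Each letter of "tree" re-represented as 8 binary digits,
# using the ASCII convention (a man-made code; nothing in the
# voltages themselves says which code is in use, or that there
# is a code at all).
word = "tree"
for c in word:
    print(c, "->", format(ord(c), "08b"))
# t -> 01110100
# r -> 01110010
# e -> 01100101
# e -> 01100101
```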

      I don't have the faintest idea what kinds of voltages are happening inside of me as I go from reading the text on this webpage to typing out a response that you recognize as valid English. Should I be able to account for this? In order for you to acknowledge that I actually *understand* these symbols, and am not just mimicking the activity of a valid mind, do you require a particular account of what's happening?

      In this telling, "a computer can't decipher what the symbols represent," doesn't even seem like a wrong conclusion - it doesn't seem like a conclusion at all. This sounds like Searle's starting point - he's *begun* with some notion of what it means to understand a symbol. But there are no arguments as to why anyone who did not already agree should get on board.

    38. Are you a philosopher, Doug? You seem determined to misunderstand a VERY SIMPLE CONCEPT, and to read something into it that simply isn't there.

      To understand the Chinese room, you probably need to know something about the innards of computers, and how they actually work; and you definitely need to know something about what symbols are. The Chinese room argument is about symbols.

    39. My background is in physics and classics, and so (shockingly) my profession is computer software. But I think it's fair to expect a passing familiarity with a functionalist theory of mind if one is going to have adult conversations about such things. Certainly, Searle intended his argument to be a direct response to functionalism.

      It is interesting to look at Searle's paper, and he admits as much himself: 'In "cognitive sciences" one presupposes the reality and knowability of the mental in the same way that in physical sciences one has to presuppose the reality and knowability of physical objects.' (I followed a web archive link from Wikipedia; the reference is "Minds, Brains and Programs", Behavioral and Brain Sciences, 3 (3): 417–45.)

      This seems like a real failure to imagine that differing points of view exist. Likewise elsewhere, he can only imagine that someone 'in the grips of an ideology' would sincerely ascribe understanding to the room he described.

      But it's really quite possible! And the entire argument falls completely flat if you're presenting it to anyone with a committed physicalist view of the world and functionalist or behaviorist theory of mind. It's not that I don't understand or like how he talks about voltages in the Chinese Room (he doesn't at all...) it's that I don't like the magic he is positing in the inside of Chinese native speakers.

    40. Doug,

      Yes, well, I too studied physics and maths for many years, and my profession for more than 20 years was computer software.

      You (and Lawrence) seem to be unaware that you are using written and spoken physical symbols all the time for communication; and seemingly you haven’t the faintest clue what symbols are anyway. Symbols are patterns and shapes (in light and sound waves), and individual and groups of binary digits (derived from voltages), which represent something else to the human mind than what the physical symbol itself actually is.

      Whereas there are law of nature relationships for mass, charge, and voltage, you might say that symbols are a different class of thing because there are no law of nature connections between symbols. I.e. there are no laws of nature for patterns and shapes, and individual and groups of binary digits. Symbols only exist from the point of view of the human brain/mind (and the brain/minds of other living things seem to use other symbols) because these minds have done some analysis on (e.g.) the light and sound waves.

      Symbols are not lawful or universal: Chinese symbols will usually mean nothing to speakers of other languages, except as a meaningless squiggle or shape. Similarly, individual and groups of binary digits can mean nothing to computers because they are man-made symbols and patterns in the voltages; you could only ever claim that voltage might mean something to computers. This has ABSOLUTELY NOTHING TO DO WITH consciousness per se; there is no "theory of mind" involved: it is to do with the non-lawful nature of symbols.

      The belief that computers/ AIs could be conscious of the information that human beings are conscious of, i.e. the information that the symbols represent, is totally irrational. There is nothing “adult” in the belief that computers/ AIs could be conscious.

    41. That's just like, your opinion, man.

      The Chinese Room Argument does not *show* that at all, it merely asserts it.

      But if you don't think the Chinese Room is about theory of mind, you are getting quite far from Searle, and it's not clear why. If you think the problem is that I don't know what a symbol is (seriously?) then I think this is not about to become more fruitful any time soon.

      If you'd like to try to clearly articulate what starting definitions you'd like me to adopt, what conclusion you're trying to demonstrate, and then explain *why* the Chinese Room actually argues *towards a conclusion,* I'm happy to continue.

      If you require that I *begin* with a notion that "The human mind is able to interpret symbols but computers cannot," then there's really no point in listening to you explain what a voltage is as if this were somehow contributing new information.

    42. Doug,

      I can see that it is pointless discussing the Chinese room parable because it seems that, just like everything else on the internet, everybody has a different opinion about what Searle’s parable is supposed to mean. In any case, the substantive issue is symbols (an example would be Chinese symbols).

      What are symbols and what (if any) effect do they have? The earliest examples of symbols used by human beings developed into written and spoken words, equations and numbers. Symbols are important things: without our use of symbols we would have no science or literature.

      Physically, symbols are special patterns and shapes on paper and screen known to us via patterns in light waves; and patterns in sound waves; and patterns in voltages in computers. But there is no physics of these patterns and shapes.

      So, unlike (e.g.) mass and voltage which are law-of-nature lawful information that always has lawful consequences, these special patterns within light and sound waves and voltages are not lawful information. Patterns don’t even exist except from the point of view of living things that need to perceive patterns within (e.g.) the light and sound waves coming from their environment. Perception of patterns takes up time and energy in the mind/ brain. Computers/ AIs are not spending their time and energy trying to interpret the symbolic patterns that are input: they merely process the symbols as required by the computer program.

  13. [. . .]

    These were the challenges facing the quantum giants when they died but the following generations quickly avoided these deep problems and substituted massive (and expensive) machinery to continue smashing matter into increasingly ephemeral fragments. These never-seen (imaginary) ‘particles’ are simply bundles of imaginary (unobservable) mathematical properties, such as: strangeness, isospin, colour, fractional electric charge, which are all localized to a point (thus particle), so the mathematics of field theory may then be invoked. Unzicker summarizes all this quark quirkiness as “eightfold crap” [p. 104].

    Unzicker loves to contrast this earlier QM search for meaning with today’s invention of these fictitious, short-lived “particles”. These lie at the heart of ‘The Standard Model’ with its hundreds of arbitrary parameters, not least of which are the masses (or even mass ratios) of these so-called ‘particles’ – a key physical concept (‘inertial mass’) at the heart of physics since Newton’s revolutionary theories around 1700. As few realize, the mathematics of field theory cannot explain ANY mass, so why the invention of a new field – the Higgs “boson” should ever have been thought to provide an answer has long been a mystery to me (...) Every student selected to study physics today has to be at the high end of mathematical ability, so that PhD students in theoretical physics are simply applied mathematicians; today’s intuitives, like those earlier giants, such as Einstein and Rutherford with their huge intuitions for nature, are no longer given a chance to research the modern world.

    The extensive use of super-computers for simulations and data analysis means that few can check these calculations; indeed, experience with large commercial software programs implies that there are probably very many software bugs hidden in these millions of lines of computer code that remain undetected for years. Unzicker does a thorough job exposing the great likelihood that almost all these Nobel-earning "discoveries" are probably no more than instrumental artifacts due to selective data filtering based on anticipated underlying assumptions, such as the decay of unmeasurable, electrically neutral (invisible) intermediaries. What is never emphasized is that all we may be seeing in these super high-energy collisions are 'harmonics' of complex interactions: effectively, just "wiggles on wobbles" - not new particles at all, especially as the inelastic "scattering process is not understood" [p. 85].

    (...) Just because a few experiments agree to huge precision with measurements does not mean that our theories are on the right track: Ptolemy’s model was vastly better (judged by numerical confirmation) than Copernican models. Unzicker also skewers several of today’s ‘pop’ physicists for their fatuous remarks, such as Brian Cox’s comment that: “The Higgs particle is one of the most important discoveries in the history of science, on equal footing with the electron.” [p. 130]. As Unzicker points out, there have been no new technologies arising from all of this CERN particle research, while the electron transformed the world within 20 years of its discovery.

    (...) Unzicker characterizes CERN as “a Nobel-greedy big science company seeking to get close to politics and big money” [p. 111]. He also points out on the same page that: “Nobody ever got the Nobel prize for proving that something didn’t exist or by showing that someone else was wrong.”

    1. Hi, I read that book. It is just a misery. Most of it sounds like the author's personal anger and insults.

    2. Hello Quantum Bit,
      I'm sure we'd all be interested to see your list of the standard model's 'hundreds of arbitrary parameters'. Most of us are content with less than twenty.

  14. REVIEW #2 [of THE HIGGS FAKE, by Alexander Unzicker]

    I love this guy

    I love this guy, I want to have his baby. In 35 years of theoretical Physics R&D, 15 of that as director at a government facility, I had one rule: no particle physicists and no QFT (two rules). I read about 10 or more scientific papers a day, haven't read a book in over 20 years. I think I highlighted about 90% of the text, which defeats the purpose. Every other sentence is something I wish I had come out with myself.

    What he doesn't explain is 'how' they fudged the data, which I reviewed. Of the perhaps three dozen major facilities (30,000 total) of 40 years coming up empty handed, they got their pink slips telling them funding would be cut and facilities shut down at year's end (except the LHC). Within 2 weeks, all of the facilities, even those incapable of the energy requirements, produced the Higgs. Rather than have the mass span from 70 to 220 GeV, they flipped the x and y axes to make them all line up tall. If you don't understand, draw a Cartesian x-y axis and draw a line from left to right, then turn it on its side, and they line up from bottom to top. That is what they did. They handed out Nobels, and the funding went on.

  15. There was a proposal to put a radio telescope on a satellite that orbited the moon to make the "Dark Ages" measurement. It was costly, but still could be done for under $500M - not nearly in the same range as the colliders.

  16. During a dinner or coffee with IceCube colleagues about a decade ago, I proposed the idea of using robots to instrument Europa. I never tried to run the numbers though.

  17. Unfortunately, diameter, energies and dollars have approximately linear relationships. The limiting experiment requires the entire resources of the civilisation.

  18. Some of my favorite experiments are searches for rare or forbidden processes done in deep underground experiments, such as searches for neutrinoless double-beta decay, neutrino detectors, searches for dark matter (you never know), or the grand-daddy of hopeless searches, the search for proton decay.

  19. Quantum Bit: That's enough. If you post one more rubbish comment, you go on the straight-to-junk list.

    Everybody else: I hope it's unnecessary, but let me add that the discovery of the Higgs boson is of course not "fake". It takes an enormous amount of misunderstanding, combined with arrogance, to proclaim particle physics unscientific. You are most welcome to calculate the SM cross-sections yourself and fit them to the data.

    1. I am really sorry.

      I posted that in the interest of debate (although at some moment I had become vaguely aware that the title of the book had a sensationalist ring to it) because I trusted many of the reviews with which I "checked" the import of the work. I still haven't read it myself, although I want to do so soon, even if only to further spur my interest in these matters.

      Quite lamentably, I am in no position to calculate the Standard Model cross-sections in a way that would fit the data. I would like that more than anything in this world, but evidently such a monumental achievement is not within my power.

      I have never believed the Higgs boson itself is "fake". If anything, what I have is misgivings with the Grand Strategy of science. But still these are merely personal intuitions because the subject is ultimately as huge as my own ignorance, and bafflingly complex.

  20. Apropos your question whether 'bigger is better', I was trying to make a list of the experiments that have changed physics. I suggest it as a game; after all, it is a very hot August.

    I would begin with the Foucault pendulum; Faraday; Michelson; Eotvos; Rutherford; Davisson-Germer; Hahn-Meitner; Eddington; Pound & Rebka; Hulse & Taylor; ...

    You are free to continue. But you will notice that, at this level, the Hubble telescope, CERN, etc., haven't discovered anything. OK, I am joking.

    1. I'd go with Newton and his prism. He discovered the spectrum of light. Although al-Haytham apparently discovered this in Basra in Iraq over 600 years earlier.

  21. There is a work that contains considerations which I would deem very relevant for the subject of this post. It is

    "THE LIMITS OF SCIENCE", by Nicholas Rescher

    I perused this book many years ago. Not thoroughly, but essentially those meaty parts that most called my attention and which I felt were the core of the work.

    In one of the chapters Rescher dwelt on several ways in which science could "end", or at least grind to a halt, remaining forever in a state of incompleteness thereafter.

    One of the ideas I can remember (a bit hazily) had to do with measures of ever-increasing precision. This one I cannot reproduce well.

    The other idea (directly related to the issue of the day) was about the EVER LARGER INVESTMENTS necessary to keep science going, at least concerning its foundations.

    Essentially he said that lifting Science to the next Level of Understanding might require ten times (I think he used this number, but we might as well call it K) the effort/investment that had previously been put in: intellectual effort, economic effort, ...

    And after that, climbing yet another level would again entail a tenfold increase in effort and resources.

    This type of exponential dynamics would evidently devour the resource base of any civilization, and fast.
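    Taking the recalled factor at face value (K = 10 here is only the hazily-remembered number from above, so treat it as an assumption), the compounding is easy to sketch:

```python
# Cumulative investment needed to climb successive "levels of
# understanding", if each level costs K times the one before
# (K = 10 as recalled above; the units are arbitrary).
K = 10
costs = [K ** n for n in range(6)]  # cost of each level, in units of level 0
print(costs)       # [1, 10, 100, 1000, 10000, 100000]
print(sum(costs))  # 111111 -- the last level dominates the whole total
```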

    I am aware that this happens in almost any worthy field of endeavour*, no matter its nature. The so-called "law of dwindling returns" (a phenomenological law at best) is something that warms the hearts (?) of economists, because it allows them to bask in the delusion that economy is a hard science the way Physics is (the "law" of supply-and-demand is another of their myths; perhaps their most cherished one).

    In any case, you may be a pianist, an athlete, a chess player or a doctor. You name it. At first you learn fast and your performance improves by leaps and bounds, but the more you learn and the more competent you become, and once all the low-hanging fruit is taken, further progress requires ever-increasing effort. Arguably exponential increases in effort, if regular, successive and equally large amounts of progress are to be made.


  22. A correction to my comment above: where I wrote

    "(...) that economy is a hard science the way Physics is (...)"

    it should have read

    "(...) that economics is a hard science the way Physics is (...)"

    I realized this mistake of mine a few minutes after posting. That is the type of subconscious mental process that CANNOT BE PLANNED OR CONTROLLED in detail, but is a necessary ingredient of worthwhile insights.

  23. Dr. Hossenfelder: This post makes me curious; perhaps you will be inspired to answer with another post:

    What do you think are the most expensive experiments we could do that you believe have a good chance of revealing important results for particle physics? If YOU were in charge of big funding, what fanciful big budget experiments would YOU like to see?

    The dark side of the moon observatory? A bigger space telescope? An array of them in the orbit around the Sun?

    1. A collider the size of the multiverse!

    2. Dr. Hossenfelder: I was not aware we could do that ... (build a collider the size of the multiverse -- does the multiverse even have a "size"?)

    3. Yes, it's infinitely large. My answer was supposed to say that asking for the "most expensive" experiment isn't a particularly well-posed question because you will end up with an infinitely expensive experiment.

    4. Dr. Hossenfelder: You are correct, that wouldn't be a well-posed question.

      Which is why I qualified my question; I was wondering about what you think are the most expensive experiments we could do that you believe would be useful.

      Meaning, achievable. If we were to do another multi-billion $ single experiment on the scale of the LHC, is there any experiment on that scale you think would be most helpful or most revealing for particle physics?

  24. I mentioned the work "ANTIFRAGILE", by N. N. Taleb. It's a thick book brimming with insight. Some ideas and concepts there were already mentioned: VIA NEGATIVA, the AGENCY PROBLEM, IATROGENICS (not mentioned ... and not necessarily pertinent, I'm not sure), etc.

    But there is another all-important idea there:

    INNOVATION (read "discovery") CANNOT BE PLANNED.

    Taleb goes to great lengths and provides countless examples to prove that luck, or serendipity (he uses this particular word a lot), plays a very large, even overwhelming role in discovery.

    At some moment he says that of the forty-something important pharmaceutical discoveries made in a given period of time, only 2 at most, or even perhaps just 1, were the result of usual, planned and directed R&D. All the other ones were somehow "chanced upon" in that serendipity was essential for their discovery.

    The author criticizes "planning" and "interventionism" a lot: newspapers have to be printed everyday and must have those many pages, REGARDLESS of whether there are interesting events about which to inform. Research budgets must be calculated and the money must be used even if we do not know in advance what to do with it. Etc.

    Throughout the book Taleb repeatedly enshrines the idea of CONVEX TINKERING as the best known path to discovery and progress. By convex tinkering I understood EXPLORATION. Even PLAYFUL EXPLORATION: there must be some Darwinian reason why young humans and animals "play"; it's Nature's mechanism to teach something to newly arrived creatures.

    At some moment Taleb likens research to groping, or walking blindfolded, in a dark cave. You reach towards the ceiling (hence the metaphor "convex tinkering", wherein 'convex' is understood in the sense of a 'convex' function in calculus: a cup opening upward, if you wish), which is generally very low. But every once in a while you find a spot where it's higher. And "then the sky is the limit" (his words).

    You have to make many little investments (instead of a few gigantic ones), many little attempts or "bets". Most of them translate into (short-term) losses or sunk costs ...

    But eventually you may succeed, and when that happens, it more than compensates for all the other previous, instrumental, minor losses which can then be construed as part of the exploration process.

    1. I don't believe Taleb. Innovation CAN be planned.

      I sat down one day with the express purpose of finding a better way to fit the Extreme Value Distribution, and in a few days, I found it.

      Edison reportedly wanted to find a filament for an electric light, and decided on a brute force approach, testing every material he and his team could think of. Over 10,000 materials, including human hair and plant fibers, before discovering tungsten would work. Certainly that counts as a "planned discovery."

      So do most oil fields: there is a systematic plan to scan for exploitable oil fields.

      Data mining, along with genetic algorithms and neural networks, are all systematic, planned approaches to discovering exploitable relationships in big data. And they work, and they are profitable. Certainly they count as "planned discovery".

      Finally, we can talk about the Large Hadron Collider. There was $20B spent, in a massive plan to discover whether a mathematical suggestion (one that removed an infinity), the Higgs boson, actually existed or not. And they did discover it, and proved that indeed it existed. The Higgs counts as a "planned discovery"; just proposing a new particle as a solution to a mathematical conundrum doesn't count as a "discovery", planned or otherwise.

      Sounds to me like Taleb is cherry-picking his examples, I can think of dozens of examples of "planned discovery".

    2. I think the hard-core critics of "planned innovation" mean innovation without an idea or inspiration. Edison's light bulb isn't a good example anyway, because he did not invent anything. All he did was find the best material in, as you say, a brute-force approach. And that is a good idea for solving a problem.

      I think the whole discussion is rather fruitless, because we are going to hit each other over the head with examples by trying to prove that there was just planning or just inspiration involved. I am pretty sure that Einstein followed a path he saw before him: one step - Gedankenexperiment - after the other, and then do the math to see if it fits. That's a plan, but one with an open end. The LHC was built with a plan. But within the project there were tens of thousands of little or big problems to solve that needed an idea, i.e. an inspiration.

      Yes, there are R&D budgets (no infinite amount of money) and yes, somebody has to decide which idea (a very difficult job) is the best to spend money on, and yes, in many cases the other idea would have been better. But who cares. We get there in the end. Right now there are a hundred different vaccines in development. Five years ago companies were leaving the vaccine market, because there was no money in it anymore. Now suddenly everybody thinks it's a bonanza. Innovation is driven by whatever right then is the problem. I think it would be a good thing for anybody interested in the drivers of innovation to look at the music business. Look how styles and stars develop, how it is sometimes planned or just an inspiration in the shower. It is exactly the same as innovation in science. It might reveal one or two secrets. But frankly I do not care. We are by nature problem solvers and inventors. One does it planned and straightforward, the other erratically. Both get results. What we do not need is bias.

    3. Christian: It is ludicrous to claim Edison did not invent the light bulb; it is clearly an invention. It does not occur naturally! Your definition of "invention" is artificially restricted to mean only what you want it to mean, in order to exclude a brute force approach.

      Edison was most likely inspired: He imagined an evacuated bulb, to prevent oxidation, with a resistive filament that could get white hot without being destroyed. That's the invention. Does it matter what the filament is made of? No, as long as it serves the design purpose and doesn't melt. Just like it doesn't matter precisely how much power need be applied, or the size of the bulb, or exactly how thick the filament must be.

      As for innovation without any idea or inspiration, there are now radio antennae arrays designed by genetic algorithms that can boost the signal to noise ratio several-fold, allowing reception in physical circumstances where there was once none.

      The designers were not inspired by any formula for arrangement, they just scattered 16 antennae in a fixed area, computed the combined receptive efficiency, and "evolved" those configurations through hundreds of generations.

      What they have is absolutely an invention. You could say the inspiration was in the approach, not the result, which is fair. But that does not diminish the result as an invention that works: The math works out, even if nobody can compute the inverse function that would directly produce the ideal placement of 16 antennae.
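      The evolutionary procedure just described can be sketched in toy form. Everything below is hypothetical: the fitness function simply rewards spreading 16 points apart, standing in for a real simulated signal-to-noise score, and the population sizes are arbitrary.

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

N_ANT = 16   # antennae per configuration
POP = 30     # configurations per generation
GENS = 200   # generations to "evolve" through

def fitness(cfg):
    # Hypothetical stand-in score: total pairwise distance between
    # antennae (a real design would score simulated reception).
    return sum(((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
               for i, (x1, y1) in enumerate(cfg)
               for (x2, y2) in cfg[i + 1:])

def random_cfg():
    # Scatter antennae at random in a fixed unit area.
    return [(random.random(), random.random()) for _ in range(N_ANT)]

def mutate(cfg):
    # Nudge one antenna slightly, keeping it inside the area.
    new = list(cfg)
    i = random.randrange(N_ANT)
    x, y = new[i]
    new[i] = (min(1.0, max(0.0, x + random.gauss(0, 0.05))),
              min(1.0, max(0.0, y + random.gauss(0, 0.05))))
    return new

pop = [random_cfg() for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP // 2]                      # selection
    pop = survivors + [mutate(random.choice(survivors))
                       for _ in range(POP - len(survivors))]

best = max(pop, key=fitness)  # the "evolved" configuration
```

No one computes an inverse formula for where the antennae should go; the placement just emerges from scoring, selecting, and mutating, which is the point being made above.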

      The LHC was invented. Somebody first thought "We can find the Higgs with a giant collider" and then proceeded to figure out how big, where, how the detectors would work, how the data handling would work, etc.

      You say we "do not need bias," but I think it indicates bias to deny Edison credit for inventing the light bulb. It didn't invent itself, and it wasn't just a field trip searching for a light bulb occurring naturally in the wild.

    4. "Edison reportedly wanted to find a filament for an electric light, and decided on a brute force approach, testing every material he and his team could think of..., before discovering tungsten would work."

      Slight correction. The first thing Edison found that was practical enough to market was soot (carbon) embedded in coarse thread. The last and best thing he found was bamboo fiber (another form of carbon, which lasted longer). Tungsten filament bulbs were developed later by another company in another country, and are still in use today, in modified form from the original.

      Light bulbs, like all other human inventions, evolved over time. What we see today is not where they started, just like modern plants and animals.

      A creationist friend once pointed to a tree with a car parked near it and asked me, "Can't you see that they both were designed?" I replied, "No, they both evolved. You've seen cars evolve in your lifetime." Edison is the primary example I use as evidence of design evolution. It would have helped him to have learned some material science (electrical resistance, melting points, thermal fatigue) beforehand, but even random trial and error works if a solution exists and you keep trying. Of course the solution you find may not be the best one, but if so others can keep on looking.

      To drag this somewhat back on topic: evolution, or trial and error, implies criteria that the solutions must meet (e.g., survival and reproduction in biology, and survival in the marketplace for inventions). As remarked before, science works the same way, but if the criteria diverge from the basic quest for knowledge (e.g., publish or perish), the results will diverge from that as well.

      (Disclaimer: just my personal view, not necessarily shared by the management.)

    5. Well Dr. A.M. Castaldo, I am really sorry to push your hero from the pedestal, but Edison did not imagine the evacuated bulb. It was Joseph Swan (a chemist) in old England who made it practical (and he wasn't the first with the vacuum either; by then people knew about oxidation). He even had the patent to prove it. Edison tested more than 6000 plants(!) before settling on bamboo. That is pretty much a brute force approach. He knew about tungsten (which is a rather obvious choice), but he could not make filaments.
      Often it is about connecting dots differently to come up with a new idea. That isn't wrong or bad. Many ideas with great success were just little alterations of previous designs.
      The LHC (darn typo up there, LHC not HCL) wasn't an invention either, because they knew how an accelerator works. They already had one or two. But there was a whole bunch of engineering problems to solve. There are thousands of brilliant ideas (i.e. inventions) in that thing.
      The Manhattan Project often comes up as the typical huge invention, but (according to Feynman?) the physics was known; it was just a huge engineering project.
      There is nothing wrong with a brute force approach, and neither with small evolutionary steps.
      In any project that creates something new, there is invention. But a project without a plan more often fails than not. To a certain degree you can plan invention; what you can't plan is having an idea to solve a problem right at the moment the problem arises. Sometimes it just takes longer, or a crazy mind, to get over it or to connect the dots.

    7. JimV: Thanks for the correction.

      Christian Tillmanns: You are correct but Edison was not a hero to me; he was an exceptionally cruel asshole.

      But he was an inventor, and your correction of history does not make my assessment incorrect. He invented and demonstrated the light bulb.

      A brute force approach does not defeat the fact that he achieved the idea, that something he imagined became real and replicable.

      Your following examples don't hold. Of course there were other colliders! For Edison, there were candles and lamps with wicks. For cars there were carts and carriages. Claiming the LHC isn't an invention is like claiming the "horseless carriage" is not an invention.

      Claiming a nuclear bomb is not an invention is just as ludicrous; you might as well claim NOTHING invented by physicists and engineers is an invention, because the math is already there. Is that the standard? If it obeys the existing laws of physics then it isn't an invention? Come on, dude.

      You say "you can't plan having an idea to solve a problem right in that moment the problem arises".

      Since when is speed a criterion? And it is certainly true that we can sit down with the intent to solve a problem, without a plan, and play around with it until we find an approach that we think will work.

      Engineers and mathematicians do this all the time, and pondering for weeks or months or years while playing around with equations or designs does not diminish the invention when one of these idle investigations bears fruit.

      An invention does not have to spring fully formed from the head of Zeus like Athena. It is entirely possible to have a goal without inspiration on how to achieve it, and then seek inspiration on how to achieve it. Playing around with stuff to see what happens can BE the plan.

      Magnifying glasses are mentioned as early as 2000 years ago, and magnifying spectacles soon followed, but telescopes and microscopes were not invented until 1500 years later. They had bad distortions but produced recognizable images, and the inventor(s) are unknown.

      To me that suggests strongly they were invented by somebody in the late 1500's playing around with multiple lenses to see what might happen, and discovering telescopy or microscopy, and refining it. That does not make telescopes and microscopes any less of an invention.

      Your definition of "invention" is wrong. An invention is a unique or novel device, method, composition or process. How one arrives at it is immaterial.

    8. "Dr. A.M. Castaldo, 5:26 PM, August 09, 2020:

      I don't believe Taleb. Innovation CAN be planned.

      I sat down one day with the express purpose of finding a better way to fit the Extreme Value Distribution, and in a few days, I found it (...)"

      I (QUANTUM BIT) do not "believe" Taleb either. It's more that I AGREE with him on this topic (and several others). In fact, even though I learnt lots of things from Taleb's book, he did not have to convince me regarding this particular point.

      Innovation cannot be planned, except in a trivial way. Bear in mind that the word "innovation" has been watered down these days, because we live in the era of political correctness and seemingly everyone is entitled to his/her fair share of medals, warranted or not, on the grounds of an ill understanding of the noble idea of "equality".

      Thence anyone "must" be able to innovate if he/she sets out to do it. In our time, trite vulgar stuff that does not deserve being honoured with the word "innovation" is still labeled as such despite its depressing triviality or even absurdity.

      I have seen people who, if we must believe them, seem to innovate constantly despite their evident shallowness and across-the-board illiteracy (these features seem to positively correlate with how much they fill up their mouths with the word "innovation") ...

      They wake up with supposedly innovative ideas in their mind, then they keep on innovating while shaving or dressing up, and after breakfast they are already on their way to their n-th innovation of the day ... It's plain ridiculous. If all those people were that innovative, we'd long be past a technological societal singularity of sorts.

      Still, such human specimens (the tribe of Homo Innovans?) abound in the marketing industry (for obvious reasons) and increasingly but less understandably in academia. In my opinion that phenomenon (general lowering of standards, puffed-up language, constant posturing, ...) is conducive to widespread mediocrity and even cultural stagnation.

      Words carry meaning. They have an energy and they should be used respectfully (with a clear awareness of their meaning and weight, that is). But some of them seem to have been thrown at the mob and are sadly now in the process of undergoing serial rape 24/7. Innovation is one of the most tragic victims of this vicious trend.

      [. . .]

    9. [. . .]

      Your example with statistics doesn't prove anything either, because it's an instance of problem solving: problems vary a lot in depth and difficulty, you know. You may have been innovating, OK, but mildly. It's a matter of degree, not of kind.

      And yes, oil exploration can be planned, but then it's also subject to the law of diminishing returns. The effort to discover a given amount of stuff increases exponentially with successive units of accrued "gain". Or, equivalently, gain increases only logarithmically with the amount of effort put in.

      For oil and quite sadly, it is even less than that, for evidently there is only a finite amount of oil in our planet. This (impending but hardly publicized Peak Oil) is directly related to our mounting economic and social problems.

      For the record, the best years for oil discovery were the '60s. Back then, 6 barrels of new oil were discovered for every 1 barrel of oil that was used up, and the world was in the midst of a fossil fuel bonanza. Now the situation is the opposite: only 1 barrel of new oil is discovered for every 6 barrels that are consumed. Yes, consumption is much higher now, but still.

      Drastically increasing investments in oil exploration and its planning won't drive up output in proportion, but just shrink or altogether eliminate the profitability of oil companies.
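      The logarithmic-gain claim above has a simple numerical restatement: if cumulative gain grows as the logarithm of effort, then the effort needed to reach each successive unit of gain grows exponentially, each unit costing e times as much as the last. The numbers below are purely illustrative, not oil-industry data:

```python
import math

# If gain = log(effort), then effort(gain) = exp(gain) is the cost of
# reaching a given cumulative gain.
def effort_for_gain(gain):
    return math.exp(gain)

# Effort required for gains 1, 2, 3, ...: each step costs e times the last.
steps = [effort_for_gain(g) for g in range(1, 6)]
ratios = [b / a for a, b in zip(steps, steps[1:])]
```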

      [. . .]

    10. [. . .]

      Regarding data mining and neural networks, I have been a fan of the latter approach for at least 25 years and have always believed in the potential of that technology. In fact, I feel vindicated by the successes of recent years in the field now rebranded as "Deep Learning". Bear one thing in mind, though:

      Absent a universal algorithm (which may or may not exist) with which to train neural networks, you have a very serious problem with dimensionality; in a classical neural net, each synapse and its corresponding weight can be thought of as an axis in an abstract mathematical space: an error space. Think of it as a scalar field if you wish.

      And you want to spot a local or hopefully even global minimum in that space, typically through gradient descent. But that is not easy, nor is it straightforward or fast.

      Much depends on the topology of the network, and even choosing the number of synapses or their initial weights (the coordinates of the starting point in the aforementioned mathematical space) is a problem that has not been satisfactorily solved yet. Pick too many or too few synapses, and the device won't work as intended, for it will either simply "memorize" the examples (without generalization or the capacity to extrapolate its knowledge beyond the training data set), or else prove unable to learn anything at all.
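      A one-dimensional caricature makes the initialization problem concrete: plain gradient descent on a non-convex "error surface" lands in different minima, of different quality, depending on the starting weight. In a real network the same thing happens in a space with millions of axes. The loss function here is made up purely for illustration:

```python
# A non-convex toy loss with two unequal minima near w = -1 and w = +1.
def loss(w):
    return (w**2 - 1.0)**2 + 0.3 * w

def grad(w):
    return 4.0 * w * (w**2 - 1.0) + 0.3   # derivative of loss

def descend(w0, lr=0.01, steps=2000):
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)                 # step downhill
    return w

w_left = descend(-2.0)    # starting on the left finds the deeper minimum
w_right = descend(2.0)    # starting on the right finds the shallower one
```

      Both runs converge (the gradient at the endpoint is essentially zero), but to minima of different depth, which is exactly why the choice of starting point matters.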

      Similar caveats must hold for genetic algorithms (dimensionality is the key word).

      I'm not saying that these are dead-ends. In fact, I think just the opposite is true. They're terrific tools and might boost our capacity to innovate (if we don't become intellectually even lazier, as if to compensate for the power of the newly acquired tools).

      What I mean is that they're not plug-and-play methods. At least not for the time being. And even in the future, they may not end up amounting to an all-seeing Oracle that can invent any conceivable device (the optimal one) for us, tailor solutions for any describable situation, and answer all our questions (might some quantum magic change that?), whether technical or existential.

      Even in such a scenario, you might still have to content yourself with heuristic, sub-optimal solutions that in any case may be more than sufficient to deal with the problems at hand.

      [. . .]

    11. [. . .]

      Your example regarding the discovery of the Higgs boson doesn't prove your thesis either. In fact, and in spite of my flimsy knowledge of particle physics, I'd venture to say that in essence it can be compared to the discovery of Neptune by Adams in England and Le Verrier in France, almost at the same time.

      None of those men "saw" Neptune through a telescope. Instead, they discovered it "on paper", in that they inferred the existence of that body from observed gravitational anomalies in the motion of the known planets (more than anything Uranus, I surmise), and after laborious mathematical calculations verging on sheer heroism, both were able to "predict" in which patch of the night sky Neptune should be on a given date.

      An astronomer in Berlin (Galle, if I remember correctly) directed a powerful telescope at the indicated area of the sky at the indicated time and ... Gotcha! There it was: the yet-to-be-christened Neptune, in all its heavenly glory.

      Seemingly it was the French data (Le Verrier's) that were used in the Prussian observatory. Adams had completed his calculations earlier, in England, but was dismissed and consequently delayed by the archetypal incompetent bureaucrat who was there to spoil things and thus "help" the French win this race, although I think in this case the later historical narrative has done justice and given due credit to both scientists.

      My point is that Neptune was essentially "discovered" the very moment the anomalies in the orbit of Uranus were detected: if Newton's gravitational paradigm was right, something had to be disturbing the planet, gravitationally, that is.

      Neptune was therefore "implicit" (Latin for "wrapped up", the same way 'explicit' means "unwrapped") in those observed weird details of orbital motion. Adams and Le Verrier "simply" worked out the necessary details so that the thing could be brought to sight, quite literally.

      I assume that the Higgs boson discovery was of a similar nature. Of course, in this case I am not familiar with the finest features of the pertinent mathematical theory, far more abstract and elaborate than Newton's mechanics. But the feeling is that even beforehand, there were solid grounds to presume that such a thing as the Higgs existed. And that of course made the LHC investment less risky and more warranted.

      [. . .]

    12. [. . .]

      So no, I do not think Taleb is cherry-picking his examples. It's that you are not bearing in mind what we might call "depth" of prospective discoveries. Edison knew enough about electricity to imagine that a resistance (a filament) encased in a glass housing and powered with an appropriate voltage might make for a novel and very handy, versatile and useful lamp, plus a new field of technological and industrial interest that would come with it.

      The underlying principles (basic electricity, Ohm's law and Joule's effect, which were vintage knowledge even at the time, etc.) were well known. Edison was venturing into the unknown (much to his credit) and certainly innovating, but only so far, because (I repeat) seeing beyond a given depth entails explosively increasing amounts of effort/insight, and thus rapidly becomes a practical impossibility.

      So yes ... Edison could invent the light bulb, Archimedes could invent the lever, and somebody in the '90s (it seems) invented the wheeled suitcase (easy, but nobody had thought about it, or set out to do it, before). And it's all good and well. But Galileo could not have imagined lasers, and Archimedes could not have envisioned radio technology, holograms or nuclear power (as a matter of fact, he didn't even invent a positional number system, in spite of his immense intellect and his interest in numbers), because those contraptions were "too many steps down the road" and thence EXPONENTIALLY more and more out of reach. Even for such giants.

      Come to think of it, the case in history that seems to me like an almost superhuman feat of discovery is Newton's work in dynamics ... He stated his laws of motion (fashioning the central concept of force along the way, arguably his key contribution to science), worked out how gravity works, and invented a whole new branch of mathematics in order to frame it all in a quantitative, operational way. This on a dare, and amongst other discoveries and activities (alchemy and biblical exegesis seem to have drawn the lion's share of his interest). Impressive if you think enough about it. Yes, he stood on the shoulders of giants (Kepler and Galileo, to start with), as the famous quotation says, but even so, his immense and disruptive achievements might well qualify as the Everest of human insight and achievement so far. At least this is my sentiment.

      Still, none of this debunks what we said about planning. Of course, NOT PLANNING is an ERROR, and very grave. But expecting too much from planning (which is easy and quite common) or even downright overplanning things, are ERRORs, too. And I might provide countless examples of both types, many of them even from my own experience.

      Plan, of course. And work hard. But also be flexible. Be open-minded. Cultivate awareness. Keep your eyes open for oddities and the unexpected. Entertain ample interests. And by all means avoid tunnel vision and narrowness of the mind.

    13. Quantum Bit: You say "Words carry meaning."

      Yeah, and you don't get to redefine them because their definition doesn't fit your current thought.

      Nor do you get to declare that inventing a new mathematical approach is a "mild" innovation; I invented that for the FAA; specifically as part of a package to improve the prediction of part failures and inspection schedules, and it is still in use today keeping aircraft from falling out of the sky.

      Despite the importance of correctly fitting that statistic, nobody had done it before me. It could not be solved by previous means; I invented a new method just for it. Just because you don't personally know the impact of an innovation or its history doesn't make it "mild"; your label of "mild" just reflects your ignorance.

      >Neptune was essentially "discovered" the very moment the anomalies in the orbit of Uranus were detected: if Newton's gravitational paradigm was right, something had to be disturbing the planet, gravitationally, that is.

      Bullshit. "Dark Matter" is just a label for gravitational anomalies, and by your logic whatever causes them is already discovered. Clearly identifying something you don't understand is not discovering the cause of it!

      A discovery requires proof, and nobody has any proof that Dark Matter is actually matter.

      Identifying and clearly delineating an anomaly is a discovery, that is true. That is where we are with Dark Matter. But finding a cause that clearly explains the anomaly is a separate and distinct discovery, and you don't get to diminish that event by hand waving.

      In the case of the LHC, Higgs imagined and clearly delineated a potential particle that could explain a mathematical conundrum in the quantum model of his time. That was not the discovery of the particle! New particles are often proposed by physicists, whole regimes of super-symmetric particles are proposed, strings, gravitons, wimps, monopoles.

      The LHC, using Higgs's discovery of a plausible solution, discovered the Higgs boson; that was a separate discovery.

      Newton did not invent Calculus, either; Leibniz did that and published first, and we still use Leibniz notation. Newton was a credit hog, claiming that his suggestions and speculations that Leibniz may have seen count as "inventing" calculus, and using his power as President of The Royal Society to hold a sham "trial" in which Newton himself wrote the entire conclusion in his own favor.

      Your logic is bullshit, you haven't thought this through.

      As for your life advice, don't think I'm stupid enough to fall for the "wise elder" attempt to suggest superiority. I don't accept fallacious appeals to authority, and using them is just more evidence that your logic is wanting.

  25. Dr. Hossenfelder,

    I completely agree with your position, but for a different reason. I have an old friend who always used to tell me, "I am not a weatherman, but I can sure tell you when it is raining." I only have undergrad degrees in physics and math, but as a retired cop I can sure tell you when there are "holes in your story." The standard model is a marvel of modern science, but it has a number of fundamental holes; why not fix these things instead of trying to find new things?

    Why does beta decay only involve left-handedness? Is the standard model saying that right-handedness doesn't exist at all, or only for beta decay? One of my favorites: quarks come in three colors (QCD); that means there are three different 'down' quarks based on their color. This in turn means that there have to be three different varieties of protons, and the same with the neutron and 'up' quarks. Now let's add spin into the mix. Lots of varieties of two things now. Do all of these varieties actually exist, or does the standard model only work for a specific combination, and if so, why?

    We could also talk about the second and third generations of leptons and quarks ... and the lepton "charge" or number. This too is interesting. Increasing energy is not going to give us any more time to look at so many of these things, so why not take the time to make sure that we really have the basics down with the standard model we have now, before making it even more complicated.

    Thanks Dr. Hossenfelder.

    1. Steve,

      The standard model may just be how nature is, and that's that. None of the points you raise require an answer.

      "One of my favorites, quarks come in three colors (QCD) that means there are three different 'down' quarks based on their color. This in turn means that there has to be three different varieties of protons, same with the neutron and 'up' quarks."

      Quarks don't just sit together next to each other in a hadron, they're held together by gluons, which are not color neutral, so the only thing we know is that the whole hadron is color neutral. Ergo, there's only one proton. (And one neutron, and so on.) Don't you think someone would have noticed if that wasn't so?

    2. When I was a kid, I learned to play several musical instruments. I learned to play trumpet, and it occurred to me there were just 8 configurations for fingering the valves. Similarly, there are 8 configurations of pairs of color charges on gluons. With plain colors there are 6 pairs of (r, b, y), but with anti-colors there are two additional eigenstates. The 6 configurations permit the gluon to swap the colors on quarks, and the additional 2 with a trace such as rr-bar + bb-bar - yy-bar correspond to additional particle-antiparticle processes. The 6 color configurations are an irreducible representation of the roots of SU(3), while the additional 2 are weights.

      The proton with (uud) quarks does have them with different colors (rby). The presence of the three colors is neutral. We are used to thinking of a charge and its opposite as neutral. With QCD there is a triality that is neutral. Reichenbach proposed an alternative logic with a cyclicity of NOT or complementary operations, where three of them was neutral. QCD is similar in that way.

      I agree we can ask questions the standard model does not address. I will not weigh in on any putative answer to these questions. Currently, we really do not know. It might be the standard model is all there is to elementary particle physics. In part the lightness of QFT particles and the value of the Higgs mass might be what permits QFT in an IR setting. If so there is then a particle physics desert potentially up to quantum gravitation. If particles were all near the Planck mass then particle interactions would involve quantum gravitation, as the coupling is effectively GE^2, but then nonlinearity of gravitation might block quantum physics. Maybe instead of quantum particles the universe would be just filled with black holes. We then maybe have this sort of consistency of QFT by the small mass of the Higgs particle. Curiously, this mass is near the upper limit of what it can be. If it were more massive the φ^4 interaction of the Higgs would pop it up to the Planck scale. I wonder if the ratio of the Higgs mass and the Planck mass form a fundamental number similar to the fine structure constant.

      It is frustrating to ponder. It would be nice to look a bit beyond this barrier of experimental ignorance. Nothing settles these matters better than actual data. If all we get at 100TeV or if we could push this to 1000TeV is just standard model then we have a sense there may be a desert leading far up the UV scale. That would tell us something. If on the other hand, there are multi-TeV energy SUSY or sphaleron physics that too would be interesting.

      I think new ways of accelerating particles is needed. Building colliders that are ever larger will lead to a dead end. We might be there already. Oh yeah, BTW I never got very good at the trumpet. I settled in with woodwind instruments and the piano.

    3. Lawrence

      A comment about you saying that r + b + y is neutral in colour, which is true but IMO not the whole story.

      It is interesting that you make an analogy of QCD colours with music. I am an amateur artist and always make the analogy with colour. You use r, b and y as the QCD colours [presumably to avoid 'g' so as not to confuse Green with Gluon]. These are painters' primary colours and sum to black (or brown if you fail to achieve the sweet spot of black). Adding the three light primary colours gives white instead of black and this is relevant to my point. White and black are both colour neutral but one needs to use one palette or the other to find a neutral position and not both light and paint palettes simultaneously.

      This is also reminiscent of Lorraine's point about a single bit not being much use without knowing what to do with it. (Or some such argument. I did not follow closely the AI/ consciousness comments on another thread.)

      If R, G and B are (light) colour QCD primaries then R + G + B is white, as is R + R'. But how do R, G and B know how to add up to white? My preon model has no braiding of the kind sometimes found in preon models. But there is QCD colour in my preons and even worse there must be a precursor colour system (within the preons) to impose the condition that R + G + B = White.

      Say we start with precursor colours (r, g and b) which have no group structure, and combine them to impose a group structure on RGB as follows:
      R = rg'b' , G = r'gb' and B = r'g'b where apostrophe denotes anticolour.

      R + G + B = White
      therefore (rg'b') + (r'gb') + (r'g'b) = White
      which rearranges to rr' + gg' + bb' + (r'g'b').
      The rr', gg' and bb' are exactly neutral and can be ignored.
      That leaves White = r' + b' + g'.

      By a similar process, R' + G' + B' = r + g + b. This cannot be white as r + g + b does not equal r' + g' + b'. As r, g and b are precursor colours and have no group structure they cannot be added to neutral using relationships within themselves. But one could say that if r'g'b' is White then RGB should be black.

      So RGB is colour neutral and R'G'B' is colour neutral, but one is white and the other is black. Q.E.D.? (In more ways than one.)

      Austin Fearnley

    4. Lawrence,

      For discussion purposes, it appears to me that your response supports my position. "Putative" answers to questions are not, or should not be, considered final or factual answers when it comes to the standard model, which is being held up as one of the grand achievements of modern physics. I admit that due to my background I have a unique way of looking at things, but in my past line of work, if you had any questions you were not done. I firmly believe that this should be true for physics also, as we have a whole universe to learn about.

      Again, I admit that I have a different background for this type of discussion, but that does not make the questions I ask invalid or unworthy of a definitive answer. The second and third generations of quarks, they are there, they have been discovered, but as it stands now they simply exist for very short periods of time in a limited number of combinations. We could have a lengthy discussion on this alone. The bottom line, I think we owe it to ourselves to have definitive answers before moving on to a lot more questions.

      How colors add and make other colors is a bit of a strange story. It was pursued by Edwin Land, who founded the Polaroid Company based on polarizing polymers in lenses that blocked one polarization of the light and reduced glare. He studied the physics of color, and the subject is still not fully explored. It seems it would be, but there is a lot involving the brain and perception. The combination of blue, red and yellow light makes white light. The combination of pigments that absorb all colors except these gives black or dark brown. How colors add to make other colors is similar to how different notes add to make chords. This is one reason I like to use yellow instead of green, even though green was my favorite color when I was a kid and I am still rather disposed to it.

      I also like to use yellow because, in a putative SU(4) theory, there are additional colors. The dimension of SU(3) is 8 and that of SU(4) is 15, which is the right size to consider additional colors O, G, P. The Bern et al. concept of the graviton as a colorless pair or entanglement of "gluons" might work this way. These would be an STU-dual theory of QCD of very weak interactions.

    6. I saw an amazing TV science program 30 to 40 years ago by Land on colour theory. One of the best science programmes ever, for me, and I still remember visual scenes in it, such as one in a doorway comparing sunlight outside with electric bulb light inside. I had already learned some colour saturation theory (Chevreul), but this was terrific, especially the role of the brain/mind. However, the brain is not required to combine QCD colours, so I will drop that here. I have seen papers using four colours for QCD but did not pursue them, as I did not know how quirky this idea is. I have other points but am in danger of going too far off the thread topic.

      Austin Fearnley

  26. Another small experimental technology that I like is the idea of self-digging robots based on moles. The idea being we could set them loose and they could dig tunnels through which we could transport goods.

    That would certainly cut down on congestion and carbon emissions.

    I thought that this was just a free-wheelin' technological speculation of mine. But then I came across some people - I think it was in Korea - who have built a small robot doing exactly that! Their promotional video even featured moles!

  27. In the early years of science the narrative was that the universe is deterministic, science would explain this determinism, and the science could be comprehended by the average educated person.
    All the ore has been mined, but the answers are still not there. It is human nature to keep returning to the "memories of yesteryear" where, like in Lake Wobegon, life was perfect. So our modern physicists, remembering so much success of the early years, keep returning to the mine to find additional ore, only to find there is none. But they keep looking.

    Some have realized this and have proposed all sorts of metaphysical realities, rejecting the very concept of an objective reality on which science is based, justified by increasingly incomprehensible math.

    This latter stage tells us about our minds. The human mind is designed to need answers. At a minimum, these answers must be explanatory. The explanation must not contradict what is known or what is commonly accepted as a truth of reality. This is the function of myth.

    Heretofore, science incorporated this but went one step further: it generated hypotheses that can be falsified, and further modified them to incorporate an increasingly precise view of reality. In so doing, it led to predictive power (even in areas that are not clearly related). Occasionally it leads to the resolution of paradox and a deeper understanding of reality.

    So where do we find ourselves with modern physics? I am not a physicist, and I can't even pretend to understand some of the newer physics, but it appears to me that we are in the myth stage.
    And our physicists, needing to maintain meaning and purpose in their lives, keep returning to the glories of yesteryear, in the hopes of finding some yet-to-be-discovered nugget.

    1. Maybe we're at a stage where all we can do is create new myths, of different sorts, with varying appeal; and hope that at some point new "science" falls into place behind one of them.

    2. jim_h
      Struggling with little success to understand consciousness from the point of view of physicists, I found your web page. Your photos are amazing.

    3. As is your bio, KWPD.

      IMHO, consciousness cannot be "understood" in the context of physics. In fact physics today seems driven to exclude consciousness from discussion entirely.

    4. Which only means current Physics is INCOMPLETE and badly needs revamping.


      "The day science begins to study non-physical phenomena, it will make more progress in one decade than in all the previous centuries of its existence".


    5. jim_h, Quantum Bit,
      This blog encourages dialog about modern physics, and helps engineers understand a little about research beyond the simplifications where we can actually apply it. In even touching on understanding consciousness, it inches towards adding the insights of the greatest engineers of all, e.g. Tesla. May it be so.

  28. We recently made the biggest telescope in history and obtained a nice portrait of a black hole.

  29. In his typically mind-boggling novel "Diaspora", occasional physicist Greg Egan describes a linear electron collider 140 billion kilometers long, with a collision energy of 12,600 billion trillion eV, used for wormhole engineering. It did take a cosmic disaster that killed a million individuals to generate consensus to fund the project, though.

  30. Is there any estimate for the experiment that would place atomic clocks in Earth's orbit around the sun? I assume that launching them would be by far the most expensive part; however, I think multiple clocks could be launched per rocket. I'd speculate that a lot of useful data could come out of that experiment beyond just gravitational waves, in regard to the malleability of spacetime in our local space.

  31. You must have forgotten that we already have lots of atomic clocks in "Earth's orbit around the sun". Each navigation satellite carries at least two of them.

    1. In Earth's orbit around the sun. Not in the orbit of Earth.

  32. The fundamental forces of nature are disposed to form conditions where self-auto amplification occurs. Gravity will produce a black hole. The strong force will self-auto amplify through the neutron based chain reaction. The electroweak force shows indications that Bose condensation of Dirac spinors, if properly formed and pumped, will produce an electroweak singularity in which the weak force and the electromagnetic force recombine into a state of electroweak unification. There are experiments that show that this electroweak unification is occurring. The mechanism that supports this spinor condensate singularity is metal nano- and microparticle interaction with light. The most powerful auto self-amplification occurs in cases where the nanoparticles are hole superconductors. These nanoparticles are metastable and coherent, and can support a state of auto self-amplifying electroweak unification for years… no pumping required. The behavior of these spinor condensates is consistent with the theories proposed for GUTs: weak-force action at a distance, proton decay, destabilization of matter, instant isotope stabilization, and transmutation of elements.

    I am looking forward to the day when science sees fit to use these naturally occurring condensed-matter mechanisms to look more deeply into the secrets of nature that are not now accessible to existing methods of examination.

    1. Axil,

      "(...) The fundamental forces of nature are disposed to form conditions where self-auto amplification occurs. Gravity will produce a black hole. The strong force will self-auto amplify through the neutron based chain reaction (...)"

      I used to think that what is unleashed in a neutron-based chain reaction (an A-bomb, I surmise) is electrical forces ...

      Some nuclei ("nuclear species", more technically) such as U-235, Pu-241, etc. are in a precarious state of balance between the nuclear forces holding them together (the neutrons gluing the protons together) and the electrostatic forces of repulsion that try to push the protons apart.

      All it takes for lumps of that stuff is some slow (thermal) impinging neutron, moving more or less at the speed at which gas molecules move in the air on average, and that the lump be sufficiently massive and with the right geometry (compact, please): kabooooooom!!! Obviously there are engineering tricks to this. You must somehow achieve supercriticality in the blink of an eye.

      But in spite of the common terminology, the A-Bomb is more justifiably "electrostatic" than it is nuclear, even if it's stray nucleons (neutrons) that trigger the chain reaction.

      At least this is what I understood in Feynman's books a long time ago.
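The "supercriticality in the blink of an eye" in the comment above can be illustrated with a toy multiplication model: each generation, the neutron population is multiplied by the effective multiplication factor k_eff, and the assembly is supercritical when k_eff > 1. A sketch only; the k_eff value and generation count below are illustrative assumptions, not actual reactor or weapon figures:

```python
# Toy model of a fission chain reaction: in each neutron generation,
# the population is multiplied by k_eff (supercritical means k_eff > 1).
def neutron_population(n0: float, k_eff: float, generations: int) -> float:
    """Neutron count after the given number of generations."""
    return n0 * k_eff ** generations

# With an illustrative k_eff of 2 and a generation time of order tens
# of nanoseconds, a single neutron multiplies to more than 10**24
# neutrons within ~80 generations, i.e. within microseconds.
print(neutron_population(1, 2.0, 80))   # about 1.2e24

# A subcritical lump (k_eff < 1) instead fizzles out:
print(neutron_population(100, 0.5, 10)) # about 0.1
```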

  33. On the AI/brain-function topic: glial cells are too often ignored. Traditionally viewed as passive support structures, they might well play a role in brain function. Nerve cells also exhibit wave-like depolarization along their cell membranes that is below the threshold for on-off binary signaling, but it is not clear that it plays no significant role in the complex activity that the brain performs. There could be a mix of analog and digital activity in the brain. The analog activity might account for some of the properties regarded as emergent. In any case, if there is anything the brain can't model, we aren't going to be aware of it. So compare the set of all things the brain is capable of modeling to the set of all things that exist. Is there a perfect correspondence? One-to-one? That would be rather surprising.

  34. Is there any known mathematical derivation which enables the double-slit phenomenology to be derived on the basis of the position-momentum uncertainty principle and the spatial measurement limitations which are a consequence thereof? Or at the least, is there some sort of consistency calculation which demonstrates a logical connection between uncertainty and double slit?

    I tried to post this under but my browser would not let it through. I am guessing that perhaps new comments on a post are foreclosed after a period of time?

    1. Jay,

      If a comment thread exceeds 200 comments, you have to click on the "load more" button at the bottom of the page to post a new comment (sometimes repeatedly). We have all been waiting since forever for Google to fix their comment sections, but it's still not working properly. Sorry about that.

  35. I graduated in 2001 in particle physics and decided not to follow the academic path, because I already thought that the LHC produces particles which exist far too briefly to ever be studied in detail. I asked another PhD student about my doubts and he replied that these are "ketzerische Fragen, die man besser nicht laut stellen sollte" (heretical questions that are better not asked out loud). So much for academic freedom. That only confirmed my decision.
    The topic is hugely fascinating, but the answer cannot be to build bigger and bigger machines which will never produce anything remotely useful except for a few experts. Research projects need to pay dividends to society, by promising useful new technology, if they consume a significant amount of economic resources. Astronomy with the OWL has the same issues as the LHC: the next one always needs to be even bigger. Where will this stop?

    1. I can confirm that particle physicists tend to strongly discourage discussing whether their research has any use. The question is basically considered taboo. If you dare raise it, you will be called "anti-science".

      This lesson is taught early to students and sits very deeply in the community today. You see this vividly in the absence of any discussion about whether building larger and larger colliders makes any sense: There isn't any discussion because the question whether the expense is justified is considered heretical.

      Fwiw, this kind of behavior is a hallmark of groupthink.

    2. The applied impact of particle physics comes not so much from the particles as from the techniques. Proton treatment of cancer is one case: a small proton accelerator, using essentially the technology developed for particle physics, is a means of treating cancer. There are a few other instances where detector technology finds other uses.

    3. There are several practical applications. One of my favourites is sealing milk cartons, which entails making the glue set or polymerize faster by means of a directed electron beam. Physicists can be proud of this.

    4. "ketzerische Fragen" ... ROFL


      I thought we were partly joking with the "myth" accusations and all that, but Particle Physics (or Science in general) definitely seems to have degenerated into some sort of religious establishment, with a hierarchy (clergy), magical thinking, bureaucratized scientists that look like politicians or priests (which would be worse?), a Pope of science at the apex of the pyramid, etc.

      I can hardly believe my eyes now ...

      Is it perhaps impossible for humans to invent some type of structure or theoretical system that does not eventually morph into an organized "religion"???

      It happened with Communism, it's a fact with Neoliberalism now, it happens with every type of nationalism worth its salt, and there are symptoms of many other paradigms and -ISMs also becoming systems of religious belief that cannot be contested or questioned in the rational arena.

      But the BIG MYSTERY (does the hierarchy of particles keep on going forever? Are there patterns to it?) is still there, and it is FASCINATING ... The most interesting cryptogram in the Universe is the Universe itself.

      Yeah, build an accelerator around the equator ... WHY NOT? After all, the USA seems to be spending circa $700,000,000,000 yearly on its "military budget". More than anything, paying pathologically obese people who sit at a desk all day long and write nonsensical stuff that is passed off as surveying.

      I'd rather use those resources to probe the entrails of Reality, to ask direct (experimental) questions of the Universe that engendered us, and to sort out many other things in the realm of science and technology. Recently it was shown that they couldn't even produce crude masks to curb Covid-19 contagion ... How pathetic.

      Build a supergigaAccelerator that produces cascades of even more ethereal particles and thus more data for the computers to munch on; or build an artificial megasuperBrain that can spot in the data patterns of superhuman subtlety that elude our poor simian brains; or perhaps go all-in for communication with extraterrestrial civilizations; I don't care. Flip a coin and pick an option. You can't fail with any of these.

    5. I am now working in the medical software field, where I can make a difference to many people using these products. Proton therapy has its own issues. Carbon ions would be better, to get a sharper Bragg peak. There is still a lot of research to be done to make it mainstream. What is missing most (at least according to someone who studied medical physics) is a working Compton camera, to be able not only to model where the beam is depositing its energy but also to measure it during application, so the beam focus/position can be corrected in real time. For moving parts (you are still breathing, right?) like liver tumors this is essential. In principle it works, but at least Siemens Medical was not able to make it profitable, for whatever reasons. Whether plasma accelerators will be able to cut costs significantly remains to be seen. What you call a small accelerator still costs around 100 million euros as a turnkey project. I am not saying particle physics has no practical applications. But extreme energies, where accelerators produce particles which fly a few nm before they decay, will never be of any practical use, except that we then know they existed by looking at their decay jets.

    6. Sabine, don't you get tired of the comments from an increasing number of science-bashers under your articles against Big Science?

    7. Thomas,

      This comment section, unfortunately, is a permanent headache. I want to be available to answer questions on my threads but the vast majority of people come here instead to dump opinions about everything and anything. It's arguably not working very well.

    8. Dr. H.
      "Comments should relate to Blog topics. Please discuss personal theories elsewhere."
      Your headache remark is alarming. Adding something like the above as the first sentence of Comment Rules might help keep us on topic.

  36. Quantum Bit:

    You have reached the end of my patience.


COMMENTS ON THIS BLOG ARE PERMANENTLY CLOSED. You can join the discussion on Patreon.
