
Sunday, June 21, 2020

How to tell science from pseudoscience

Is the earth flat? Is 5G a mind-control experiment by the Russian government? What about the idea that COVID was engineered by the vaccine industry? How can we tell science from pseudoscience? This is what we will talk about today.

Now, how to tell science from pseudoscience is a topic with a long history that lots of intelligent people have written lots of intelligent things about. But this is YouTube. So instead of telling you what everybody else has said, I’ll just tell you what I think.

I think the task of science is to explain observations. So if you want to know whether something is science you need (a) observations and (b) you need to know what it means to explain something in scientific terms. What scientists mean by “explanation” is that they have a model, which is a simplified description of the real world, and this model allows them to make statements about observations that agree with measurements and – here is the important bit – the model is simpler than just a collection of all available data. Usually that is because the model captures certain patterns in the data, and any kind of pattern is a simplification. If we have such a model, we say it “explains” the data. Or at least part of it.

One of the best historical examples of this is astronomy. Astronomy has been all about finding patterns in the motions of celestial objects. And once you know the patterns, they will, quite literally, connect the dots. Visually speaking, a scientific model gives you a curve that connects data points.

This is arguably over-simplified, but it is an instructive visualization because it tells you when a model stops being scientific. This happens if the model has so much freedom that it can fit any data, because then the model does not explain anything. You would be better off just collecting the data. This is also known as “overfitting”. If you have a model that has more free parameters as input than data points to explain, you may as well not bother with that model. It’s not scientific.
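
(A minimal illustration of this point, with invented numbers; Python and NumPy are my own assumptions here, since the text names no tools: a polynomial with as many free parameters as data points passes through every point, yet captures no pattern.)

```python
# Hypothetical sketch of overfitting: the data are invented for this example.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
y = 2.0 * x + rng.normal(0, 0.1, size=x.size)   # a simple law plus noise

line = np.polyfit(x, y, deg=1)     # 2 parameters: captures the pattern
wiggle = np.polyfit(x, y, deg=9)   # 10 parameters: one per data point

print("straight-line residual:", np.sum((np.polyval(line, x) - y) ** 2))
print("degree-9 residual     :", np.sum((np.polyval(wiggle, x) - y) ** 2))
# The degree-9 "model" fits essentially perfectly, but it is just a
# re-encoding of the data; it explains nothing and predicts nothing sensible.
```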

There is something else one can learn from this simple image, which is that making a model more complicated will generally allow a better fit to the data. So if one asks what the best explanation of a set of data is, one has to ask when adding another parameter no longer justifies the slightly better fit to the data you would get from it. For our purposes it does not matter exactly how this is calculated; suffice it to say that there are statistical methods to evaluate it. This means we can quantify how well a model explains data.
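
(For readers who want to see one concrete way such a method can work, here is a sketch using the Akaike Information Criterion, which penalizes every extra parameter. The data and the choice of criterion are illustrative assumptions on my part, not something prescribed in the text.)

```python
# Illustrative model comparison with the Akaike Information Criterion (AIC).
import numpy as np

def aic(y, y_fit, n_params):
    """AIC for Gaussian errors: n*log(RSS/n) + 2k; lower is better."""
    n = len(y)
    rss = np.sum((y - y_fit) ** 2)
    return n * np.log(rss / n) + 2 * n_params

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 50)
y = 1.5 * x + rng.normal(0, 0.05, size=x.size)   # invented data

for deg in (1, 2, 5):
    coeffs = np.polyfit(x, y, deg)
    score = aic(y, np.polyval(coeffs, x), deg + 1)
    print(f"polynomial degree {deg}: AIC = {score:.1f}")
# The higher-degree fits reduce the residuals only marginally, so the
# parameter penalty makes the simplest adequate model come out best.
```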

Now, all of what I just said was very quantitative, and models are not quantitative in all disciplines of science, but the general point holds. If you have a model that requires many assumptions to explain few observations, and if you hold on to that model even though there is a simpler explanation, then that is unscientific. And, needless to say, if you have a model that does not explain any observation, then that is also not scientific.

A typical case of pseudoscience is the conspiracy theory. Whether that is the idea that the earth is flat but NASA has been covering up the evidence since the days of Ptolemy at least, or that 5G is a plan by the government to mind-control you using secret radiation, or that COVID was engineered by the vaccine industry for profit. All these ideas have in common that they are contrived.

You have to make a lot of assumptions for these ideas to agree with reality, assumptions like somehow it’s been possible to consistently fake all the data and images of a round earth and brainwash every single airline pilot, or it is possible to control other people’s minds and yet somehow that hasn’t prevented you from figuring out that minds are being controlled. These contrived assumptions are the equivalent of overfitting. That’s what makes these conspiracy theories unscientific. The scientific explanations are the simple ones, the ones that explain lots of observations with few assumptions. The earth is round. 5G is a wireless network. Bats carry many coronaviruses, these have jumped over to humans before, and that’s most likely where COVID also came from.

Let us look at another popular example, Darwinian evolution. Darwinian evolution is a good scientific theory because it “connects the dots”, basically by telling you how certain organisms evolved from each other. I think that in principle it should be possible to quantify this fit to data, but arguably no one has done that. Creationism, on the other hand, simply posits that Earth was created with everything in place. That means Creationism puts in as much information as you get out of it. It therefore does not explain anything. This does not mean it’s wrong. But it means it is unscientific.

Another way to tell pseudoscience from science is that a lot of pseudoscientists like to brag about making predictions. But just because you have a model that makes predictions does not mean it’s scientific. And the opposite is also true: just because a model does not make predictions does not mean it is not scientific.

This is because it does not take much to make a prediction. I can predict, for example, that one of your relatives will fall ill in the coming week. And just coincidentally, this will be correct for some of you. Are you impressed? Probably not. Why? Because to demonstrate that this prediction was scientific, I’d have to show it was better than a random guess. For this I’d have to tell you what model I used and what the assumptions were. But of course I didn’t have a model, I just made a guess. And that doesn’t explain anything, so it’s not scientific.

And a model that does not make predictions can still be scientific if it explains a lot of already existing data. Pandemic models are actually a good example of scientific models which do not make predictions. It is basically impossible to make predictions for the spread of infectious diseases because that spread depends on policy decisions which themselves can’t be predicted.

So with pandemic models we really make “projections” or we can look at certain “scenarios” that are if-then cases. If we do not cancel large events, then the spread will likely look like this. If we do cancel them, the spread will more likely look like that. It’s not a prediction because we cannot predict whether large events will be canceled. But that does not make these models unscientific. They are scientific because they accurately describe the spread of epidemics on record. These are simple explanations that fit a lot of data. And that’s why we use them in the first place.

The same is the case for climate models. The simplest explanation for our observations, the one that fits the data with the least amount of assumptions, is that climate change is due to increasing carbon dioxide levels and caused by humans. That’s what the science says.

So if you want to know whether a model is scientific, ask how much data it can correctly reproduce and how many assumptions were required for this.

Having said that, it can be difficult to tell science from pseudoscience if an idea has not yet been fully developed and you are constantly told it’s promising, it’s promising, but no one can ever actually show the model fits the data because, they say, they’re not done with the research. We see this in the foundations of physics most prominently with string theory. String theory, if it worked as advertised, could be good science. But string theorists never seem to get to the point where the idea would actually be useful.

In this case, then, the question is really a different one, namely, how much time and money should you throw at a certain research direction to even find out whether it’s science or pseudoscience. And that, ultimately, is a decision that falls to those who fund that research.

152 comments:

  1. "Multiverse theories are utterly conventionally scientific, even if evaluating them can be difficult in practice."
    Sean Carroll

    What is "utterly conventionally scientific"?

    Replies
    1. No, they are not. I have made several videos in which I explain this. Multiverse theories violate the central assumption of science which is "make no unnecessary assumptions".

    2. There are pseudoscientists everywhere.
      Even at " top universities. "

    3. I have to say I agree more with Phillip on this. The existence of other cosmologies or the multiverse comes from inflation. It is a derived result from inflation, or maybe better put a general setting for inflation. Inflation has some decent support with ΛCDM prediction of CMB spectra. It is not a finalized result yet, but data support continues to mount.

    4. I look forward to you and Sean Carroll duking it out some time over what exactly makes an assumption "unnecessary".

    5. Lawrence,

      "The existence of other cosmologies or the multiverse comes from inflation. It is a derived result from inflation, or maybe better put a general setting for inflation."

      No, it does not. What you say is just bluntly wrong. You are confusing mathematics with reality. Inflation is a mathematical theory. The mathematical structures only "exist" in the sense that they correspond to what we observe. If they do not correspond to anything we observe, their "existence" is an additional and unnecessary assumption. This makes it unscientific.

      Let me repeat this once again: Claiming that something which we cannot observe "exists" is unscientific. It is unscientific.

      This is not even a matter of opinion. That insisting on superfluous assumptions is unscientific is one of THE core principles of science. If you drop it, god is scientific.

      I have explained this all many times before and frankly it blows my mind that I have to constantly repeat this.

    6. Andrew,

      There is nothing to be discussed here. If assumption A leads to the exact same predictions for observables as do assumptions A and B, then B is unnecessary. The assumption that universes which we cannot observe do exist is unnecessary, trivially, to explain anything we can observe, hence it is not scientific.

      It's not a complicated argument and it stuns me that so many theoretical physicists are unable to grasp it. My best explanation for why that is is that they're embarrassed it didn't occur to them earlier.

    7. It must be remembered black holes were considered math fantasies until around 1970. Gravitational waves were also thought to be fiction, even after Feynman gave his moving beads argument. Gravitational waves were only detected in 2015.

      The multiverse is hypothetical. It could also have observable consequences with the spacetime cosmology we observe. Its status might be compared to that of gravitational waves in the 1950-60 time period.

      The status might change in time. Calling something scientific or not is a judgment, where I would at least say the multiverse is more scientific than UFOs.

    8. If a "black hole" is literally "a location in spacetime where the gravitational field of a celestial body is predicted to become infinite by general relativity" (Gravitational singularity) then I haven't seen where that actual and physically-real "infinity" (pure math blackhole) prediction has been verified. Maybe "blackhole" and "multiverse" math just lead to more complex and physically-justifiable math where there is no actual (physical) infinite and no actual (physical) other universes.

      Again, mixing the math with the real world can be done way too sloppily.

    9. The infinity with a singularity indicates some incompleteness with general relativity. While an observer will have an unpleasant experience of being tidally pulled apart on approaching a singularity, it likely does not diverge to infinity. So quantization, or quantum physics, should remove this infinity.

    10. Sabine,

      There's no need to reiterate your bromides, I got them the first time. My point is that as non-physicist there is no easy way for me to choose between your point of view and Carroll's, so I'd like to see a debate.

    11. Some people are truly lost in Math; for these people if there is no theoretical framework supporting empirical observations then these observations are irrelevant; by the way that is the very definition of dogmatism. Implicitly these people are assuming that their knowledge(orthodoxy) of the world is "complete".

      The reality of unexplained phenomena is very real, but rigid mindsets are utterly incapable of grasping that; the case of lightning sprites is one example that may have been reported as UFOs, but these super math-obsessed individuals will never get beyond their bubbles and observe Reality; if left only to them, lightning sprites never would have been directly observed. There is always something in Reality beyond the limited imagination of theoreticians. As Feynman said: Science is the belief in the ignorance of "experts".

    12. Andrew,

      It's not about "choosing a point of view". It's a question of right or wrong. Multiverse theories are either scientific or not. I just explained why they are not scientific: Because they make an assumption that is unnecessary to explain any observation.

    13. Sabine,

      So your disagreement with Sean Carroll simply boils down to you're right and he's wrong. How neat.

    14. Andrew,

      You don't have to believe me. I have told you everything you need to know to make up your own mind. And, yes, it's as simple as that. Multiverse theories are trivially non-scientific. Why are you surprised that the scientists who work on it don't want to talk about it?

    15. "Why are you surprised that the scientists who work on it don't want to talk about it? "

      Huh? We were discussing Sean Carroll. He's talking about it.

    16. Sean Carroll is talking about the fact that physicists who work on the multiverse make a non-scientific assumption which is that they assume the existence of entities that cannot be observed? I must have missed that, could you please point me towards it? I read his books and his papers, but at least from those I get the impression that he, as well as everybody else working on the multiverse, entirely ignores the point. Which, as I said, should not surprise you, etc.

    17. @Jeremy Jr. Thomas: in days of yore, there were people we might call "naturalists". They loved nothing better than going outside and looking, at nature: rocks, animals, plants, the seas, the sky ... They still exist, and many are also serious scientists. Occasionally something new and cool is found, such as the Wollemi Pine; otherwise the discoveries pile up in dusty museum drawers or arcane astronomy journals.

    18. UFOs as manifestations of unexplained phenomena are real, as the reality of lightning sprites shows; the Multiverse is just a speculative idea without any empirical evidence supporting it.

      It is really not surprising that some people consider Multiverse more "scientific" than UFOs since in today's practice of institutional delusion you can make a living talking about Multiverse, or entangled black holes, or post empirical science; some of these experts are in essence just "merchants of hype" milking the almost unlimited social gullibility.

      The advancement of science can be considered in many ways as the never-ending cycle of showing the "experts" of the moment(orthodoxy) wrong over and over again; and Reality always is there to provide the irreducible facts that laugh in the face of these experts far detached from Reality.

    19. Sabine, don't you recall the link posted by Philip Thrift that started this thread?

  2. In science, which is a continuous (strategic) process, the starting points and prior limitations taken into account are not dogmata but reconsiderable considerations.

  3. Shouldn't that be 5G instead of G5? I was scratching my head over what you meant till I got to the paragraph about a mobile network.

  4. I shared your video to my author blog as a thank you, because cool science inspires stories. Colour me a big fan.

    ReplyDelete
  5. Good article. I like paleontology and archeology as other examples of non-predictive science. They are pretty exclusively just cogent explanations of "Here's what happened and what existed." And (unlike pandemics) aren't used much as predictive lessons on what to do now. (Unless "don't get hit by an asteroid, it sucks" counts.)

    This is even more so than astronomy, which has a ton of non-predictive science in it, but is also predictive in terms of orbital mechanics, life cycles of stars, and the ultimate fate of the universe.

    Thanks for writing and hosting, as always.

  6. @ Sabine,

    You write:

    "What scientists mean by “explanation” is that they have a model, which is a simplified description of the real world, and this model allows them to make statements about observations that agree with measurements and – here is the important bit – the model is simpler than just a collection of all available data."

    I don't believe that applies to quantum theory which has no underlying "real world"!

    Replies
    1. The phrase "is a simplified description of the real world" is actually superfluous and you are welcome to ignore it. I only put it there so that people don't think I'm referring to fashion models.

  7. This brings to mind the trite but often observed folk wisdom "Figures don't LIE but Liars sure can figure".
    Obscuration with oversimplified citation or irrelevant application of legitimate work is frequently the leaning post of pseudoscientific argumentation in many areas; in particular, climate change comes to mind.

  8. I like to think of a theory as a sort of data compression algorithm. Information pertaining to the exterior world is run through this “machine” and reduced to some output consistent with a small set of postulates. If some input does not compress, this means either the algorithm is wrong or the input data is wrong. If data is fed into the machine and what comes out has the same level of complexity, the compression does not yield a valid theoretical description. With computer programs for chaos, say the standard map or Hénon-Heiles etc., if you run the output through a compression algorithm it does not compress much. The near-complete randomness of the output is not compressible. However, your computer program of maybe 100 lines of code plus input is a form of compression. So one could, in this limited sense, say the algorithm is the theory, the output are the results, and there is a compression map between them.

    We may say that a theory predicts in the computation of this sort of output. Similarly, or conversely, if the observed results of nature are compressed to fit into the theory, or the theory can reproduce these observed results within certain error criteria, then we say the theory is good. We might think of a theory as a sort of Kolmogorov complexity condition: the minimal symbolic complexity required to describe a certain set of data. This is an interesting concept, because Zurek demonstrated there is no general method for computing what the optimal data compression algorithm is. Along these lines we have the Chaitin-Kolmogorov result that the minimal condition is not computable. This means we may support a theory, it may explain things and give results consistent with observations, but we can never know that any theory is optimal.
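
    As a rough, purely illustrative version of this compression analogy (assuming Python and its zlib module as a stand-in for the "machine"; nothing here is specific to any physical theory): patterned input compresses strongly, near-random input barely compresses at all.

```python
# Illustrative only: zlib as a stand-in "theory machine".
import os
import zlib

patterned = bytes(range(256)) * 100   # highly regular input
random_in = os.urandom(256 * 100)     # no pattern to exploit

print("patterned:", len(zlib.compress(patterned)), "bytes from", len(patterned))
print("random   :", len(zlib.compress(random_in)), "bytes from", len(random_in))
# The regular sequence shrinks dramatically; the random bytes hardly shrink,
# mirroring the claim that structureless output is essentially incompressible.
```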

    There is then the fascinating topic of quantum gravitation, where one of the earliest theoretical results is the Bekenstein bound. This says the entropy of a black hole is S = kA/4ℓ_p^2 for ℓ_p^2 = Għ/c^3 and A = 4πr^2 = 16π(GM/c^2)^2. The black hole then has certain thermodynamic properties, which implies that if the entropy increases so too does the horizon area and the radius = 2×mass of the black hole. You cannot get more stuff into a black hole without increasing its entropy or size. This means the entropy of a black hole is the maximal entropy that can be placed within a surface area A. Entropy and complexity are related to each other, and in some ways equivalent. What this means is the black hole is a sort of Kolmogorov computation for the maximal amount of entropy that may fit within a screen of area A. We can’t get more in, not without increasing the area of the black hole. But wait a minute, if a black hole is this sort of complexity or theory machine how does it compute this optimum which is mathematically impossible?

    In point of fact, it does not really do this. The Bekenstein bound in a quantum setting is

    S = kA/4ℓ_p^2 + ⟨O(z)O(z’)⟩,

    where the last part is a quantum correction given by a two-point function of observables. The quantum error correction (QEC) is then an estimate of the deviation from this compression. The black hole is a sort of quantum error correction code for fields that fall in and are then scattered out, say as Hawking radiation, but it is not computable to determine whether this QEC code is computing the output with perfect fidelity. The quantum uncertainties are a physical form of how the Kolmogorov complexity is not generally computable.

    Replies
    1. Lawrence: I agree that theories are data compression mechanisms; all useful explanations are data compression mechanisms.

      Statistics in general is; "correlation" is actually a measure of reduction in variance explained by fitting a regression line. Similarly one can fit non-linear and multi-dimensional equations to observations and measure the fitness vs number of variables to determine the explanatory power in terms of variance reduction.

      In the earliest simple cases, using a parabola to fit the arc of trajectories of thrown spheres, arrows, etc, one finds the variance reduction so complete that the square laws governing gravity jump out. It is not a leap from that clue to discover the rules of acceleration, circular or elliptical orbits, the gravitational constant on Earth, and so on.
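
      A concrete, synthetic version of that parabola example (Python/NumPy assumed; the numbers are made up): fit a quadratic to noisy trajectory data and measure how much of the raw variance it removes.

```python
# Synthetic projectile data; the fraction of variance explained (R^2)
# quantifies the "variance reduction" of the quadratic fit.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 2, 40)
h = 10 * t - 4.9 * t**2 + rng.normal(0, 0.1, size=t.size)  # height vs. time

fit = np.polyfit(t, h, 2)
residual_var = np.var(h - np.polyval(fit, t))
print("fraction of raw variance explained:", 1 - residual_var / np.var(h))
# Close to 1: the square law "jumps out" of the data, as described above.
```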

      Other such variance reduction techniques are just statistics in general; sorting instances and finding a fitting curve gives us a distribution which may not reduce the variance of the randomized data, but massively reduces the variance of the sorted data, thus giving us more explanatory power; the unsorted data isn't just a random process, it is randomly selecting instances from a non-random distribution which is a better explanation.

      I don't know how to assign an information value to something like Darwin's theory of evolution; but somehow the information in the book clearly reduces the variance of the observations dramatically.

      The simple cases of relative information content are quite clear; if assumption B can be discarded without any effect, then B can be eliminated.

      The less simple cases of relative information content are less clear. If A explains 99% of the variance, and (A,B) explains 99.1% of the variance, how do we compute if the information contained in "B" (which adds to the weight of the explanation) is worth the extra 0.1% of variance reduction?

      Thus far, in science, this seems to be more of a heuristic judgment call than a mathematical call.

      But still, scientific theories are compressions of knowledge in the sense they massively reduce the raw variance of the observations. ("raw" variance being just the variances within the raw data to be explained.)

      I see the "explanation" of data as a reduction of raw variance, and "better" explanations as relying on fewer claims or parameters.

      But other than roughly, I don't know how to quantify it. I imagine this is Information Theory; at least that's how I've approached it for numerical information. Expanding that to less numerical domains like evolution, psychology, history, emotions, etc is too challenging for me!

    2. The classic skit Monty Python did with science is the witch-burning scene in Monty Python and the Holy Grail, when Arthur inquires about knowledge of science. I guess what is science in one age can differ from another. A 19th century physicist might think we lost our minds with quantum mechanics.

      We might think of black holes as quantum state accounting programs that store such information and transmit it out in encrypted form. It does so with an extremum of Kolmogorov complexity. However, there is an incompleteness, which is hidden so to speak by quantum uncertainty.

    3. A theory defined as "a sort of data compression algorithm", with input data from Nature and output a "program", leads to a form of programming-theoretic perspectivism for science, where the program could be in any one of a variety of (high-level) programming languages X, Y, Z, ..., each one with its own denotational and operational semantics.

    4. There is a convergence of sorts between mathematics, physics and computer science. The identification of quantum states with quantized information, mathematical symmetries with transformation principles for states, and computation theory with q-bit math-theory is setting up a sort of paradigm.

    5. Dr. S:

      To this engineer, "theory" is not "data compression." Theory is boundary specification. It begins with selecting, from the vast mathematical descriptions of the physical world, simplified mathematical descriptions, e.g., sets of linear, first-order equations. This is one example of a simplified boundary that can be converted to useful mathematical tools; for example, through transform calculus analysis, with error assumptions.

      If the result smokes and smells bad, we work on improving boundaries and error assumptions, and often "expanding" data.

      Best of karma to you all.

      Bert

  9. You can do a “stump the chumps” with flat-earthers. Apologies to Click and Clack. If Earth were a flat surface, astronomical objects would somehow move around the sky. Now, the moon is one body whose features we can see with the naked eye. So, if Earth were flat, a number of strange things would happen. If the moon remained at a constant height above the Earth, its limb would shift a bit with perspective. If the moon were a disk, its limb would then appear more elliptical at times instead of circular. If the moon were attached to a dome that covered the Earth, then those near the boundary, often depicted as in the southern hemisphere, might be closer at times and see it appear larger. No appeal to gravity or anything even Newtonian is required.

  10. "We see this in the foundations of physics most prominently with string theory. String theory, if it would work as advertised, could be good science. But string theorists never seem to get to the point where the idea would actually be useful." does string theory predictions general theory of relativity?

    Replies
    1. "With that you get a theory that is considerably more complicated than general relativity, but that does not explain the data any better." string theory is a theory of quantum gravity a theory that goes beyond general relativity where the idea would actually be useful the quantum

  11. There are a lot of things that cannot be studied because it would be unethical or people are just hiding data. For example, we cannot go into the labs in Wuhan to read their paperwork. Evolution has no such limitations. If a certain trait or food is good for survival, more people will use it eventually. I was watching a goose attack my car the other day, while its goslings were running to safety. It suddenly dawned on me why animals are aggressive and why it is a conserved trait. It helps the survival of a species when one protects family. How would one even design a controlled experiment to determine if violence is good or bad? Science is too limited. This is why I go with the tradition of having police and armies. These societies survive the evolutionary challenge.

    Replies
    1. There may be wastewater samples from the urban areas around the Wuhan lab from October, November, and December 2019 that can provide a timeline of Covid 19/SARS2 infection in the region. That is, unless the CCP destroyed them.

  12. A problem here is semantic. The word "theory" can mean anything from an established law (theory of relativity) to unfounded speculation (nicotine prevents covid-19). The public confuses the two: "Evolution is only a theory, just like creationism, and thus no better than creationism." Scientists cause this problem by using loose language. Ideally, for public consumption, they should use labels that signify how solid they think a "theory" is - conjecture, plausible model, law, etc.

    Replies
    1. IMHO the public has no idea what words like "conjecture" or "plausible model" mean in this context, and explaining that theories have varying degrees of solidity only worsens things. I suggest just dropping the word "theory" in reference to established science with strong academic consensus. In other words, just refer to evolution as evolution, and act surprised when a politician refers to it as a theory.

    2. @jim_h

      Then one might wonder why students spend years of their life getting degrees in the established science of evolutionary biology. Just quote Wikipedia, "Evolution" is also another name for evolutionary biology, the subfield of biology concerned with studying evolutionary processes that produced the diversity of life on Earth.
      (https://en.wikipedia.org/wiki/Outline_of_evolution)

      And act surprised when your PhD defense asks you any questions. Tell them they are idiots and the entire thing is established science. 100% of people who did this received their PhD!


  13. The evolution of social consciousness is more complicated than it seems; in antiquity knowledge about flora and fauna served as the basis for the religious worldview, then astronomical knowledge and agriculture entered that worldview; also social psychology and human behavior entered that worldview that led to current morality. If I believed in God, the Big Bang would serve me as a point of creation, the quantum entanglement would serve me for an omnipresent God, the decoherence for a God who knows it all and your sins leave traces, with the multiverse I would have my Heaven, Limbo and Hell, hahahaha; but I'm an atheist, I'm interested in the phenomenon from a human point of view

  14. I find this confusing. You write:

    A) this model allows them to make statements about observations that B) agree with measurements and – here is the important bit – C) the model is simpler than just a collection of all available data.

    I believe that the two, model and data, are not commensurate, that is, they operate in particular, and separate, spheres. Data = phenomena. Model = the development of an explanatory schema applied to the data available. How can you compare the two on a simple-complex axis?

    Replies
    1. The model needs to reproduce the data. I.e., Nature gives you data. The model also gives you data. You calculate how much input you need to produce data with your model that agree (to some precision) with the data you get from observation.

    2. Thanks, Sabine, that's helpful. How would you apply that to social science models, such as, for example, constructivist learning theory? I'm neither a physicist nor a social scientist, by the way - I can see from your example how you could front/backtest a quantitative model, as from many of the comments here, but it's not clear to me how this works in other domains which are not quite as amenable to numeric analysis.

    3. Dear Sabine,

      You write: "The model needs to reproduce the data." Do you think Darwinian evolution satisfies this criterion? What is the model, what is the input, and what is the data that the model can reproduce to some precision?

    4. Tamas:

      The model is genetic mutation and selection by the environment. The data are fossils. As I have said, I think it is possible to quantify the fit to data, though arguably no one has done that.

    5. Sabine,

      If the model is selection by the environment, then the input of the model has to include the environment, or at least some aspects of the environment, right? I think the problem is that currently no one can tell you exactly which aspects of the environment are needed, so it is impossible to "calculate how much input you need to produce the data". It might turn out that the input (relevant aspects of the environment) is actually more complex than the data (fossils). Of course, I agree that evolution is a good scientific theory, I'm just not sure it satisfies your definition.

    6. Tamas,

      You need to know details to get detailed output of the model. But that doesn't mean that if you lack details the model doesn't tell you anything. As I said, evolution is a good model because it tells you, among other things, that species don't just spontaneously appear out of nothing, they evolve gradually. This fits very well with the data, even if no one has yet quantified how well it fits. Fwiw, I don't think it's all that difficult, it's just that one wouldn't learn much new from it.

  15. One example given here of a pseudoscientific claim - the idea that SARS-CoV-2 was "engineered by the vaccine industry" - is not one that I would wish to defend. I would agree that there are some crazy claims circulating among the public regarding the Covid-19 pandemic, so it was appropriate to formulate an example in this category. But this particular example is also unfortunate, as it smacks of the way the legitimate hypothesis of a lab-leak origin of SARS-CoV-2 would be portrayed by someone illegitimately wanting to squash the subject.

    Most of what I think I understand about this subject comes from a number of Dr. Chris Martensen's reports on the Peak Prosperity Youtube channel (beginning with the one on May 1), from the excellent interview and follow-up Q & A that Bret Weinstein (a bat biologist) did with researcher Yuri Deigin on the Dark Horse podcast, and from Deigin's Medium article ("Lab-Made?").

    The Covid-19 pandemic emerged in Wuhan, a city that hosts one of the two leading labs in the world in the field of bat coronaviruses. On the other hand, the bats in question live in a different province, 2000 km away, and they are not part of the local cuisine.

    The BSL-4 lab in question was conducting gain-of-function research, such research being defined by the NIH as being for the purpose of making such viruses more transmissible or pathogenic.

    The wording of an NIH grant which partially funded this research in Wuhan states that the research will utilize "infectious clone technologies" to study Potentially Pathogenic Pathogens, and then study them with "in vitro and in vivo infection experiments" to assess "spillover potential". The specific focus is on the "S protein sequence data", i.e. RNA of the spikes of bat coronaviruses, known to researchers as their receptor binding domains.

    Incidentally, the principal recipient of this grant is Peter Daszak - perhaps the most frequently cited individual in media outlets to dismiss any suggestion of a lab-origin as "misinformation". But his omission of the fact that he was involved with gain-of-function research suggests dissimulation of the first order on a matter of great public interest.

    To my understanding, the spike protein (receptor binding domain) of SARS-CoV-2 in general is most homologous to a particular bat coronavirus, but a subsection of the RBD, called the receptor binding motif (RBM) is virtually identical to that of a strain found in pangolins - which do not share a habitat with the bats in question.

    Then there's the anomalous furin cleavage site also on the spike protein - a 12 nucleotide sequence not found in any near relative of SARS-CoV-2. In fact, furin cleavage sites are not found in the RBDs of any coronavirus with more than 38 percent homology with SARS-CoV-2. Deigin documents research reports in which furin sites have been added to viruses by American, Japanese, Dutch and Chinese researchers in recent years.

    Finally, SARS-CoV-2 has been found to bind with the human ACE2 receptor more efficiently than with the ACE2 receptors even of bats. This is contrary to the normal pattern in which binding in the new host is less efficient.

    These seem to me to be some of the more important reasons for taking the lab-leak hypothesis seriously, and for regarding statements from virologists denying such a possibility with a large dose of skepticism. I urge people to consider the sources I've referred to above, particularly the very detailed discussions between Weinstein and Deigin, before concluding that a natural origin is most likely. I think folks will find their arguments perfectly reasonable.

    Replies
    1. And what about the smoke above the fence on the grassy knoll, and the man with his umbrella up on a sunny day....

      Of course, one can construct a narrative for a lab leak, but is there any hard evidence? This is just a conspiracy theory - provide lots of apparent coincidences none of which are hard evidence, and suddenly the "evidence" is overwhelming.

      You can construct a narrative with a thousand coincidences like this in which you present the coincidence as supporting the conspiracy narrative, but without any hard evidence of a lab leak it counts for nowt.

      You have been duped.

    2. We’re destroying the planet on every bloody level, and fucking the DNA of the planet is just par for the course. Homo so-called-sapiens wades in boots and all, with not a thought to consequences: it only took 50 years of plastic use to ruin the oceans. Homo so-called-sapiens would show more genuine sapience if they exercised more caution, and more respect for the life of the planet, for all we know it’s the only such living planet.

    3. Considering that microorganisms are the foundation on which all other life rests, and has always rested, fucking the DNA of microorganisms is not a smart move. Fucking the DNA of microorganisms is widespread and routine all round the world. But Homo not-so-sapiens rules.

    4. From his statement that I have been duped, one may assume that Steven Evans rejects my conclusion that there are grounds to consider the hypothesis of a lab leak seriously. He asks for hard evidence - as if "hard evidence" is the bar that must be reached before this particular hypothesis may be considered at all seriously, the similar requirement being waived for the alternative hypothesis of a jump from the wild. This is an interesting philosophy.

      Surely it is wrong to conclude, just because in general a particular horseshoe bat strain of CoV is closest to SARS-CoV-2, while a particular part of its spike protein, the receptor binding motif, is much closer to that from a pangolin strain, that ancestral strains and host sequences do not exist in the wild that would explain this conjunction on a natural basis.

      Nor does the absence of furin cleavage sites (which plays a crucial role in SARS-CoV-2's mechanism of cell entry and replication) in other beta coronaviruses, and its existence in SARS-CoV-2 as an insertion not as a mutation prove a non-natural origin.

      Nor may we exclude a possible natural origin on the grounds that the horseshoe bats in question were hibernating when the outbreak occurred, or that it occurred in a city more than 2000 km away from where they live and are not consumed as food there.

      However we do know that this city has two labs which conduct research on bat coronaviruses, one of which has been involved in work which investigates changes in the transmissibility and pathogenicity that can result from introduction of genetic changes using "infectious clone technology." We know that this kind of research carries with it risks that are acknowledged even by its proponents, and has been subject to controversy and temporary moratoria in the US. We know that numerous unintentional lab leaks have occurred in the past. We know that various labs around the world have experimented with adding furin sites to viruses.

      More specifically, we know that the particular lab in question was involved in carrying out modifications to the spike proteins of coronaviruses, including specifically modifications to the receptor binding domains/motifs, including creating of spike protein "chimeras", and with testing these with ACE2 receptors from humans and other species.

      For example, we read in the introduction to a 2007 paper by Shi Zhengli and colleagues ("Difference in receptor usage ...", J. Virol. - see Deigin's article for link):

      "Third, the chimeric S covering the previously defined receptor-binding domain gained its ability to enter cells via human ACE2, albeit with different efficiencies for different constructs. Fourth, a minimal insert region (amino acids 310 to 518) was found to be sufficient to convert the SL-CoV S from non-ACE2 binding to human ACE2 binding, indicating that the SL-CoV S is largely compatible with SARS-CoV S protein both in structure and in function."

      Amino acids 310 to 518 refers to the receptor binding domain. But further on in the paper they discuss chimeras in which the receptor binding motifs have been replaced by RBMs of different origin. E.g. "CS_424-494", where "C" means "chimera", "S" refers to the spike protein, and amino acids 424-494 is the RBM.

      Is it worthy of note in this connection that SARS-CoV-2 genome is generally closest to RATG13 among known strains, but a near perfect match (on an amino acid basis) with the RBM from a pangolin strain? Or is that just a bridge too far, symptomatic of the tinfoil hatters to "see" connections everywhere?

      I agree with the sentiment expressed by Lorraine Ford in her second reply. There does need to be debate on the great risks versus potential benefits of gain-of-function research. I would further note that there might be parallels with the issues Sabine discusses regarding the funding of big projects in particle physics. But here the stakes are much higher.

    5. Steven Athearn 6:20 AM, June 25, 2020

      What a tiresome, crazy conspiracy theorist, you are.

      The CIA would have considered the possibility of a lab leak immediately. If there were currently any hard evidence of a lab leak, Trump would be stood on top of his golden tower shouting about it.

      "In general a particular horseshoe bat strain of CoV is closest to SARS-CoV-2, while a particular part of its spike protein, the receptor binding motif, is much closer to that from a pangolin strain, "

      Crazy Assumption. You mean among *known* bat and pangolin strains. There are *countless* unknown strains.

      "that ancestral strains and host sequences do not exist in the wild that would explain this conjunction on a natural basis. "

      Crazy Assumption. You mean, are not *known* to exist. Do you really think every strain in the wild is known? How can you be this crazy?

      "the absence of furin cleavage sites in other beta coronaviruses,"

      Crazy Assumption. You mean, in other *known* coronaviruses. There are countless unknown strains. Again, you are so crazy you think every virus strain in the wild is known. Have you any idea of the number of possible permutations?

      "and its existence in SARS-CoV-2 as an insertion not as a mutation prove a non-natural origin. "

      Crazy Assumption. You can't distinguish an insertion from a mutation. How can you possibly know that it didn't mutate in the wild? Absolute conspiratorial garbage.

      "were hibernating when the outbreak occured"
      Easier to catch for food.

      "this city has two labs which conduct research on bat coronaviruses"
      Right, crazy, but we don't know there was a lab leak, because you have provided no hard evidence, just crazy nonsense.

      " and are not consumed as food there. "
      So you know every food sold in the markets in Wuhan? Amazing.

      " in a city more than 2000 km away "
      So you've never heard of **transport**, is that what you are saying?

      "We know that numerous unintentional lab leaks have occurred in the past. "
      And therefore a leak must have occurred here? Crazy.

      " we know that the particular lab in question was involved in ..testing these with ACE2 receptors from humans and other species. "
      So we *know* that this lab was building exactly the SARS-CoV-2 virus? Interesting how you know that, crazy, given you provide no hard evidence.

      "we read in the introduction to a 2007 paper"
      And it's now 2020.

      "was found to be sufficient to convert the SL-CoV S from non-ACE2 binding to human ACE2 binding"

      Omg, they did it. You're right. Except this was 2007, and you have no evidence that SARS-CoV-2 was made or leaked from the lab in 2019. We know the labs research viruses. That's not evidence of a leak.

      "SARS-CoV-2 genome is generally closest to RATG13 among known strains,"

      *Known* strains - correct. There are countless unknown strains.

      " but a near perfect match (on an amino acid basis) with the RBM from a pangolin strain?"
      Which came about *in nature*.

      " Or is that just a bridge too far, symptomatic of the tinfoil hatters to "see" connections everywhere?"

      Exactly. You have decided what your narrative is and presented biased information to suit the narrative. You conveniently ignore the fact that neither you nor anybody else has any idea what all the strains are that exist in the wild - no idea. The permutations are countless. You have no idea what all the foods are that are sold in the markets in Wuhan. And you have no hard evidence of a leak from the lab.

      It is a classic, evidence-free conspiracy theory. You are completely nuts.

      You are a disgrace to the name "Steven". Please change your name. At least change the spelling to "ph".

    6. Apparently, we DON’T need an international investigation into the source of the coronavirus, because Steven Evans can tell any investigators everything they need to know. What a marvel he is.

    7. Lorraine Ford 9:34 PM, June 28, 2020

      Can you read?
      I said there is no hard evidence currently of a lab leak, and that Steven Athearn's waffle is a conspiracy theory. I don't appear to have written that there is no need for an investigation.

    8. Steven Evans would do well to read what I actually wrote instead of launching an unhinged attack on what, according to his preconceptions and fevered imagination, I must be saying. Crucially, I took pains to not deny the possibility of a natural origin, and I specifically pointed to the possible existence of currently unknown CoV strains and host sequences that would explain CoV2's origin on a natural basis. Leaving out the words with which I begin that sentence he then charges me with assuming the opposite and goes on to make this the centerpiece of his fantasized case against me.

      In the context of stressing that I would not want to rule out a natural origin, I raised several issues which did, however, point to the need for proponents of the natural origin hypothesis to meet some objections. Basically this was intended as a response to Evans' odd evidentiary standard, demanding "hard evidence" for one hypothesis while not recognizing any evidentiary burden for the other.

      Nowhere in his "response" does he refer to the actual conclusion which I am advocating and which I specifically challenged him to address - namely, as clearly reiterated in the first sentence of my first reply, "that there are grounds to consider the hypothesis of a lab leak seriously." Instead, he quotes fragments of the evidence I cited for that conclusion, and then uses these as launching points for his own non-sequiturs. He seems to recognize no middle ground between a hypothesis that is "evidence-free" and one for which there is hard evidence that it is correct.

      In his butchered rendition, the specific evidence I cited - that this specific lab is known to have created multiple chimeric coronavirus spike proteins in which one very specific part of the RNA sequence is replaced by RNA from a different source, together with the fact that in CoV2 the precise same part of the spike protein has the appearance of being such a chimera - does nothing at all to increase the plausibility of the lab leak hypothesis above that of the bare fact that "the labs research viruses."

      A couple of loose ends: yes, I am relying on reports in saying that bats were not sold at the market. And yes, there is a distinction between 'mutation' and 'insertion'. The former refers to changes in the nucleotide sequence that do not change the length of the sequence, whereas 'insertions' (and 'deletions') involve changes in the length of the sequence. The furin site in question is a twelve unit insert. Once again, that doesn't prove lab origin, but natural insertions involve different pathways than mutations and are evidently much rarer.

      I'll end with a couple of quotations and references:
      “The idea that it was just a totally natural occurrence is circumstantial. The evidence it leaked from the lab is circumstantial. Right now, the ledger on the side of it leaking from the lab is packed with bullet points and there’s almost nothing on the other side.” --"senior administration official" quoted by Josh Rogin, "State Department cables warned of safety issues at Wuhan lab studying bat coronaviruses", Washington Post, April 14, 2020.

      "We can't say that this is a lab leak. But we can say anybody who tells you for sure that it isn't is either very confused about the facts or not telling you what they know. And that, you know, you [Deigin] are not a virologist, but in a sense, that may be why you're able to play the role that you're playing. There's something about the incentives that surround mainstream virology at the moment has caused the entire field to line up behind a story that is incorrect - that lab-leak is not worth considering as a hypothesis." - Bret Weinstein, Dark Horse Podcast, June 8, 2020

      See also: "Did the SARS-CoV-2 virus arise from..." Bulletin of the Atomic Scientists, June 4, 2020.


    9. Steven Athearn 9:27 PM, June 29, 2020

      Instead of whining like a loser, why don't you read my post and think about it. I'm trying to teach you how to think properly so the tiny, tiny number of humans in the world capable of basic rational thought might nudge up.

      There is the possibility of a lab leak because there is a virus research lab in Wuhan. But none of what you wrote is hard evidence for the simple, simple reasons I gave you. You wrote a classic example of a conspiracy theory - lots and lots of irrelevant points that add up to no evidence. Try to understand this instead of whining.

      "We can't say that this is a lab leak."
      Exactly my position and that of anyone rational.

      People like you are the problem with the world - you don't understand basic logic and rational thought, and when you have your idiocy pointed out to you line by line, you just come up with more rubbish.

      Lose the ego and get a brain.

  16. "And that, ultimately, is a decision that falls to those who fund that research."

    And who might that be? Quis custodiet ipsos custodes?

  17. Speaking of pseudoscience, I wonder how Phillip Helbig's paper on universal fine-tuning is coming along...

  18. Thomas Kuhn claimed that scientists look for five values when evaluating a scientific theory: predictive accuracy, consistency, breadth of scope, simplicity and (last but not least) fertility.

    Some philosophers use, as an example of a fertile (but false) model, Wegener's floating-continent model. It did lead to the currently widely accepted and fruitful plate-tectonic model.

    In other words, a good model has a kind of metaphoric quality, it helps to understand some part of reality (the data?) that is not well understood before.

    Maybe fertility is what is missing from pseudoscience?


  19. Re cranks in tinfoil hats:
    I think it should be pointed out that some physicists are cranks in the sense that they believe that computers/AIs could become conscious. There is nothing MORE cranky or pseudoscientific than this illogical belief in the magical powers of electrical circuits, especially when we already know that higher-level consciousness requires living things with bodies and brains.

    Replies
    1. I agree part way, or in some qualified manner. Yet we can ask what qualities living things with organic molecules and brains with squishy neurons possess that make them unique. Magical ideas of souls or nefesh in Judaism are suspect.

    2. Lawrence,

      There is nothing special about any particle, atom or molecule: it’s just the lawful and logical relationships they are in, and the interactions, that make the difference between a television and a tiger, or a computer and a brain-body. If living things have consciousness of their high-level relationship to the rest of the world, it can only be because particles, atoms and molecules have proto-consciousness of their lawful relationships and numbers. What IS suspect is the “miracles of emergence” idea.

      Secondly, brain-bodies process information but computers process symbols of information: there IS genuine information being processed in computers, but that’s just low-level voltage information. There’s a very big difference between information and symbols of information.

    3. "especially when we already know that higher-level consciousness requires living things with bodies and brains."

      How do we know that? I would like to point out that we are the end product of billions of years of evolution which conceivably could have taken another path.

      Also, it is a very basic fact that even many positive examples do not prove anything. 1 is prime (I am stretching the definition here), 2 is prime, 3 is prime. Oh, obviously every number must be prime.

      There is no proof that higher-level consciousness requires bodies and brains. The fact that all higher-level consciousnesses we are aware of (and btw. we are probably talking about a sample size of one, or maybe a few more if you count primates or dolphins or whatever suits your fancy) do have a body and a brain does not in any way prove this must be the case.

    4. Lorraine: What you propose is panpsychism. If particles have a unit of consciousness this should then be a quantum number. Conway and Kochen, in a related line, showed that if quantum observers have free will then elementary particles have some unit of free will. The idea that particles have some unit of consciousness would mean there is some "psy" quantum operator and corresponding observable. There is no evidence of this.

    5. G. Bahle 4:50 AM, June 23, 2020,

      Consciousness is not a disembodied or useless thing: it is associated with organisms, and it has appropriate content that is useful to the organism. Basic higher-level consciousness is simply information about the surrounding environment derived from the brain’s analysis and collation of low-level information coming from the organism’s sensory system. The reason an organism needs this higher-level information is to (e.g.) move towards food, or move away from danger, using its body.


    6. Hello Lorraine, it is for that reason that when someone asks me if an AI can have consciousness, I answer: yes, the day that the AI can enjoy listening to music. Consciousness, as a human being's ability to interact with a representation of reality in his brain, which he can manipulate and into which he can incorporate systems of ideas; all this is possible due to the development of emotionality in the human being; music brings out that tool in its purest form.

    7. Lawrence,
      No, you’ve got it wrong. I think the point about consciousness is that “the internal information/ knowledge/ experience dimension” (so to speak) is not accounted for by any “external” measurements. But the content of consciousness CAN be accounted for by the equations, variables and numbers; plus logical symbols like IF, AND, OR, THEN, and ELSE, which are required to obtain higher-level information like “tiger” or “food”.

    8. Luis: the day that the AI can enjoy listening to music.

      So deaf people are not conscious? Was Helen Keller conscious? She was deaf and blind; she could not enjoy music, or art, or movies. She grew up that way. Yet the proof of her consciousness lies in the fact that she DID learn to communicate.

      Emotions are not essential for consciousness at all, particularly for analytic thinking. We have studied people that have lost their amygdalae [the brain's clearing house for emotions] due to disease or injury, and they can be fully rational. Mathematicians can still solve differential equations, lawyers retain their knowledge of the law and how to apply it, chefs can still roast a chicken or bake a cake.

      Rational consciousness is neither magical nor ephemeral, and does not need emotional input. Emotions DO serve a critical purpose in evolution, but they are separable from consciousness.

      And I find your notion of "the development of emotionality in the human being" both naive and potentially cruel. Humans are far from the first to develop emotions; nearly all animals (perhaps even flies) run on emotions. Mice have emotions, dogs have emotions, monkeys and chimps and gorillas have emotions, as do horses, elephants, chickens, swine and cattle. The evidence for all of that is wildly abundant and apparent. Emotions pretty much run their lives; what they are missing are rational abilities, such as the ability to imagine long-term consequences of their actions, or the ability to form generalized models of their environment. Even chimpanzees, in experiments, seem incapable of taking an action in the morning to benefit themselves in the evening, though presented with that morning opportunity day after day for weeks.

      Consciousness is independent from emotion. They can be integrated and interact, certainly, but we can be conscious without it, and there is no reason to believe machines will not be conscious without it.

      In fact, as an AI researcher, I think we'd be stupid to incorporate emotion into AI. Consciousness is a very useful tool for making a focused search, but I don't want my AI to have any fear of death or sense of self-preservation, I don't want it to panic, or hate, or fall in love, I don't want it to get bored or irritated, to be vindictive or angry, to be fearful or cowed, disgusted or in despair. I want it to be a rational problem solver undistracted by emotion. It would be great if it understood emotions and knew how to deal with the emotions of people and other animals. With such an understanding it could simulate such emotions without feeling them, which is helpful in alleviating distress in us emotional animals. It would not be great if it actually felt them.

      (In fact the dumb flaw in most AI apocalypse movies, like Terminator, is people stupidly giving their AI emotions so that their AI then behaves irrationally. We're not going to do that.)

    9. Dr. A.M. Castaldo, thanks for the reply. I believe
      you have simplified emotionality down to the most basic feelings; it is more complex than you think. Without the emotional equivalent of symbols and words and ideas, no information would make sense; without an emotional equivalent in your brain of the general frameworks, you would be without consciousness. Any word, any symbol, is tied to the rest of consciousness by the emotional equivalent that it generates in the brain. When you hear a symphonic poem by Richard Strauss, or read Thomas Mann's Doctor Faustus, whether visually or in Braille, rationality and emotionality go hand in hand. When I said "enjoy music", I was referring to a way of bringing to light all that emotionality hidden in the process of thinking. And of course, you and I and all human beings share with animals, especially with mammals, the ability to evaluate reality through emotionality; it also calculates. Or do you believe that your consciousness has no animal basis? If there were new knowledge in the words or symbols themselves, then letters could be thrown together at random until a logical result appeared. But no: to build new ideas you have to involve visualizations, the imprint of our senses, and there, in that search, in that attempt to reproduce reality, emotionality enters.

    10. Dr. A.M. Castaldo 6:34 AM, June 24, 2020,

      Computers/ AIs can NEVER be conscious because they only process symbols of information. Computers know nothing of these symbols that are literally hidden in the sets and patterns of the computer’s high and low voltages; and in any case these completely arbitrary manmade symbols represent information only from the point of view of human beings who know how the symbols should be interpreted. From the point of view of a computer/ AI (not that a computer/ AI is actually an entity with a point of view) what is happening inside the computer is high and low voltages without any pattern: computers/ AIs that, from our point of view, analyse patterns are not actually seeing patterns.

      (It’s actually more complex than that. As PhysicistDave explained:
      "Just by looking at the physical circuitry and physical voltages, you cannot tell if a given logic gate is a NAND gate or a NOR gate: it all depends on the intention of the design engineer: Is he intending the voltage levels to be interpreted as positive or negative logic? [1])

      On the other hand, snails can be assumed to have higher-level consciousness because they clearly experience genuine higher-level information about the world: “food” is higher-level information, a conclusion or deduction that comes from analysing law-of-nature-level information coming from their sense organs. Human beings use exactly the same process to deduce “carrot”, “food”, “tiger”, “danger”, and to deduce that they have encountered special patterns (i.e. symbols) e.g. letters, words, sentences, equations. This higher-level conscious information that living things experience is built out of logical and mathematical relationships constructed from the fundamental categories of information and numbers coming from the sense organs.

      1. “Usually, in digital circuit design, we take a high voltage to count as a “1” or as “TRUE” and the opposite for a low voltage. But there is nothing in nature that compels us to so interpret the voltages …"In the so-called positive logic system, the higher of the two voltages denotes logic-1, and the lower value denotes logic-0. In the negative logic system, the designations are the opposite."… Just by looking at the physical circuitry and physical voltages, you cannot tell if a given logic gate is a NAND gate or a NOR gate: it all depends on the intention of the design engineer: Is he intending the voltage levels to be interpreted as positive or negative logic?... The exact same physical structure, the exact same voltages, means something different in negative logic…”
      PhysicistDave 4:50 AM, May 09, 2020, http://backreaction.blogspot.com/2020/04/what-is-reductionism.html
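
      To make the interpretation-dependence concrete, here is a minimal sketch in Python (illustrative only, not PhysicistDave's actual example): the very same physical voltage behaviour reads as a NAND gate under positive logic and as a NOR gate under negative logic.

      # A minimal, hypothetical sketch: one fixed physical gate, two interpretations.
      # The physical behaviour: output voltage is HIGH unless both inputs are HIGH.
      def gate_voltage(in_a, in_b):
          return "LOW" if (in_a == "HIGH" and in_b == "HIGH") else "HIGH"

      def interpret(voltage, positive_logic):
          # Positive logic: HIGH means 1. Negative logic: HIGH means 0.
          return int(voltage == "HIGH") if positive_logic else int(voltage == "LOW")

      for a in ("LOW", "HIGH"):
          for b in ("LOW", "HIGH"):
              v = gate_voltage(a, b)
              positive = (interpret(a, True), interpret(b, True), interpret(v, True))    # rows of a NAND table
              negative = (interpret(a, False), interpret(b, False), interpret(v, False)) # rows of a NOR table
              print(a, b, v, positive, negative)

      The voltages never change; only the man-made convention for reading them as symbols does.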

    11. Lorraine Ford says because they only process symbols of information

      So do humans. Our brains have no sight, no hearing, no touch, no smell, no taste, no pain. They are locked in a dark box, and all that gets into that box, besides the blood that carries in nutrients and carries out waste, are electrical signals carried on biological wires we call "nerves". That's it.

      You do not "see" anything, you interpret signals from eyes that can be replaced by machines (and we are making progress in building such machines) that stimulate optic nerves with electrical impulses.

      You do "feel" anything, you interpret electrical signals generated by the mechanical function of nerves, and if I block that signal you will feel nothing, and if I artificially stimulate that signal you will feel what is not there.

      The distinction you make is meaningless. There is no "meaning" accessible to animals and not machines.

      I know I won't convince you, Lorraine, so enjoy your fiction. It seems harmless enough.

    12. Luis: it is more complex than you think; without the emotional equivalent of symbols and words and ideas, without that equivalent no information would make sense;

      This is just baseless assertion, and untrue as well.

      I don't know what you mean by "it also calculates", but I do think consciousness, rationality and emotion (three distinct things) exist on a spectrum and the only difference in humans is a matter of degree.

      I've always kept dogs; they have consciousness, rationality and emotion, their consciousness and strong emotions are the primary reason people keep dogs as pets. But they also exhibit flashes of basic rationality, basic reasoning, that I find clear, puzzle solving that I do not think can be attributed to "instinct".

      Yes, poetry and music do trigger emotions, but so what? The brain is an electro-chemical machine, it can be turned off by something as simple as xenon gas, and turned back on by the removal of it.

      You cannot just assert that consciousness requires emotion, and in fact there are experiments that prove otherwise. The clearing house in the brain for emotions is called the amygdalae (two of them) in the center of the brain. These have been lost to brain tumors, other disease, surgery, accident, poison, drug interactions, etc, in various people. These people have been studied.

      They don't feel emotions. Any of them. But they are still rational and conscious. Mathematicians can still solve differential equations; lawyers can still cite laws and perform legal analysis, like pointing out problems with contracts.

      They are not without disability: they have excessive difficulty (even inability) in making arbitrary choices that are not subject to rational analysis, like what they want for dinner, or what outfit they would like to wear for the day. But their consciousness and rationality are NOT impaired; they remain intact.

      Your premise is just wrong. It's false. The evidence we have is that consciousness and rationality can exist without emotions.

      If you insist otherwise, I have the same message for you as I gave Lorraine: Enjoy your fiction; it seems harmless enough.

    13. "Just by looking at the physical circuitry and physical voltages, you cannot tell if a given logic gate is a NAND gate or a NOR gate: it all depends on the intention of the design engineer: Is he intending the voltage levels to be interpreted as positive or negative logic?"

      On the other side of the galaxy there is a world of humans who evolved independently from us and developed a language which sounds just like English, but in that language "yes" means "no" and every word in it means the opposite of what we mean in English.

      Assuming the quoted statement above proves in some unknown way that computers cannot be conscious, then the language of counter-English proves that humans cannot be conscious.

      After all, our brains are not processing actual information, but only symbols of it encoded as the chemical and EM signals by which neurons and synapses and nerve cells work. (Without which we could not think.)

      To be conscious of something means to be aware of it. Windows 7 seems to be aware of it when I click the Excel icon. (Most of the time, when the power is on and it has some voltages to work with.)

      In short, all "proofs" that computers, given enough processing power and training and self-programming and sensory mechanisms, cannot be conscious always sound to me as proofs that humans cannot be conscious. (And perhaps we aren't, but just think we are.)

    14. There is no clear reason to think circuitry, whether artificial neural networks or more standard computing systems, can't be conscious. However, there is clearly no evidence of consciousness in computing machines. There are analogues between neural systems and digital circuits: flip-flops are analogous to antagonistic op-amps, and to neural dendritic negative feedback. However, a brain at large is structured quite differently.

      There are two other possible big differences. The first is that biological neural systems have massive input/output. Your fingertips alone have 100,000 sensory neurons; computers, by comparison, have few inputs. Brains are structured far more as open systems. The second difference may be that neurons are themselves living. Nothing about ICs is living.

      There is no clear logical or mathematical reason why machines can't be conscious. Of course we have no clear definition of consciousness, which makes any pronouncement along these lines difficult and problematic.

    15. Lawrence: I think the definitions are pretty squishy. For example, I think it is possible to be self aware without being conscious.

      For example, a self-driving car is very self-aware, it "knows" its own shape, speed, momentum and capabilities and is constantly monitoring them and the surroundings, modifying its behavior on a millisecond-by-millisecond basis (faster than humans can) to keep its passengers safe. Even at the expense of damaging itself.

      It seems difficult to define "self awareness" in a way that fits humans but does not fit a self-driving car; both require a sophisticated probabilistic internal model of one's self, and what one is capable of. The car "knows" with great precision how quickly it can come to a stop, where it can plausibly steer, and can even take weather (the current environment) into account for this awareness.
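
      As a toy illustration of the kind of self-model I mean (every number and name here is made up, not taken from any real vehicle):

      # Hypothetical sketch: the "car" consults an internal model of its own braking
      # capability, adjusted for the current environment, to decide what it can do.
      def stopping_distance_m(speed_mps, surface_friction, reaction_time_s=0.05):
          g = 9.81                                                  # m/s^2
          braking = speed_mps ** 2 / (2 * surface_friction * g)    # idealized braking distance
          return speed_mps * reaction_time_s + braking

      def can_stop_before(obstacle_distance_m, speed_mps, weather):
          friction = {"dry": 0.7, "wet": 0.4, "ice": 0.1}[weather] # crude model of the environment
          return stopping_distance_m(speed_mps, friction) < obstacle_distance_m

      print(can_stop_before(40.0, speed_mps=20.0, weather="dry"))  # True: roughly 30 m needed
      print(can_stop_before(40.0, speed_mps=20.0, weather="wet"))  # False: roughly 52 m needed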

      Consciousness may not require self-awareness at all; that seems to be more of a recursive traversal of a tangled tree of models. Our "train of thought" jumps from one model, to another connected model, etc, ad infinitum (or until neural waste builds up and we have to sleep to flush it out, because neurons do not function correctly when pressured to expel waste.)

      Just daydreaming and following a random chain of thought is conscious behavior, but consciousness is also at least somewhat capable of being directed, e.g. to focus on solving a problem and avoiding tangents irrelevant to the problem at hand.

      But it doesn't seem to require any bodily self-awareness, at least. Locked-in people who can't move or feel a thing can be conscious. (We know because mentally they can trigger brain activity on command, and use that ability to answer questions.)

      I agree we need much firmer definitions to distinguish mental functions, which may or may not overlap or even be intertwined.

    16. So-called “AI researcher” Castaldo, JimV, Lawrence Crowell,

      I can see that you 3 really, REALLY don’t understand computers and how they work. You haven’t got the faintest clue what I (and PhysicistDave) are talking about. And you don’t understand that you don’t understand. Despite being complete ignoramuses about how computers work, you are certain that computers can be conscious. Let me advise you that until you educate yourselves on how computers actually work, you will continue to talk bullshit.

    17. Thanks for the reply, Dr Castaldo.
      Consciousness, Rationality and Emotionality are different things? Well, locate them separately in the brain, if you can. The amygdala and the hypothalamus work with neurons like the rest of the parts of the brain, and the plasticity of the brain means that, in case of accident, some parts assume the functions of others. They control our most vital emotions, but this does not mean that the emotionality generated by thinking does not exist just because it is less intense or vital. The brain does not work with defined files and definite digitization; it has a much greater degree of freedom than a computer and can associate the "dog" visualization with anything it wants, in different ways and quantities. They are small emotional states integrated into others, and any of them can influence and produce more intense feelings, because in the end they are of the same nature. For example, nobody has to feel upset because another person offends God, or vice versa, if someone mentions him (supposedly something related to rationality) and according to your independent emotionality; yet some people react as if all their rationality had been shaken. Nor does the brain seem like a logical machine when it is capable of discarding many, many variants by supposing them absurd, unless the entire analysis system is modulated by some type of emotional state; some people call it aesthetics, or beauty.

    18. In contrast to the loads of research investigating how brains work, and research on quantum mechanics and other physics topics, there is no research investigating how computers work, because there is already complete knowledge and understanding of how computers are made to work. Castaldo, JimV and Crowell don’t possess this knowledge or understanding of how computers are made to work.

      In contrast to my (and PhysicistDave) talking about the actual mechanism of how computers are made to work: Castaldo indulges in flowery waffle and assertions about computers and minds; JimV is away with the fairies on the other side of the galaxy; and Crowell seems to think that it is a matter of circuitry, when it is actually a matter of the use of patterns and symbols.

    19. It is a bit unclear to me that most computing systems have what is called self-awareness. There may be lots of data a system stores about its own state, but self-awareness seems to imply some internal map of that. It is a form of self-reference.

    20. So we’ve got 2 bona fide cranks, out in lalaland (Castaldo and JimV), who believe that computers/ AIs could be or are conscious; and one person (Lawrence Crowell) who sees physics, but hasn’t noticed that physics can’t explain patterns and symbols.


    21. Nobody has seen independent Consciousness, Rationality and Emotionality in the brain. An example: a knife: its figure, its edge, its polished surface, its metallic color and the sensation on contact, all of which is integrated into an emotional state; if the knife were made of paper it would feel different and it would be another type of emotion. The apple is round, juicy, tasty and fragrant, and all of this is emotionally integrated. That digitized neural system does not exist; the brain does not build a line by joining points, nor does it build a plane by joining lines; and certainly the feeling that each material thing must have that something palpable has led many physicists to think that there is no objective reality behind quantum mechanics. See how emotional our rationality is.

    22. Lorraine Ford: you 3 really, REALLY don’t understand computers and how they work.

      How cute. I have a PhD in Computer Science, a lifetime 4.0 GPA, and I've worked on contract with four processor manufacturers on new processor designs years before they came to market; including one that just came to market at the end of last year. I have contributed to the hardware design of embedded system circuit boards. I can and have built all the logic gates using transistors and resistors alone. Using gates I have built most of the circuits in a computer.

      My PhD specialty is High Performance Computing, and my day job is working on tools for supercomputing; I have accounts on the fastest computer in the world.

      I think after 45 years in the field, I know something about how computers REALLY work.

      Like I said, enjoy your fiction, because that is all it is.



    23. Luis: and the plasticity of the brain makes that in case of accident some parts assume the functions of others;

      No, not always. Destroy the visual cortex, you are blind forever, even if your eyes and the nerves leading to it are functional and intact. You might as well donate them; they will only atrophy without use. Nothing will develop to assume the functions of the visual cortex.

      Destroy the audio cortex, same thing. You are deaf. Forever. Even if your inner ear is functioning perfectly.

      Destroy the hippocampus and your memory is fucked up, forever. It may not work at all, some people have only a short term memory and that's it. Nothing develops to replace it.

      The Amygdalae are structurally specific like that; they cannot be replaced, plasticity of the brain is not infinite as you assume. You are wrong.

      And yes, Consciousness, Rationality and Emotionality are different things; they are logically separable, and observable without each other. We can see consciousness without rationality, emotion without rationality, and conscious rationality without emotion, which is what we see going on in patients who have lost their amygdalae to cancer, to stroke, to bullet wounds, to poison, or to surgery removing tumors and other such injury.

      Like I said, enjoy your fiction. It has nothing to do with reality.

    24. Castaldo (PhD!),
      The reason you and JimV are bona fide cranks, out in lalaland, is because you haven’t yet noticed that symbols are man-made, entirely arbitrary things. Symbols only REPRESENT information from the point of view of human consciousness when human beings have pre-existing knowledge of how the symbols should be interpreted. You seem to make the mistake of thinking that symbols have a Platonically existing interpretation that computers/ AIs can somehow take advantage of. Having a PhD hasn’t prevented your being very stupid. After 45 years in the field, you DON’T know something very important about how computers REALLY work.

    25. Lorraine: Sorry, insults won't work on me. Apparently you don't know how neurons work, either. Neurons are simple pattern matchers, taking inputs and when they match certain patterns, within some tolerance, they signal other neurons of the existence of the pattern they represent.

      True enough, many seem pre-programmed by genes, but that just means we have evolved to automatically recognize some persistent patterns in our environment; they are still pattern recognizers, most of them developed by experience.

      You are just invoking some form of magical information that only humans can grasp. It doesn't exist. It is a fantasy.

      Symbols represent an internal model of something. We learn to multiply, and in grade school 'x' is the symbol of that operation. Later we learn to use a dot or adjacency as the symbol for that operation. A raised number is an exponent, in signal processing we learn the same raised numbers are relative sample indices.

      Pattern recognizers (neurons) are models of something, beginning with sensory inputs, but organized (naturally and messily) into nested, recursive, cyclic graphs.

      So we can make models that contain models that contain models, ad infinitum.
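
      A minimal sketch of what I mean by pattern matchers stacked into models of models (the patterns and names are purely illustrative):

      # Hypothetical sketch: a "neuron" fires when its input matches a stored pattern
      # within some tolerance; higher units match patterns over lower units' outputs.
      def matcher(stored_pattern, tolerance):
          def fire(inputs):
              mismatch = sum(abs(a - b) for a, b in zip(inputs, stored_pattern))
              return 1 if mismatch <= tolerance else 0
          return fire

      # Low-level units matching raw "sensory" input.
      edge_unit = matcher([1, 0, 1, 0], tolerance=1)
      blob_unit = matcher([1, 1, 1, 1], tolerance=1)

      # A higher-level unit matching a pattern over the low-level units' outputs.
      object_unit = matcher([1, 1], tolerance=0)

      sensory_input = [1, 0, 1, 1]
      low_level = [edge_unit(sensory_input), blob_unit(sensory_input)]
      print(low_level, object_unit(low_level))   # the higher unit signals that its pattern is present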

      You are just wrong, there is no magic in "symbols". Insulting me might make you feel better, but it is not going to convince me or any rational mind.

      You don't know what you are talking about.

    26. Castaldo: Sorry, your ignorance and irrationality won’t work on me. For a start, you don’t even know what symbols are. Symbols are used for communication; symbols are everyday things like: written and spoken letters, words, sentences, mathematical symbols and equations. They physically exist as squiggles on pieces of paper, as patterns within sound waves, and within sets and patterns of high and low voltages within computers.

      It’s clear that you can’t defend your belief that computers are conscious. I notice that you always avoid the issue, and start talking about neurons and living things instead. This is the actual issue:

      Symbols are man-made, entirely arbitrary things. Symbols only REPRESENT information from the point of view of human consciousness, and only when a person has pre-existing knowledge of how the particular symbols should be interpreted. Computers don’t know that sets of high and low voltages are symbols, and computers couldn’t interpret these symbols anyway. You seem to make the mistake of thinking that symbols have a Platonically existing interpretation that computers/ AIs somehow know about.

      I repeat: you and JimV are bona fide, illogical cranks from lalaland.

    27. To start, to restate my earlier comment more carefully: it is a bit unclear to me that most computing systems have what is called self-awareness.

      A lot of argumentation here involves the meaning of what a symbol is. Lorraine says there are no symbols in physics, though a quantum bit or qubit is in some ways an elementary form of a symbol. Quantum physics has some computational analogues, which means we can think of quantum states as symbols. Quantum numbers have also some features of being symbols or symbol-like.

      I am not sure whether AI can be developed into conscious entities. Even if they can it is also hard to know whether they are conscious, for there is a sort of Pinocchio problem of not being able to know what the subjective experience of anything, whether a living creature or some computing system, is.

    28. Lawrence: How do we know other people are conscious? We've never met, I've never met Dr. Hossenfelder, yet I would wager a great deal you are both conscious beings, or at least were at the time of your writing.

      A symbol is a token that represents a model of something else. By that definition I don't see how there are any tokens in quantum physics, it is about the direct interaction of physical things.

      There are also no aircraft in quantum physics, studying aircraft with quantum physics is a fool's errand.

      To study "symbols" we require systems that utilize "models" of reality, so a token can be used to invoke that model. The word "dog" invokes a generalized model of a dog, a cross between an average of dogs, possible exceptions to the model (a dog can be missing a leg or tail or eye or ear), as well as links to specific dogs (all the dogs we've met or seen).

      Like an aircraft, a mental model must be built from many parts. Most of those parts can be useful in many models; e.g. the sub-models of a leg, ear, or eye.

      Models are an emergent thing that arises from neurons that are mechanistic pattern matchers, and themselves arise from simple biological sensors.

      The first "symbols" would be the electrical impulses from such sensors: a chemical gradient is not processed directly, it causes an electrical signal on a nerve, and that "token" triggers a response by the organism, and if that increases the probability of survival or reproduction, this relay is preserved by evolution.

      Symbols and aircraft obviously depend on quantum effects, and arise out of physics, but are emergent phenomena of evolution, like most stuff, including life itself, they are a consequence of the organization of large quantities of matter. Such machines (biological or otherwise) are not in the realm of application for quantum physics.

    29. How do I know that other people are mentally conscious? I don't know. Though to assume the opposite and that I am the only conscious entity is solipsism and not a terribly helpful way of going through life.

      A quantum bit can be a representation for another quantum bit. This is what happens in measurement. Entanglements have a feature similar to this as well, where measurements are entanglements that are amplified up to the macroscopic scale. Further, with a quantum computer a program can be a model for something, and quantum bits serve as the elementary symbols.

    30. Sometimes to understand how a complex thing works, it helps to work with simplified models of it.

      That has been the experience I have had in working with computers, from the CDC mainframe at Michigan State, to the GE 600 mainframe and GE-400 process-control computers at General Electric, to the Apple II microcomputer, to Unix workstations with RISC processors, and so on; to the point where all the neurological studies I have read of, by Damasio and Sacks and others, make sense to me as logical extensions of the way digital computers work.

      Summary: it is my understanding of how computers work which gives me some intuitions as to how brains can work.

      Practical application thereof: I wanted to learn the capability of singing a melody while playing in harmony or counter-melody on a guitar. It occurred to me that practicing in a dark room with my eyes closed might free up some neuronal activity (from visual processing) to speed up the neuron-training process. So I did, and it works. Watch closely and you may see some players close their eyes during difficult passages at musical performances.

      (Slight quibble with Dr. Castaldo on the ability to retrain visual-neurons. There have been experiments which show it is possible, e.g., blindfold a grad student for months and they will learn to process Braille with their fingertips much more quickly and parts of their visual cortex will light up in MRI scans as they do. However, I agree the complex image-processing work done by the visual cortex is too huge a task to be re-trained with a different set of neurons, which was the main point.)

      Another example of the explanatory power: as alluded to, the visual cortex can be identified as a distinct, contiguous region of the brain, by MRI scans during visual tasks such as reading. Just as one allocates a contiguous block of memory in a computer for a particular task for maximum efficiency, so has the brain allocated a block of neurons. And just as that block of memory might be rewritten and used by another application, neurons can be retrained. However, a computer that loses a large amount of memory and/or processors cannot perform all the same functions it used to, nor can a brain which loses a large number of neurons.

      I have yet to hear any explanation of how brains work from those who claim it is obviously different than the way computers work, other than a vague claim that it is some sort of magic not based on known physics. Until such time as an alternate mechanism is explained and confirmed by experiment, I will stick to the explanation that has worked well for me. I am sorry to hear that this approach, which is as well-reasoned as I am capable of, and has lots of empirical backing as I have cited, qualifies me to be considered a crank. A crank is usually considered to be one who obsessively sticks to a personal conviction without any experimental evidence, dismissing all contrary evidence, and raises the issue repeatedly in forums in which other matters are under discussion. I will try to restrain such behavior if I find myself doing so, and apologize if that seems to others to be a fair description of my past behavior. (Note however that such behavior is often a sign of some mental dysfunction and therefore cannot always be recognized or restrained by its originator.)

    31. Lawrence,

      Binary digits and qubits don’t really exist. Binary digits and qubits only exist as concepts in the human mind, and these concepts can be implemented by various materials which have special suitable properties. So called “binary digits” and “qubits” are just materials with special properties that can implement the binary digit and qubit concepts.

      A qubit is not “an elementary form of a symbol”. Human beings started to use external physical symbols, to represent and communicate what was going on in their minds, way back in history. These external physical symbols are: 1) special patterns in the sound waves they made; and 2) special patterns in the markings they made on clay tablets, which can be interpreted by the viewer of the markings when they analyse the patterns in the light waves coming from the clay tablets.

      So called “binary digits” and “qubits” are advanced extensions of the use of external physical symbols by human beings to represent and communicate what is going on in their brains/ minds. But instead of just patterns in the sound waves, and symbols on clay tablets, paper, or screens, “binary digit” systems and “qubit” systems can be made to perform useful work for human beings.

      Re the idea that computers/ AIs could be conscious: Conscious of what exactly? As I have previously explained, computers can’t know that symbols are literally hidden in the sets and patterns of the computer’s high and low voltages, and in any case, these symbols (patterns) are man-made, entirely arbitrary, things which can’t be interpreted by the computer.

    32. JimV: My PhD is in computer science. Your analogy is only correct to a limited extent.

      The brain is not so much a computer system, but more analogous to the circuitry of a processor. All the neurons are physically hard-wired together. The reason learning takes us so long, for example, is that it is in large part an actual physical process of growing neurons, rewiring the brain to do something new.

      Long term memory cannot simply be moved around, memory causes permanent physical changes in cells, more like burning a one-time-use ROM.

      Some of this wiring is hard-coded into our DNA, the visual cortex for vision is one of those. It is an extremely ordered system with a 3D matrix of millions of identical elements. So much so that it has been simulated pretty exactly on computers, and works, it has been used in research on how human vision works and principles discovered there extended into image processing, though I have not kept up with that research.

      That neural order only occurs by growth development in the womb; and if it is destroyed, there is no way for other neurons to take it over, or to grow it again. When it is gone, it is gone. The reason is that it is not "learned", so other neurons cannot learn it.

      Similarly true for the audio cortex, or the hippocampus, the structure that manages memory. Also true for Broca's area, necessary for formulating sentences, or Wernicke's area, necessary for understanding language.

      Some parts of the brain are quite malleable, can be rebuilt, new things can be learned, new skills like typing, playing an instrument, reading or driving can become "muscle memory", if the demand to perform occurs frequently over an extended period of time.

      But "muscle memory" just means the brain dedicated neural circuitry to encode and streamline the necessary behavior, saving the energy of thinking about what one is doing. It is hard-wired and a biological growth process; it is not like code.

      We have many thousands of such hard-wired learning structures, beginning back when we were learning to walk. Riding a bike. Any learned skill you no longer have to think about doing (or indeed, thinking about doing it gets in the way of doing it.)

      If a tumor, stroke or blow to the head kills those dedicated neurons, the facility a person had with that skill will be lost. Eventually it may be re-learned, as a new skill. That said, the dedicated parts of the brain encoded in DNA and built in the womb will not regrow if destroyed. The environment for creating them is lost.

      The brain is not a computer in the sense of running some code stored in memory. It is circuitry, more like the processor; or like computer memory, which is obviously not a processor, just a collection of circuits that responds to stimuli (address, read, write, and data signals on its pins) as those change.

      Inside the circuitry of the processor, if the ALU is damaged, it won't work. There is no code to fix, nothing else in the processor can be re-purposed or retrained to compare two registers. The instruction pipeline architecture, the floating point units, the various levels and types of caches are like that too. If the circuitry gets damaged by gamma rays, that's it. Game over.

      You cannot carry the analogy too far. The brain is NOT like a computer system running code. We can simulate that, obviously, following a recipe to bake a cake or solve a differential equation. We have a small short term memory roughly analogous to RAM, but the "code" of our brain is in hard-wired dendrites (input ports), axon (output channel) and axon terminals (parallel output ports of the same signal) of the neurons. These grow and weave together, and unlike RAM, that physical organization of millions of parts is not easily copied anywhere else in the brain. It can't be "downloaded." Re-learning a function may regrow a similar structure, but not identical: It is influenced by different learning experiences than the original.

    33. I would say from the Shannon formula, Landauer's principle and the equivalency of qubits with quantum states that you are wrong about this. Information is not a purely subjective projection from the human mind.
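
      For reference, the two formulas being leaned on here are, I believe, Shannon's entropy and Landauer's bound (written in LaTeX), which treat information as a physical quantity rather than a purely subjective projection:

      H(X) = -\sum_i p_i \log_2 p_i , \qquad E_{\text{erase one bit}} \ge k_B T \ln 2 .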

      Can AI be really conscious? My answer is a resounding, "I don't know." I really see nothing fundamental that obstructs the possibility. On the other hand, since we really have no firm definition of just what consciousness is, I see no road map to making any computing or AI system conscious.

    34. Lawrence: I guess my point in that question was that you DO know other people are conscious: by their actions and communications you can tell that, if they are remotely like you, they must be thinking and responding in ways that demand conscious attention.

      Even for unemotional topics like mathematics or solving a difficult equation. You know they are thinking.

      I suppose I am suggesting something similar to a Turing Test: when you know the system is not rigged to deceive others into believing it is conscious, and the only explanation you have for its behavior, communications and ideas is that it is conscious and thinking, then it, like other people, is conscious.

      Like all science, it would have to be something experts can examine in every detail, and something that demands clear explanation they can follow.

      But barring subterfuge by examination, we recognize consciousness when there is no other plausible explanation. I know my dog is conscious, because if I tell her to find her ball she will try for a few minutes, be happy if she finds it, but if she can't she gives up. If she finds it in an inaccessible place, like under the couch, she comes to low-growl at me (her self-devised signal for me to follow) and then points to where the ball is. So I can retrieve it, which again makes her happy, it means we can play. I did not intentionally teach her anything except the word "ball" and the command "get it." IMO conscious thinking is the only plausible explanation of her behavior. Ditto for corvids that can solve puzzles requiring 7 distinct steps be done in a specific order to get a treat. Unlike a computer they are given no instructions or description of the puzzle, no new training or code is given, the puzzle is placed in their cage and they set about examining it and figuring out what to do first and presumably why it must be done first. I have no plausible explanation other than conscious problem solving.

    35. Knowing that other people are conscious is a projection. I can say I know another person, whether somebody in my family or anyone I pass on the sidewalk, because I see them on the exterior as much the same as I am, and I can imagine, in a sense, being them. I have 3 dogs and I agree they have a domain of mental awareness, though I do not think it is as extensive as the human conscious mental landscape. Since animals are biological, I can well enough imagine them as having a beingness.

      Machines are a bit different. Certainly my car is not a conscious entity. I doubt any of my computers, whether this little laptop I am working on here or my much more powerful workstation, have consciousness. Then scale up to a supercomputer and so forth, and from what I can see there is not much reason to think they have consciousness.

      The AI community, with neural networks and so-called deep learning, has shown new flexibility in computing machinery. Artificial neural networks were a hot topic in the late 1980s into the 90s, and then the subject receded in popularity. The learning these networks supposedly took in often turned out to be other than wanted. The methodology has now emerged again. If such a system is large enough, and if there is a massive amount of input data, say it is a highly open system, it will indeed become difficult to determine whether this sort of system is or is not conscious.

      It also has to be mentioned there is a trend in AI to make these systems more cleverly emulate a human. This is found most with automated voices on telephones, which in recent years have reached a level of sophistication that can fool one. On a couple of occasions I have had a sort of gestalt flip where I realize the voice on the other end of a phone is indeed a cyber. When I have that shift of awareness, does that in some way serve to tell me this system is not at all conscious? If future systems of that sort are so sophisticated, but still just more of what currently exists, that nobody can determine if they are cybers, does that make them conscious? Does a Turing test really serve to tell us if something is conscious?

      These are difficult questions to answer clearly. A part of the problem is we do not have a definition of consciousness or any scientific theory that says, IF A, B, C, ..., then X is conscious. The flip side of this is of course there is no clear scientific reason to think there is some obstruction with silicon, digital systems or electronic networks that prevents consciousness.

      Our sense of other beings being conscious is a subjective judgment. We have no clear objective basis to say anything outside of ourselves is conscious. Most of us just default to the assumption that other people, and some extend this to animals, are conscious or have some domain of self-awareness.

      I will also say that I think this problem has some connection to questions such as whether P = NP and whether consciousness is some oracle system that is deterministic or non-deterministic. We really have a huge frontier of ignorance here.

    36. Lawrence,

      Re “the equivalency of qubits with quantum states”:

      You’ve made a very basic mistake there. Quantum states (in a quantum computer) might be said to genuinely exist, but the “qubit” is a man-made idea which only exists in the brain/mind, and this idea is not the same as the actual quantum state (in a quantum computer).

      Also, the symbolic representation (written or spoken words and other symbols, including symbolic representations using voltages within computers) of the qubit idea that exists in the brain/mind is not the same as the qubit idea that exists in the brain/mind.

      Also, the symbolic representation (written or spoken words and other symbols, including symbolic representations using voltages within computers) of the qubit idea is not the same as the actual quantum state (in a quantum computer).

    37. Lawrence: I work on those systems; including Summit, the fastest supercomputer in the world, housed in Oak Ridge.

      Neural nets are still the AI of choice; they have just been modified with a new approach called "deep learning", which is more of a staged training protocol, not a change in the basic theory. As a math guy, you should recognize neural nets as pretty much just a clever form of gradient descent. In and of itself I would not call this any level of "consciousness"; linear algebra is not conscious. Including in humans.
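
      A toy sketch of what I mean by "a clever form of gradient descent" (a single sigmoid neuron learning the AND function from four examples; this is a generic illustration, nothing like AlphaGo's actual code):

      # Hypothetical sketch: neural-net "learning" is descent on a loss surface.
      import math

      data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
      w1, w2, b = 0.0, 0.0, 0.0       # parameters to be learned
      rate = 1.0                      # learning rate, chosen arbitrarily for the toy

      def sigmoid(z):
          return 1.0 / (1.0 + math.exp(-z))

      for step in range(5000):
          for (x1, x2), target in data:
              out = sigmoid(w1 * x1 + w2 * x2 + b)
              grad = (out - target) * out * (1 - out)   # chain rule through the squared error and sigmoid
              w1 -= rate * grad * x1                    # step downhill on the loss
              w2 -= rate * grad * x2
              b -= rate * grad

      print([round(sigmoid(w1 * x1 + w2 * x2 + b), 2) for (x1, x2), _ in data])
      # The outputs now approximate 0, 0, 0, 1 -- a fitted function, not understanding.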

      To illustrate the difference, one recently successful implementation of deep learning is AlphaZero Go, the current world champion of the game of Go. But AZG trained by playing 29 million games, with an average of 150 moves per game, that's over 4 billion plays.

      A typical human grand master, with consciousness, plays a few thousand games, and unlike AZG can develop principles and rules from a single example.

      AZG is not conscious, it is building a huge statistical model and playing margins.

      That illustrates the role of Consciousness; it manages our attention so we can build generalized mental models of how things work with a minimum of examples or information. I doubt it took you many examples of the rules to learn to differentiate simple algebraic equations in Calc I.

      I agree there is a spectrum of consciousness; but the basics are the same in mice, dogs and humans. It focuses their attention and lets them build, refine and consult mental models (which may only be roughly correct) with far less information than just consulting tables to classify a situation and then the statistical outcomes of thousands of previously experienced examples.

      There is a lot of evolutionary pressure on animals to learn their life lessons with as few examples as possible; that is pressure to generalize from even single examples. I doubt you needed a lot of examples to understand the problem with, for example, the fallacy of a false dichotomy.

      Consciousness is focus; and we can usually tell the difference between focused action (including in conversation) and automated action. The difference can be subtle, but like your phone conversation, if you momentarily stray off script (interrupt with "Just a second, my dog is going crazy") a human will adapt, where a computer will not. Your "gestalt flip" likely came at a moment when the voice on the phone failed to meet your expectations of a conscious human focused on the conversation. Thus the "that can't be a human" realization, from just one or a few accumulated examples in the course of the conversation.

      I know my dog is conscious in the same way; through her behavior and ability to focus and learn with very few examples. We can see it in mice and some birds, we can see it in dolphins and elephants and octopi.

      There are some seemingly robotic insects and sea creatures, but due to evolutionary pressure I'd expect non-conscious instinct to serve only so far. Consciousness, in some degree, is the superior adaptive approach to survival and reproduction.

      Whether focus, learning and understanding of what is going on is actually present is revealed through behavior and interaction.

      Our current implementations of deep learning and neural nets are not organized in a way that provides that or leads to consciousness. They are very good at fitting empirical solutions to problems, given enough examples. So are genetic algorithms. But neither of them are conscious; neither of them can extrapolate from a single example or a handful of examples to a general rule.

    38. Lawrence,

      Re Lawrence Crowell 1:23 PM, July 01, 2020:

      It is not necessary to define consciousness in order to decide whether or not computers/ AIs are conscious. One merely has to ask: “what is the information content of this consciousness?”

      Seemingly, there are only 3 possible types of information that can be input to the sensory systems of living things. These 3 types of information are the source of the conscious information that living things experience: 1) categories; 2) numbers; and 3) patterns in the numbers. What I mean is that there are categories of variable which seemingly only exist as lawful relationships (like the frequency of light, pressure, or relative mass or velocity); there are numbers that apply to these variables; and then there can be special patterns in the numbers for some categories of variable (clearly, these patterns can only be perceived by analysing the numbers, or sets of numbers).

      The universe runs on the categories and numbers, but is there anything in the universe that can perceive patterns in the numbers? Clearly, living things need to perceive patterns in the numbers in order to distinguish features (like food or danger) in their surrounding environment. But I’m saying that computers/ AIs: 1) can’t know that arbitrary man-made symbols/ patterns are hidden within sets of their own high and low voltages; and 2) these arbitrary man-made symbols/ patterns couldn’t be interpreted by the computer anyway. I.e. the only possible information content of any computer/ AI consciousness is category and number information for the voltage category: the computer/ AI can’t see or interpret the patterns in its own voltage numbers.

    39. Dr. Castaldo, thanks for the reply and for all the detailed and interesting information provided in it.

      I am not sure how to resolve the total non-plasticity of the visual cortex with the MRI scans showing activity there in the blindfolded grad student (activity which ceased during Braille reading some time after the blindfold was removed, along with a decrease in her Braille-reading ability). Again, I am not suggesting the visual cortex can be replaced by other neurons, but that the visual cortex neurons and/or synapses might be repurposed to some degree with need and practice; but I must defer to your expertise.

      I agree one can take a simple analogy too far. The main point, which I hope we agree on, is that the brain is a nanotech biological machine working in accordance with known physics, and what a biological machine can do can in principle be emulated by a sufficiently complex electronic machine, albeit one of such complexity that we may never have the resources or motivation to build it.

      That is, I feel the success of computer systems and programs at such tasks as language translation, medical diagnoses, Go, and Jeopardy establishes to my satisfaction the "no magic or unknown physics necessary" principle.

      What about the general ability of the average human to respond to new, untrained situations? Overrated, in my view as a mechanical design engineer. Trial and error with as much parallelism (independent triers) as possible is what we typically do in such situations. As I have mentioned, it took the human species over 100,000 years to leave any archeological traces of the wheel and axle. And as I write, a lot of political leaders and average humans are responding poorly to the pandemic.

      Another point I have referred to in this regard is that feral children show few if any of these "natural" abilities, e.g., keeping track of language nuances.

      It is a formidable task to duplicate what billions of years of biological evolution have accomplished, working as it did in an extremely massively parallel way. The human species may not be capable of it. I don't mean to downplay the magnitude of the job, but just as in principle we could make another moon by building Project Orion spacecraft and jamming asteroids together, it seems physically possible to me to emulate brain functions electronically.

      As for a conscious computer entity, my guess is that if we build it, it will come (Field of Computer Dreams). That is, given complex sensory, manipulation, computation, and self-adaptation powers of sufficient magnitude (say something like 100 billion super-conducting parallel processors, as a start), a computer entity would be as aware of its surroundings and able to direct its processes as we are. It seems to me that it is the abilities which count, not subjective sensations, which would not be duplicated nor need to be. This last part (consciousness) has not been demonstrated and may never be due to resource limitations and lack of motivation, but follows from my "no magic involved" assumption.

    40. In quantum mechanics the distinction between the medium for knowing and the subject to be known is lost. QM is such that the ontology of the subject and epistemology about the subject are uncertain. With QM, physics is in the domain of Wittgenstein's dictum that language is primary. Qubits = quantum states, and the subject is the language.

      The one thing clear about consciousness is if you have N people talking about it, then N is the lower bound on the number of opinions. I am not saying a conscious machine is impossible, but at this time it is not clear how that can occur. It is also not clear how we can objectively know a machine is conscious, even if it is.

      Artificial Neural Networks (ANNs) were pioneered as a sort of model of brains or neurons. Most researchers refrained from saying this was about developing conscious systems. Interestingly, the mathematical foundations of this are similar to Lagrangian dynamics and Finsler geometry. I am not as knowledgeable on deep learning, but it seems to be a variation on, or something new compared to, the previous idea of back-propagation on an ANN. I would be surprised if this has currently produced a conscious entity. Will it be able to in the future? I have no idea.

    41. JimV: A blindfolded person that presumably can see (else a blindfold is unnecessary) has an intact visual cortex; activity there just suggests something well known: The brain is not just massively parallel, it is not organized as a DAG (for other readers, a Directed Acyclic Graph) in which signals travel in one direction. In a DAG there are no cycles so no feedback loops; no signal output can come back around and trigger its own input.

      In the brain and nervous system, cycles abound; to amplify small signals and pick signals from noise.

      The visual cortex actually gets stimulated by memory and imagination. Remember the face of your best friend laughing. Your memories will activate parts of your vision system and you will "see" it in your head. You are not actually seeing anything, just re-activating the same neural systems that were active when you DID see it. I am not surprised a blind-folded person performing an action is internally "seeing" things provided by memory or imagination.

      Consciousness is likely a feedback loop as well. There is a cycle in thinking, of “abstraction” and then concretizing, or particularizing. For example, "Oh, these conditions can be expressed as a differential equation". Particularization is "It's a partial differential equation," then "I can solve that with Green's Theorem", followed by interpreting the solution applied to the original problem.

      Basically we recognize a problem (or object, or situation, or action) fits a "general" model, and using that, develop a specific model we use to address the problem.

      Consciousness can just be the endless looping of this dynamic, something (perhaps sensory) reminds us of some model, that leads us to another, on and on forever, because the brain is filled with cycles. If we have a specific goal in mind, this wandering can take on direction and avoid cycling; coming to either a conclusion, a dead end, or a research agenda (say "I can't get Green's theorem to work, so what else looks promising?")

      All that to say, I don't think consciousness requires a boatload of processing power. It requires a lot of models to create a non-trivial (unpredictable) loop, but I think, in experiments, that mice display consciousness (and the researchers doing those experiments conclude the same).

      A mouse brain contains ~75 million neurons, about 110M cells if we count support cells (which we should). That is easily in the realm of building today.

      Using this model I have described, there is not necessarily any "seat" of consciousness, it is a weakly emergent property of the organization and interconnection of models (built out of neurons) which we also know are influenced by non-neuron support cells.

      All our current attempts at AI are result-oriented: we want to interpret sound waves as words. Although cycles like back-propagation occur in training, the resultant net is a DAG or has tightly defined fixed feedbacks. That cannot be conscious; by self-examination we know consciousness is a cycling phenomenon, of thoughts triggering other thoughts ad infinitum, until we are interrupted by sensory information, or just run out of energy and fall asleep.

      That is why current approaches won't work to make conscious machines, we don't build abstraction/concretizing cycles, we don't build thousands of interconnected mental models of things to use in that cycle. We build AI to do specific things, like model protein folding or speed up quantum chemistry models. The AI we build with any chance of consciousness are self-directed cars, probes, etc with feedback loops built in, but they aren't big enough.

      The hardware and expense are not the issue. We just don't have much of a blueprint for organizing the hardware. It's like saying "I know how a bridge works", but not well enough to draw you a blueprint and parts list.

    42. Dr. Castaldo, thanks for the additional reply and additional information, which has broadened my horizons on the subject.

      You raise the possibility of extraneous use of the visual cortex in the blindfold example, which should be considered. However, that activity coincided with a measured increase in Braille-reading speed. Also, after the blindfold was removed and the same reading trials were continued, the activity gradually decreased and disappeared, also coinciding with a measurable decrease in Braille-reading speed (back to its original capability before the blindfold). (I saw this on an episode of NOVA on PBS in the 1990's.) (The test subject said that her fingers felt more sensitive to the Braille dots during the blindfolded sessions, for what that is worth.)

      Probably in that same episode, a similar experiment involved applying a fitted set of lensed goggles to a test volunteer. As you know, the lens system of the eye focuses an image on the retina upside-down, and the visual cortex corrects for this. The additional lenses of the goggles caused the image on the retina to be right side up. After a month or two of wearing the goggles continuously, the subject began to see things the way she used to (right side up, having seen them upside-down with the goggles at first). Then, when the goggles were finally removed, for a while she saw everything upside down, but again recovered normal vision.

      Final example: a few years ago I had double-vision due to pressure in my right maxillary sinus cavity. I first noticed it when I saw a double-decker motorcycle go by (!). It became more and more apparent, making tennis-playing impossible. I had sinus surgery which relieved the pressure, but the right orbit was still distorted, and a follow-up surgery on it was contemplated. However, the eye surgeon recommended I wait a few months to see whether my brain adapted. Which it did.

      These examples had me convinced that the visual cortex is capable of some re-training, but perhaps you have better explanations. (No need to reply, of course, I am sure you are busy with better things to do--I'm retired.)

    43. Lawrence,

      “Qubit” is a concept which can be physically implemented in various ways. The physical implementation of the “qubit” concept requires a specially prepared quantum state in a special setup. So clearly, a “qubit” is not an elementary fundamental thing that exists in the world: what is called a “qubit” is actually a consequence of quantum mechanics, the preparation, and the setup.

      A “qubit” is not “an elementary form of a symbol” [1]. Symbols are things human beings use for their own purposes. The most elementary symbols human beings ever used were special primitive sounds, and special primitive markings on clay tablets. These types of symbols are only noticed by brains/ minds that can recognise symbols and interpret the symbols: this is so routine that the people who possess the brains/ minds are almost always oblivious to the fact that they are using symbols. A “qubit” is not a symbol unless human beings decide to use “qubits” as symbols. I think this is probably the way to look at “qubits”: There is a known mathematics which can be used to represent/ symbolise quantum mechanical behaviours, and so in turn, we can use quantum mechanical “qubit” outcomes to represent/ symbolise the answers to some mathematical problems.

      Anyone who doesn’t understand what symbols are will never understand how computers are made to work, and they might make the mistake of thinking that computers could be conscious. Computers/ AIs can NEVER be conscious because internally they are symbols from beginning to end, start to finish, go to whoa.

      1. Lawrence Crowell 8:42 AM, June 30, 2020

    44. JimV: The brain is capable of retraining; the "visual cortex" does not mean the entire visual system; it means a very small part of the brain.

      Google "Vision processing information BrainFacts" and take the link with the guy looking at a cup of coffee. From that article:

      "Visual information from the retina is relayed through the lateral geniculate nucleus of the thalamus to the primary visual cortex — a thin sheet of tissue (less than one-tenth of an inch thick), a bit larger than a half-dollar, which is located in the occipital lobe in the back of the brain.

      The primary visual cortex is densely packed with cells in many layers, just as the retina is. In its middle layer, which receives messages from the lateral geniculate nucleus, scientists have found responses similar to those seen in the retina and in lateral geniculate cells. Cells above and below this layer respond differently. They prefer stimuli in the shape of bars or edges and those at a particular angle (orientation). Further studies have shown that different cells prefer edges at different angles or edges moving in a particular direction."


      The first stages of vision in the primary visual cortex are well ordered and automatic; they detect edges, color, contrast, horizontal, vertical and 45-degree angled lines, etc. These "components" of vision have nothing to do with up/down/sideways vision; this is not the layer for that, and these neurons cannot be retrained. But of course they provide the raw material of vision; if these layers are destroyed (and this has been intentionally done in animal experiments) then vision is lost, permanently. The rest of the vision system has nothing to work with.

      The "micro" patterns recognized by the fixed-purpose neurons are then themselves processed for relationships to recognize larger patterns; e.g. arcs or circles, edges, fields of color, and that process repeats and repeats, finding patterns of patterns until we connect them to millions of "models" of objects.

      There are feedback loops and trainable parts throughout the vision system. Even quite early in those first layers, we have the mechanisms for shifting the gaze in "saccades", one of the fastest movements the body makes, shifting the eyes to focus the fovea on different parts of a scene several times per second.

      But to your point, yes, later parts of the vision system may be retrained, as most of the brain can be. The reason the upside-down image takes weeks or months to "flip", both originally and then to flip back after removal of the glasses (or in your case pressure), is that something has to grow to change the function; new synapses have to be formed. And for the flip back, dissolved or disabled. That's why it takes so long.

      It's also why it takes so long for us to learn to ride a bike, read, learn to type or play a musical instrument without halting or thinking. Learning is not entirely a mental process, but a physical growth process, like growing vines.

    45. Dr. Castaldo, thanks to your efforts I can see my mistake now. I was using the term "visual cortex" incorrectly. When I said that in the first experiment the "visual cortex lit up in MRI scans" I should have said something like "a part of the brain in the occipital region, associated with vision" which is closer to what was actually said in the NOVA episode. I am used to inferring the meaning of new terms from context, which is a form of trial and error, and often results in error, at least on my part.

      With that distinction clearer, I see why you rightly objected to the idea of retraining the visual cortex. In summary, you of course were correct and I was wrong. Thanks again for the clarifications.

      As an aside, I have long thought that elementary programming should be a required subject in schools, at as early a level as possible, as it basically consists of applied logic. I now think that brain mechanics such as you have described would also be a good thing for students to learn, in particular the concept that learning skills takes time because of the synapse growth required. The better one understands how a machine operates, the better performance one can get from it, with the least frustration, I think.

    46. The human being is a primate; it comes from this group and inherits from it a primitive language of sound symbols that is extremely emotional. Socialization and cooperation, the functions that developed our consciousness, are also very emotional, although this type of emotionality now seems attenuated. All subsequent symbology has been created on that basis. It is foolish to think that something like symbolism, which is the final product of thought, is at the same time our consciousness, considering that we do not need any scientific or symbolic apparatus to hunt a gazelle with an arrow. Skipping all of evolution to say that we are a computer does not seem realistic.

    47. Luis: Intelligence and consciousness do not require sight, hearing, or the ability to utter sound; nor do they require socialization or cooperation, although those do seem to promote them.

      All it requires is varied experience.

      Symbolism is NOT "the final product of thought". Symbolism begins with the most primitive of neural structures; an electrochemical signal on a nerve is symbolic of an event or experience. Heat, cold, light, dark, touch, pressure, movement (eg. of hairs), sound, whatever.

      The brain doesn't feel wind, it receives messages from nerve endings, symbols, that together form a pattern that is interpreted as consistent with the neural model of sensations labeled "wind".

      Nobody is skipping all of evolution to say that we are a computer. We certainly are not, there is no computer on Earth that is as thoroughly parallel, as massively interconnected, or as relentlessly illogical as a human mind.

      Evolution, in the neuron, has stumbled upon a cell that is roughly computational, but not anywhere close to being as reliable as a transistor-built logic gate. It still consumes energy, requires resources and produces waste. It can fire when it shouldn't, and fail to fire when it usually would. It can tire (run out of energy). It can be affected by its chemical and physical environment much more readily than transistors are.

      But it is roughly computational: it takes inputs via synapses, it recognizes patterns, and it sends signals via the axon to the inputs of other neurons. Or vice versa: many neurons are simple "expectation" devices; they only fire when the inputs CHANGE from their expectation.

      (In a way, for both cases, the most basic emotion is "surprise", something is different.)

      You do need symbols to hunt a gazelle with an arrow, or any other weapon. You need a symbolic understanding of how to create the weapon, how to use it, and of gazelles and their behavior and their vulnerabilities, even a symbolic understanding of what the gazelle represents: Food and raw materials for making weapons and leather.

      Symbolism is not the final product of thought except in a nominal sense, it pervades thought, from start to finish. The true final product of thought is taking a specific purposeful action in an attempt to achieve a desired outcome.

    48. Castaldo,

      Once again, you show that you don’t know what symbols are. The power and the essence of a symbol is that it doesn’t exist in lawful relationship to anything: the written or spoken word “apple” does not exist in lawful relationship to anything. The word “apple” can only be understood by human beings who have pre-existing knowledge of how to interpret patterns in the light and/or sound waves. A blackbird can’t interpret the written or spoken symbols “apple”, but it can identify an actual apple presumably because an actual apple is potential food.

      What is happening inside the brain/mind is not symbols, because what is happening inside the brain/mind is of necessity constructed out of lawful and/or logical relationships. The power and the essence of a symbol is that it doesn’t exist in lawful relationship to anything.

  20. A great explanation. I would add that the foundation of a science is an external, objective reality that exists independent of the observer. That reality is universally experienced by all sentient creatures. It assumes that our view of reality is quantifiable, testable and falsifiable. As hypotheses are tested, hidden variables are revealed and resolved. As it resolves paradox it defines an ever deeper truth of reality. It is separated from scientism by its willingness to be falsified.

    Science has exemplified itself in explaining the how of the world. Its power comes from its alignment with technology. It does not matter to the average person whether disease is caused by a demon that is chased away by holy water or an elixir made of antibiotics that kills pathogens. Technology can exist without science. Transforming ore into metal was well known to many societies. Transforming iron into a primitive form of steel was also discovered by driving a hot iron sword into a human being. In some cases the explanation was that the iron absorbed the life force of the individual.

    Science is a human invention. It is subject to the economic principle of cost/benefit analysis. As the low-hanging fruit of science has been harvested, the cost of evaluation of reality has gone up geometrically.

    As science approaches the asymptotic limit of knowledge it comes up against a barrier called information. Information is quantifiable, valuable but has neither mass nor energy. It is real. It is invisible. It is the foundation of reality. But how information manifests itself in the physical world is a conundrum.

    And it leads us to a place that science lacks the tools to go: the why of everything.


  21. There is a very subtle form of conspiracy that does not need meetings and agreements; it works through ideological affinity. A typical case: the mass media promote the narrative that serves their own objectives and suppress the facts or narratives that do not, and other media replicate the news out of ideological bias; yet there was no conspiracy.

    Replies
    1. Luis, that's an interesting comment which I think is somewhat orthogonal to this discussion. I believe that you are writing of the various cognitive biases we are all prone to in evaluating evidence. The one to which you seem to specifically refer might be a bias supporting confirming evidence and dismissing its opposite. But I am not certain how this can make a theory more or less "scientific."

    2. Thanks, Owen, for commenting; you are right, it is not tied to any cognitive bias. Rather, it is an unconscious way of conspiring, a kind of involuntary militancy, an action driven by ideological identity, and it is indeed necessary to take it into account when the study of physical reality intersects with politics.

  22. To the contrary. If Darwin were right, you would be able to reply with your "fitness function" and you would be able to forecast "natural selection". But you can't because there is no such thing as "fitness" or "natural selection". On the other hand, Paley was right: watch-watchmaker, universe-universe maker. And Occam is satisfied.

    Guess why "it should be possible to quantify this fit (Darwin's) to data, but arguably no one has done that"? Precisely because "evolution" is not scientific. Also, guess what? You can quantify Mendel's observations. Because unlike Darwin, Mendel was doing real science. That's why. Science IS quantifiable.

    Incorrect: "Darwinian evolution is a good scientific theory because it “connects the dots” basically by telling you how certain organisms evolved from each other." Telling a story is myth, not science. Unless of course said story can be observed which is definitely not the case for Darwin's story.

    Replies
    1. "If Darwin were right, you would be able to reply with your "fitness function" and you would be able to forecast "natural selection". But you can't because there is no such thing as "fitness" or "natural selection".

      You are very confused about how Darwinian evolution works. What counts as "fitness" and what "natural selection" means is defined by the environment. You don't postulate it.


      "Telling a story is myth, not science. Unless of course said story can be observed which is definitely not the case for Darwin's story."

      That's wrong too. For starters, I suggest you watch that, and then discontinue making ill-informed comments about topics you know very little about, thank you.

    2. Of course "fitness" depends on the environment and for that precise reason it is not scientific. It is not observable, not measurable, not useful to forecasting, therefore NOT science. It is just a measure of our ignorance like black matter and black energy. Think!

      Antibiotic resistance is just a built-in feature of ALL organisms, you and me included. It goes away when the stimulus is removed (hence only prevalent in hospital-like environments and weak organisms - thank God) and NEVER EVER leads to any new organisms. Like antibiotic resistance, Darwin's Finches REVERT when the stimulus is removed, and so does the Peppered Moth, the LTEE E. coli, the feral animals and all else. Experimentally, nothing, and I mean NOTHING EVER "evolves". Hence, not science. Think!

      If you can't observe and can't forecast, it is not science. Explaining is not sufficient. Forecasting is the key. Astrologers, phrenologists, tarot readers, etc, all "explain" but none forecasts with any accuracy. That is the difference between science and pseudoscience. And as such, "evolution" falls squarely in the pseudoscience realm. Think.

  23. Your comments on Darwinian evolution are curious. Mankind had been breeding dogs, sheep, cows, etc. for millennia. The notion that favored traits could be selected and passed on to offspring was around since before written history. And that random mutations sprang up in nature was written about by Josephus 1800 years before Darwin. (There was a debate about whether a certain mutation was caused by nature or by god, as it was well established at the time that such mutations could be caused by nature.)

    So why was "Darwinian Evolution" so popular? Charles Darwin's father wrote the exact same theories, so the son was not really saying anything new. And ultimately, the entire "theory" can be summed up in that the species that exist today are the ones that have survived through history -- a simple tautology. Perhaps it was the observation of all those finches?

    But more likely there are social reasons for his popularity. The popularity of the "Origin of Species", published in 1859, can be found by reading the full title... "On the Origin of Species by Means of Natural Selection, or the Preservation of Favored Races in the Struggle for Life." Which "Favored Races" were going to be preserved in 1859? (Hint: the longest war in US Army history, lasting from 1790-1891 https://history.army.mil/html/reference/campaigns.html).

    And Darwin himself says as much in Descent of Man:
    “At some future period, not very distant as measured by centuries, the civilised races of man will almost certainly exterminate and replace throughout the world the savage races."

    So which part is science? And which pseudoscience?

    Curiously, Darwin used his theory of evolution to argue against vaccinations: "There is reason to believe that vaccination has preserved thousands, who from a weak constitution would formerly have succumbed to small-pox. Thus the weak members of civilised societies propagate their kind. No one who has attended to the breeding of domestic animals will doubt that this must be highly injurious to the race of man."


    Science? Pseudoscience? And if the only part of the theory that is "science" are the past observations of finches, shouldn't this really just be called history? What is the point in any science if it can't tell us anything about what will happen in the future with reasonable probability? Isn't that what historians do? Take a bunch of observations about past events and make a simplified model of what happened, with limited to no predictive value?

    Replies
    1. MTracy,

      As I have explained before, natural selection stops being a driving force of evolution if some species can significantly alter the environment to suit their needs. Mankind reached that point some thousand years ago. At this point you get a background-dependent system which is still adaptive, but is to a large extent driven by cultural norms that are much harder to predict and model than selection by a fairly stable natural environment.

      I don't see what you think the difficulty is. Darwin put forward a model to describe observations. He then opined on what humans should do based on that knowledge, which is neither science nor pseudoscience, it's a statement about his values. Which, well, you may not agree with. But that's beside the point.

    2. I'd like to point out that most of your argument is completely pointless. The theory itself is scientific or not. To judge that, it does not matter who said what when or if Darwin borrowed from his father or if there were first hints thousands of years before. It also does not matter who likes it or does not.

      However, I found some of your remarks quite interesting even if they are not relevant to the argument.

    3. Darwin had a great interest in orchids. There is a species of orchid with a long fluted neck. Darwin in his Origin of Species wrote there should exist an insect with a long proboscis, or a bird with a long beak, that takes nectar from this flower. In the 1990s, with the use of an IR camera, a moth was recorded feeding on this flower. In this sense we can say the theory of evolution has a predictive property. It also gives some explanation for both the orchid and the moth as co-adaptive species.

      I wrote earlier this morning on the nature of selection. It should appear today below. I will then not repeat that. Natural selection is how the environment a species faces selects for certain genotypes. Actually it more selects for phenotypes, but I will maybe get into that later. The selection pressure on the gene pool of one species quite often is from other species or gene pools, and this species is then facing some selection pressure as well. It is a huge and complex web.

      Darwin was a product of his time. Europe in the early to middle 19th century was highly imperial, with Great Britain at the top. The UK ruled over an empire populated by a wide range of cultures and appearances. Every culture tends to define beauty by its artistic depictions of itself. Also, the British Empire encompassed people who did things like head hunting. The quick reaction is one of repulsion, where the quick judgment is that these people are lesser.

      Question: Who started the African slave trade? Answer: Africans. In fact the African slave trade continues. The next question is who took control of it first? Answer: The Arabs or Muslims in North Africa and Andalusian Spain. It was after the Reconquista in 1492 that Spain got into the action, followed by England and France.

      That biological evolution got mixed up with biased concepts of humans against segments of humanity is not surprising. Also if people in group A are enslaving people in group B, then various ideological constructs for how this is the natural order of things, in previous times it was God's will, come about.

      This does not, though, negate the main idea of biological evolution. As unfortunate as some of Darwin's statements in his later The Descent of Man were, they do not destroy the basic concept of evolution.

  24. Shouldn't "show was better" be "show it was better"?

  25. There is a lot of confusion about natural selection. It might be best to just consider selection based on a fixed situation. Consider some gene that expresses some polypeptide. Then consider there is some single-point polymorphism, where a C becomes a T. The polypeptide will then have an amino acid residue difference. Now suppose in this toy model one form of this gene has a 2/3 probability of being passed on to the next generation, while the other only a 1/3 probability. This means with each generation the frequency of the first form of the gene will increase. I leave it to the reader to calculate this. Over time one form of the gene dominates or reaches fixation.
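
    To make the calculation concrete, here is a minimal sketch in Python of this toy model (the 2/3 and 1/3 transmission probabilities are treated as relative weights; the starting frequency is arbitrary):

    ```python
    # Toy haploid selection model: form A of the gene is passed on with weight 2/3,
    # form a with weight 1/3; p is the frequency of form A in the gene pool.
    p = 0.5                                          # start with both forms equally common
    for generation in range(1, 11):
        p = (2/3 * p) / (2/3 * p + 1/3 * (1 - p))    # renormalize after selection
        print(f"generation {generation}: frequency of favored form = {p:.3f}")
    # The frequency climbs toward 1 (fixation): 0.667, 0.800, 0.889, 0.941, ...
    ```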

    The polypeptide produced by this successful gene by some means performs better than the other. This is what selection means, and it can be done in a lab setting. Natural selection is provided by the environment the species confronts. How this works is far more complex. The selection filters can themselves change. These can be climate, chemical changes, other species and competition within a species for food or mates.

    It is said above that Paley is the simplest explanation. Beyond the fact creationism explains little, in an omphalitis setting the theory is equivalent to the result. Nothing is explained. With this we might as well just throw in the towel and say the answer to all questions is, "God did it." The middle ages, a gothic period considered by many as a dark age, had this dominant form of reasoning.

    Replies
    1. Lawrence: Dammit, you made me look up "omphalitis"!

      :-)

    2. I meant omphalism but it got autocorrected to omphalitis.

  26. Sabine, I just want to reply to your note in which you wrote: "This fits very well with the data, even if no one has yet quantified how well it fits. " as regards evolution. Actually, there have been attempts to model evolution statistically, in particular the Price equation, of which, you can learn more here. https://en.wikipedia.org/wiki/Price_equation It has not been without controversy.

    Replies
    1. Hello Owen,

      This is very interesting, thanks for pointing out. I hadn't heard of it (or maybe I did and forgot). Alas, I had in mind something much simpler that would be more directly data-based. A quantification basically of what we do visually when we say: This organism fits between these two. One can probably train an AI to do that kind of "tree discovery" and you can then quantify how far off this is from "no pattern" basically.

    2. Yes, Sabine, actually, what you're talking about is related to morphological analysis, employed by generations of museum researchers who specialized in alpha taxonomy. My mother was one such, a biologist at the Field Museum of Chicago, where for many years she wielded calipers in an attempt to puzzle out speciation and relatedness amidst New World primates, and other New World mammals. Much of that work has been superseded by genomic work. This has also shown that environmental pressures yielded similar results amongst species genetically quite diverse, causing them to be placed mistakenly much closer to each other in a cladistic analysis than should have been the case. So science marches on.

      Price’s work is quite interesting; there has also been plenty of “island ecology” math put into play since the 1950s by such brilliant theorists as Hutchinson, MacArthur and Pianka. Wonderful, that there are so many curiosities in the world, an endless pleasure for the inquiring mind!

    3. Sabine: I seem to recall in Mathematics (at A&M) a project being done for AI (with military applications, naturally) in "form matching", devising scale-independent methods to quantitatively measure how well a given shape in an image (or series of shapes in a video) matches a given shape. They did not want it dependent on image size, basically. Other approaches can add planar rotational independence. Perhaps that can be extended to 3-D rotational independence if we scan 3D models.

      It seems like that could apply to fossils easily; including ordering by similarity, e.g. A,B,C are similar, but the ordering ABC or CBA has a lower sum of similarity differentials (AB + BC) than orderings where A and C are adjacent (ACB, BAC, BCA, CAB).

      Thus B is 'between' A and C, as a transitional model.
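
      A minimal sketch of that ordering idea in Python, with made-up pairwise dissimilarity scores standing in for whatever shape-matching metric is actually used:

      ```python
      from itertools import permutations

      # Hypothetical dissimilarities between three fossil shapes A, B, C (smaller = more similar).
      d = {("A", "B"): 1.0, ("B", "C"): 1.2, ("A", "C"): 2.1}

      def dist(x, y):
          return d[(x, y)] if (x, y) in d else d[(y, x)]

      def path_cost(order):
          # sum of dissimilarities between adjacent items in the ordering
          return sum(dist(a, b) for a, b in zip(order, order[1:]))

      best = min(permutations("ABC"), key=path_cost)
      print(best, path_cost(best))   # ('A', 'B', 'C') 2.2 -- B sits "between" A and C
      ```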

    4. Sabine, you have said:
      “What scientists mean by “explanation” is that they have a model, which is a simplified description of the real world …”

      I think that we should ask more precisely: What is a model? Is just a description sufficient for this classification?

      Example, planetary motion: If we observe and say that planets move on a circle, is this already a model? Really? Or that they move on ellipses? That was the new insight of Kepler. But is it already a model? Should a model not explain WHY planets move on ellipses?

      This WHY was historically explained by Newton. Newton surely gave us a true explanation, yes. But is, in contrast, the mere identification of a shape like a circle or an ellipse already a model? Where is the demarcation between description and explanation?

      There is one criterion which could be used: The physics of Newton is the reductionist explanation for the ellipses. Should this not be a necessary condition for a model?

    5. Antooneo: As Sabine said, a model is a simplified description of the real world. I would add that it has to reduce variance (variability) in order to be considered a valid model; reducing variance is what we mean by "explaining" something in the real world.

      Yes, just description is sufficient. Darwin's notion of natural selection was a borrowing from the long extant idea of selective breeding of plants and animals. But in that case humans select the traits for which they wish to breed or enhance; Darwin's idea was that nature itself "selects" by virtue of the natural competition for resources (water, food, territory) and mates. He already knew from farming and animal husbandry about natural variability in animals; he was inspired (reportedly) by Malthus's prior observation regarding exponential reproduction that had to be curtailed based on the availability of resources, and put all this together in a cogent and compelling explanation of evolution. That is still a model, it explains a great deal.

      Yes, saying planets move on a circle is a model. It reduces variability in the measurement of planetary positions by huge amounts, compared to a claim they can appear anywhere they want, or sit at fixed points, or other such notions.

      But there would still be variability, planetary orbits are not 100% circular. Elliptical orbits eliminate most of that remaining variability.

      But not all of it. Orbits are still perturbed from their ellipses by other orbits and objects. Taking those into account again reduces variability, and we don't have a simple shape name to slap on the result; it is kind of a bumpy ellipse.
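
      A minimal numerical sketch of that progression, with made-up "observations" of a slightly elliptical orbit (numpy only; the numbers are illustrative, not historical data):

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      theta = np.linspace(0, 2 * np.pi, 200)
      a, b = 1.00, 0.95                      # semi-axes of the "true" orbit, arbitrary units
      r_obs = (a * b) / np.sqrt((b * np.cos(theta))**2 + (a * np.sin(theta))**2)
      r_obs += rng.normal(0, 0.002, theta.size)          # small measurement noise

      # Model 1: a circle -- one parameter, the mean radius.
      circle_resid = r_obs - r_obs.mean()

      # Model 2: an axis-aligned ellipse -- fit 1/r^2 = A cos^2(theta) + B sin^2(theta).
      M = np.column_stack([np.cos(theta)**2, np.sin(theta)**2])
      A, B = np.linalg.lstsq(M, 1.0 / r_obs**2, rcond=None)[0]
      ellipse_resid = r_obs - 1.0 / np.sqrt(A * np.cos(theta)**2 + B * np.sin(theta)**2)

      print("circle residual variance :", circle_resid.var())
      print("ellipse residual variance:", ellipse_resid.var())   # much smaller: the ellipse "explains" more
      ```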

      No, a model does NOT have to explain WHY. It just has to reduce the variability of measurements. WHY is certainly welcome and can lead us to new hypotheses to test, but it is not a necessity. For example, thinking of the Higgs as a "why", we proved the existence of it by noticing that it explains some bumps in statistical graphs that did not fit a model without any Higgs.

      The Higgs explained away some of the variance in the graphs as compared to our previous model of fundamental particles; just like ellipses explained away some of the variance in the graphs of planetary orbits as compared to the circular model.

      Models that do a very good job of explaining away variance can be clues to underlying laws.

      But there is no demarcation between the model and an "explanation"; Newton just came up with a mathematical model (gravity and the inverse-square law) that could be applied to compute orbits with greater precision. It is still a model, and like all models, an "explanation" of variance.

      But I will note, we STILL do not know what gravitation itself is. Einstein's theories are also models, but as "Dark Matter" demonstrates we still have significant unexplained variance when applying Einstein's theories. And (I gather) most physicists believe Einstein's reliance on continuous deformation is a flaw in the theory, gravity must be quantized, and that might explain more of the observed variance by doing away with the paradoxical infinities in the theory that result from using continuous deformation.

      Quantum theories are also incomplete models. As far as I know, we still do not know why or how wave function collapse occurs. Obviously we can trigger it with "measurement" but what exactly constitutes a "measurement" is pretty slippery and still open to debate.

      No, models should NOT be required to answer WHY. All they need to do is reduce variance. Darwin did not know of DNA and genes; all he knew (along with millions of people engaged in the raising of plants and animals) was that some changes spontaneously occurred and were heritable. The later discovery of DNA, genes and genetic mutation fit beautifully into Darwin's theory of evolution, as the "WHY" you seek, but it may well be that Darwin's theory contributed to people wanting to seek the underlying mechanisms.

      (Continued)

    6. Without the data from the "WHY-less" elliptical orbit model, Newton would have had nothing to work with to develop his model of gravitation.

      The only condition for a model is that it reduce variance, versus any previous model or no model at all.

      As Lawrence alluded to earlier; this is why the God model of the universe is empty: If God can do anything, then the model reduces no variance. It explains nothing. And moving the variance from nature to a hypothetical God doesn't count as "reduction."

      I think something similar is Dr. Hossenfelder's argument against a multiverse: Moving the variance into unobservable hypothetical places doesn't reduce it, thus a multiverse doesn't reduce observable variance, and explains nothing.

      A valid model must reduce observable variance. If it does not, it is not a valid model.

    7. @ Dr. A.M. Castaldo

      I present QT without any collapse of the wave function or measurement problem in my essay "The Mathematical Foundations of Quantum Mechanics" on my website.

    8. Prof. David Edwards: The collapse of the wave function is an observable event. I don't know if it is gradual on a time-scale we cannot measure, or instantaneous, but it is an observable event. That is why it is a mystery, it is not something demanded by the Schrödinger equation, which is time-reversible, but the wave function collapse is NOT time-reversible. Information is lost (the eigenstates carried by the Schrödinger equation), and that is irreversible.

      I'd be wary of any paper or position that claims an observable event does not exist. Since I am not a physicist, your essay will have to pass extensive peer review before I'd try to understand your position.

    9. Dr. A.M. Castaldo:

      I did not intend to discuss Darwin at this place. I have dropped my comment at a wrong place and I apologize for it.

      To the question of “model”: what is the threshold for it? Where is the demarcation to a mere “description”?

      If one has a collection of 5 measurement points, one may be able to fit a polynomial of 2nd degree through them. This will reduce the number of observable parameters, true. But it is a quite primitive algorithm which could also be automated. Compare it to Newton: he introduced a quantified notion of "inertia". That was a real step which took physics considerably forward. It produced real new understanding. And here I find the term "model" appropriate.

      So, my point again: to call an idea a “model”, it should have a certain degree of additional understanding. Best criterion: to open a reductionist understanding. Otherwise we may run into an inflation of good-sounding terms.

    10. @ Dr. A.M. Castaldo,

      1. The wave function is simply a convenient way to represent the state which is a probability measure on the quantum logic of questions of the situation. The wave function is not an observable!

      2. My essay did pass extensive peer review and was the lead article of a special edition of Synthese (the leading journal in the philosophy of science) on the foundations of quantum theory.

    11. antooneo: As I said in my post, the "demarcation" is a reduction in data variance.

      True, given 5 points in a 1:1 correspondence with the X-axis one can fit a line, parabola or cubic function to them, also an exponential function, a sinusoidal function, whatever you like! They are all models!

      How well they function as models depends on how much variance they reduce; ones that reduce more variance will be better models.

      There is another factor I failed to mention. A reduction in variance within the data can be represented as a reduction in the information content: The original data contained X amount of information, which will be mathematically related to its mean and variance; the model replaces the mean and the new variance is computed as deviations from the model; and its information content will be reduced; call that X', but we must add to that the information content of the model parameters themselves.

      So we might have a model: for 5 points a quartic function can fit them perfectly (using 5 parameters), but one is probably not "explaining" anything by replacing five data points with 5 parameters, unless several of them are zeros. For example, f(x)=x^4 requires less information to represent than the five values [1,16,81,256,625], and I'd call that a good model.
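
      A minimal sketch of that bookkeeping in Python with numpy (illustrative numbers only):

      ```python
      import numpy as np

      x = np.array([1, 2, 3, 4, 5])
      y = x.astype(float) ** 4                # the five values [1, 16, 81, 256, 625]

      for degree in range(1, 5):
          coeffs = np.polyfit(x, y, degree)               # degree + 1 free parameters
          resid_var = np.var(y - np.polyval(coeffs, x))   # leftover, "unexplained" variance
          print(f"degree {degree}: {degree + 1} parameters, residual variance {resid_var:.2f}")

      # The quartic (5 parameters) fits the 5 points exactly, but so does the one-parameter
      # description f(x) = x**4; the latter compresses the data, the former merely restates it.
      ```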

      All that is required to be a model is a reduction in variance; or, to be more precise at the expense of being less measurable and understandable, a reduction in the total information content of the model itself plus the residuals of the data it describes.

      You are correct only in the sense that, while modeling a fixed amount of data such as 5 points that will never change, this is the only criterion. But if the modeled points are only a sample of the entire population (or an effectively infinite population, like primes or stars or subatomic particles) then it becomes a matter of human judgment to decide whether a model makes physical or logical sense.

      If those five points we are given above are the ages of buildings in Europe, then a quartic model makes no sense for predicting ages of buildings (it would only make sense as the cherry-picking formula used by someone to pick the buildings to represent).

      It is up to humans to decide if the measurements we are given to model are rational choices TO model in the first place.

      But then, it is also difficult to argue with overwhelming evidence; such as the notion that at least locally, on Earth, in our solar system, in other solar systems, gravity follows Einstein's formulation, to the best resolution we can measure. We don't need any reason for that formulation (and Einstein's reasons may turn out to be wrong) but the model (for local phenomena) seems to work without exception, thus it surely has some relation to an underlying reality.

      One could not claim that for the five samples above, if they were the ages of five specially chosen buildings on a London street.

      This is also related to information content: One could not select just ANY buildings on a London street and find that quartic relationship; but one can select just about any two objects and measure their mass and the gravitational pull between them, and find the same square law.

      Cherry picking data is a way of manipulating and artificially increasing the information reducing power of a model; because then the model has hidden the extra information that represents the cherry picking process, which itself reduces the variance in the raw data; and presents the smoothed data, which better fits the hypothesis, as if it were raw.

  27. Is "biolab leaked virus" hypotesis unscientific?

  28. A few nights ago I watched a TV show featuring a video of an apparent Sasquatch (bigfoot, in the popular lexicon) hurling an elongated tree trunk, javelin style, to what appeared to be a considerable distance. An oil worker at a construction site in Alberta, Canada is the source of the video. It sort of has the look of being authentic; i.e., not a hoax, but one can never be sure. The pity, presuming it wasn’t hoaxed, is that no one bothered to go out to the landed tree trunk to examine it closely and determine its weight and length. The position of the dark subject in the video likely could also have been established by the pressed-down grass where it stood when it chucked the tree.

    Assuming a prank wasn’t being played on the oil workers, a golden opportunity was missed here to distinguish science from pseudoscience simply by establishing whether the overall dynamics of the event were outside human capacity. A Youtube video by Thinker Thunker, dated 19 January 2018, and 16.21 minutes long, provides a tentative analysis. He estimates the length of the log to be between 12 and 16 feet, with corresponding mean diameters of 4.5 and 6.5 inches. Based on data from a woodworking website that he cites, this translates to a weight of between 51 and 100+ pounds. For these parameters the median distance of the toss was either 38 or 50 feet.
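
    That weight estimate is easy to sanity-check by treating the log as a uniform cylinder; a minimal sketch in Python, with an assumed wood density (the species and moisture content are unknown, and the density assumption dominates the result, which is presumably why the cited range is so wide):

    ```python
    import math

    DENSITY_LB_PER_FT3 = 30.0    # assumed: roughly dry softwood; green wood would be considerably heavier

    def log_weight_lb(length_ft, mean_diameter_in, density=DENSITY_LB_PER_FT3):
        """Approximate the log as a uniform cylinder and return its weight in pounds."""
        radius_ft = (mean_diameter_in / 12.0) / 2.0
        volume_ft3 = math.pi * radius_ft**2 * length_ft
        return volume_ft3 * density

    print(round(log_weight_lb(12, 4.5)))   # ~40 lb for the low-end dimensions
    print(round(log_weight_lb(16, 6.5)))   # ~111 lb for the high-end dimensions
    ```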

    What’s interesting about this event is that it is consistent with other reports of alleged Sasquatch encounters, where objects like rocks and logs were thrown at witnesses as an apparent territorial reaction to human intrusion. There has been speculation by anthropologists, who don’t dismiss the Sasquatch reports out of hand, that relic hominoid species unknown to science could be extant in the remote forested regions of North America, and elsewhere in the world. Prompt data collection in the Alberta, Canada case, if it wasn’t just a plain hoax, would have provided further support for such a hypothesis, should the raw data have shown the event to be beyond human capability.

  29. Anomalies entailing small-magnitude, transitory acceleration fields, or weight changes of ‘targets’, in proximity to superconductors, are probably considered to lie in the realm of pseudoscience by most physicists. This may be so, but there should be no harm in turning over some more stones in the search for such phenomena, even by amateurs. With rain and thunderstorms forecast for a few days, making kayaking and cycling hazardous, I’m working on modifications to my old setup that sought to find such signals. I’ve sketched a design for a new horizontal cryostat that will incorporate fine adjustable spacing between a one-inch-diameter YBCO superconductor and a strong magnet. Lots of details need to be attended to, so I’ll be busy solving various mechanical issues.

  30. Dear Sabine,
    thank you for your very clear and neat explanations!
    Would you appreciate it if I translated the text of the video into German? Subtitles would be nice for many German users. I tried it on Youtube, but "community contributions" are turned off for this video.
    (Or, maybe there is a German version of the text already?)
    All the best, Sebastian

    Replies
    1. Sebastian,

      I just turned the community contributions on. You are more than welcome to do the translations. If you have any questions (or if I forget to approve them), please email me at hossi[at]fias.uni-frankfurt.de

      I have German subtitles on some of my earlier videos. However, I have seen from the statistics that the vast majority of viewers of and subscribers to my channel are from the US and UK. And while about 10% use the English subtitles (so it makes sense to have them), very few use the German ones. Since the translation takes up my time and it doesn't seem to have a big impact, I decided to not do this any more.

      I am sorry about this, but the only way that I can continue doing these videos is to trim down the time it takes to produce them.

  31. That we grant consciousness to other humans is a social agreement of convenience. We don't normally want to have to wonder if the person on the other end of the line is human or not.

    Likewise we grant consciousness to our pets but don't like to think about the consciousness of the animals we eat.

    Over time, as machines become more interactive, ordinary people will begin to relate to them as if they were conscious. Probably the end result will be the construction of a different social agreement for machine consciousness. No doubt lawyers will be involved.

  32. "Let us look at some other popular example, Darwinian evolution. Darwinian evolution is a good scientific theory because it “connects the dots” basically by telling you how certain organisms evolved from each other. I think that in principle it should be possible to quantify this fit to data, but arguably no one has done that. Creationism, on the other hand, simply posits that Earth was created with everything in place. That means Creationism puts in as much information as you get out of it. It therefore does not explain anything. This does not mean it’s wrong. But it means it is unscientific."

    I'm surprised you don't mention the theory of "intelligent design" here. Creationism normally is linked to religion, not science at all, whereas intelligent design is considered pseudoscience.

    All I demand from main stream scientists is not to stigmatize EVERY attempt at finding a non-Darwinian explanation for evolution as unscientific.

    As for the man Charles Darwin, today's evolutionary scientists avoid his name for several reasons:

    1) They try to create the impression that the term "evolution" inherently stands for the theory based on Darwin, this way insinuating that every criticism of Darwinism is tantamount to questioning evolution in general.

    2) They have progressed from Darwin's original ideas in several aspects. (He didn't even know about genetic mutation and recombination.)

    3) The name of Darwin is linked to social Darwinism which they don't want to be associated with.

    Darwin has been celebrated for centuries as the one who took on the church. But he is no longer needed. I believe it's only a matter of time before he will be attacked by "antiracists".

    Replies
    1. Plankalkül: We scientists are not "stigmatizing EVERY attempt"; we dismiss unscientific explanations. The theory of "Intelligent Design" is not scientific: it attempts to explain one mystery by laying it at the feet of a greater mystery, for which there is no evidence other than that you don't understand the FIRST mystery.

      And apparently you have not actually READ Darwin. Nor do we avoid his name; I consider Darwin the best scientist we have ever had, including Newton and Einstein. I certainly consider the science that he founded the one with the most profound applications to preserving human life and reducing human misery, more so than any other single scientist.

      Nor do I associate his name with "social Darwinism", that was a hijacking of his name for racist, misogynist, and bigoted purposes.

      It certainly wasn't Darwin we saw busting his ass to invent nuclear weapons to kill entire cities at once, men, women and children.

      You don't know what you are talking about. The level of intellectual exploration that Darwin exhibits, especially without knowing about genes, is astounding.

      And hundreds of scientists have quantified and explored spontaneous mutations in plants, animals and bacteria, including adding stressors to guide the evolution of organisms to accomplish specific things. We aren't as ignorant as you seem to think.

  33. Keep going, keep going. Criticism is the essence of Science.

    Einstein said: Cracking a personal opinion is harder than cracking an atom, and the hardest part in this cracking game is cracking your own opinions.

    Uli Dinklage


COMMENTS ON THIS BLOG ARE PERMANENTLY CLOSED. You can join the discussion on Patreon.
