Friday, June 28, 2019

Quantum Supremacy: What is it and what does it mean?

Rumor has it that later this year we will see Google’s first demonstration of “quantum supremacy”. This is when a quantum computer outperforms a conventional computer. It’s about time we talk about what this means.


Before we get to quantum supremacy, I have to tell you what a quantum computer is. All conventional computers work with quantum mechanics because their components rely on quantum behavior, like electron bands. But the operations that a conventional computer performs are not quantum.

Conventional computers store and handle information in the form of bits that can take on two values, say 0 and 1, or up and down. A quantum computer, on the other hand, stores information in the form of quantum bits, or q-bits, that can take on any combination of 0 and 1. Operations on a quantum computer can then entangle the q-bits, which allows a quantum computer to solve certain problems much faster than a conventional computer.
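
To make this concrete, here is a minimal sketch in Python (an illustration of the bookkeeping, not of how real hardware works): a q-bit is described by two complex amplitudes whose squared magnitudes give the probabilities of measuring 0 or 1.

```python
import numpy as np

# A bit is either 0 or 1. A q-bit is a pair of complex amplitudes
# (a, b) with |a|^2 + |b|^2 = 1; a measurement yields 0 with
# probability |a|^2 and 1 with probability |b|^2.
qbit = np.array([1.0, 1.0j]) / np.sqrt(2)   # equal superposition of 0 and 1

probs = np.abs(qbit) ** 2
print(probs)                                # [0.5 0.5]
print(np.random.choice([0, 1], p=probs))    # one simulated measurement
```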

Calculating the properties of molecules or materials, for example, is one of those problems that quantum computers can help with. In principle, properties like conductivity or rigidity, or even color, can be calculated from the atomic build-up of a material. We know the equations. But we cannot solve these equations with conventional computers. It would just take too long.

To give you an idea of how much more a quantum computer can do, think about this: One can simulate a quantum computer on a conventional computer just by numerically solving the equations of quantum mechanics. If you do that, then the computational burden on the conventional computer increases exponentially with the number of q-bits that you try to simulate. You can do 2 or 4 q-bits on a personal computer. But already with 50 q-bits you need a cluster of supercomputers. Anything beyond 50 or so q-bits cannot presently be calculated, at least not in any reasonable amount of time.
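
You can see where this exponential burden comes from with a back-of-the-envelope estimate: a classical simulation has to store one complex amplitude for each of the 2^n possible states of n q-bits. A minimal sketch, assuming 16 bytes per amplitude:

```python
# Memory needed to store the full state vector of n q-bits,
# assuming 16 bytes per complex amplitude (double precision).
for n in [4, 30, 50, 60]:
    gigabytes = 16 * 2 ** n / 1e9
    print(f"{n} q-bits: {gigabytes:.3g} GB")
# 50 q-bits already need ~18 million GB, far beyond any single machine.
```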

So what is quantum supremacy? Quantum supremacy is the event in which a quantum computer outperforms the best conventional computers on a specific task. It needs to be a specific task because quantum computers are really special-purpose machines whose powers help with particular calculations.

However, to come back to the earlier example: if you want to know what a molecule does, you need millions of q-bits, and we are far away from that. So how, then, do you test quantum supremacy? You let a quantum computer do what it does best, that is, being a quantum computer.

This is an idea proposed by Scott Aaronson. If you set up a quantum computer in a suitable way, it will produce probability distributions of measurable variables. You can try and simulate those measurement outcomes on a conventional computer but this would take a very long time. So by letting a conventional computer compete with a quantum computer on this task, you can demonstrate that the quantum computer does something a classical computer just is not able to do.
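
As a toy version of this task (a sketch only; the real test uses random circuits on actual hardware), one can prepare a random quantum state and sample measurement outcomes from it with the Born rule. The classical cost of doing this blows up as 2^n:

```python
import numpy as np

n = 10                    # at n ~ 50 this becomes infeasible classically
dim = 2 ** n
rng = np.random.default_rng(0)

# A random state vector: complex Gaussian entries, normalized.
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)

# Born rule: outcome k occurs with probability |psi_k|^2.
probs = np.abs(psi) ** 2
samples = rng.choice(dim, size=5, p=probs)
print([format(k, f"0{n}b") for k in samples])   # five sampled bit strings
```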

Exactly at which point someone will declare quantum supremacy is a little ambiguous, because you can always argue that maybe one could have used better conventional computers or a better algorithm. But for practical purposes this really doesn’t matter all that much. The point is that it will show that quantum computers really do things that are difficult to calculate with a conventional computer.

But what does that mean? Quantum supremacy sounds very impressive until you realize that most molecules host quantum processes that also exceed the computational capacities of present-day supercomputers. That is, after all, the reason we want quantum computers. And the generation of random variables that can be used to check quantum supremacy is not good for actually calculating anything useful. So this makes it sound as if the existing quantum computers are really just new toys for scientists.

What would it take to calculate anything useful with a quantum computer? Estimates about this vary between half a million and a billion q-bits, depending on just exactly what you think is “useful” and how optimistic you are that algorithms for quantum computers will improve. So let us say, realistically it would take a few million q-bits.

When will we get to see a quantum computer with a few million q-bits? No one knows. The problem is that the presently dominant approaches, superconducting q-bits and ion traps, are unlikely to scale. In neither case does anyone have any idea how to get beyond a few hundred. This is both an engineering problem and a cost problem.

And this is why, in recent years, there has been a lot of talk in the community about NISQ computers, that is, “noisy intermediate-scale quantum” computers. This is really a term invented to make investors believe that quantum computing will have practical applications in the next decades or so. The trouble with NISQs is that while it is plausible that they will soon be practically feasible, no one knows how to calculate anything useful with them.

As you have probably noticed, I am not very optimistic that quantum computers will have practical applications any time soon. In fact, I am presently quite worried that quantum computing will go the same way as nuclear fusion, that it will remain forever promising but never quite work.

Nevertheless, quantum supremacy is without doubt going to be an exciting scientific milestone.

Update June 29: Video now with German subtitles. To see those, click CC in the YouTube toolbar and choose the language under the settings/gear icon.

62 comments:

  1. Wow!

    (And your comments are remarkable as well.)

  2. In my opinion, the route to a reliable and rugged quantum computer will travel down the road paved by my favorite quasi-particle, the polariton. Polaritons originate from the coupling of light with matter at room temperature and beyond, and they demonstrate quantum phenomena at the many-particle mesoscopic level, such as Bose-Einstein condensation and superfluidity. What has not been successfully mastered at this point is a genuine quantum manifestation of their dynamics at the single-particle level.

    This mastery of the detailed quantum nature of the polariton is something useful that a particle physicist can get their mathematical teeth into. Dealing with the polariton will expose the quantum computer developer to a real-world, gut-level application of quantum mechanics. The polariton is not just a one-trick pony; it has a broad range of other exciting uses coming down the pipe.

  3. The Tal-Raz theorem

    https://www.intriq.org/uploads/Documents/Presentations/2018/Fall_2018/Avishay%20Tal.pdf

    illustrates how bounded-error quantum polynomial-time algorithms require fewer oracle queries than algorithms in PH. This means there does exist a class of problems that quantum computers can solve that are not as accessible to classical computers.

  4. As you indicate, logical design and demonstration of machine computing existed decades or a century before the arrival of modern computer power. Modern computers were made possible by high purity doped silicon wafers and micro-lithography. Quantum computing may wait several decades for a suitable platform. In the meantime I wouldn’t want my pension fund to invest.
    Yes, one might hope quantum computing does not remain a technology of the indefinite future, like nuclear fusion. Although it would be a good thing if fusion did remain that way: should easily constructed and cheap nuclear fusion become available, we would have the waste heat pollution of a million reactors. Better to use renewables, which add zero additional waste heat to the environment (and to learn a little frugality).

    Replies
    1. Ishmael, interesting point about fusion reactor waste heat. Would it make any difference if those million fusion reactors were in space and the energy they produced was somehow moved from there to the earth?

  5. I agree. I don't think large-scale quantum computers are going to succeed, the noisy environment being the main obstacle. But then again, I am no expert, so I may certainly be proven wrong.

    PS: A tiny possible clarification in your text: "Estimates about this vary between half a million and a billion q-bits". Unit missing :-).

    Replies
    1. i) I consider it plausible. So far, information technology has evolved according to Moore's law. As you said, quantum computers scale exponentially compared to classical computers. If you plug the exponential into Moore's law you get a double exponential law, a Moore's law for quantum computers if you will, which is Neven's law. (But we don't have to argue; in a few years we will know, because a double exponential law will manifest itself in a drastic way, like the growth of a tumor or the rapid dying away of people from a certain age on.) A toy sketch of this scaling follows below.

      ii) I wasn't clear here. I just doubt that all the available quantum algorithms require at least millions if not billions of qubits for their implementation and that it takes so many qubits to do something useful.
      https://physics.stackexchange.com/questions/8134/how-many-qubits-are-needed-for-useful-computation
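
      Here is the toy sketch mentioned under i): single versus double exponential growth. The step length and doubling period are arbitrary assumptions, just to show the shape of the claimed scaling.

      ```python
      # Single exponential (Moore's law) versus double exponential
      # (Neven's law, as claimed). The step length is an arbitrary assumption.
      for step in range(6):
          print(f"step {step}: Moore 2^{step} = {2**step:>3}, "
                f"Neven 2^(2^{step}) = {2**2**step:>12}")
      ```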

      Best,
      Markus

    2. Markus,

      1) Moore's law requires a constant drop in costs, usually going along with making things smaller. We are already at the limit right now, and it doesn't look like costs will be decreasing.

      2) I am talking about physical q-bits. The actual things that you need to have, not logical q-bits. If you doubt what I say, then please provide a reference. The reference that you gave agrees with what I say, not with what you say.

    3. @Sabine
      1) Yes, it is also my take that Moore's law is a result of making things smaller. If that is so, I doubt that this new law holds.
      2) Thanks for that hint, I wasn't aware of this distinction.

  6. I have the impression that money is going into this field. Very little reaches the investing public in the way of detailed descriptions of any hardware, but no one seems to notice this omission.

    I just did another search. Wikipedia had some useful info. Apparently multiple resets are employed to draw down a distribution, so the number of bits/qbits must be high.

    Nothing about how a problem would be loaded. The situation is somewhat reminiscent of the 1950s. We were hearing quite a bit about analogue computers; then one day everyone seemed to have forgotten that analogue computers had ever existed. (If in fact they had.)

    What the public gleaned from the informational releases on digital computers was that it was a form of magic. No one, of course, would use that word, but that's what it amounts to.

    I just spoke with someone who wanted to complain about computers. My friend uses them every day, but has zero comprehension. It is magic, and she resents them.

    That's it.

    I know many others like her. It is difficult to explain anything because nothing is based on any concrete reality: what digital data consists of, what a file is, how data is arranged on a disk, tape, etc., and above all why. What a CPU actually does! Knowing some of these things would dispel the mystery and make it easier to get people to accept an understanding of the technology.

    At one point the hope was that the younger generation would grow up understanding computers. That was 50 years ago. I don't see much progress. The kids are too embarrassed to admit ignorance.

    Perhaps we should hope quantum computing takes a while.

    Replies
    1. Walter Esler: digital data IS based on a concrete reality, as much as the number 538 is based on a concrete reality. You see a particular shape in some medium contrasting with a background (phosphors on a screen, graphite on paper, chalk or marker on a board), and your brain recognizes the pattern of contrasts as the numerals '538'. In computers contrasting mediums are also employed: the background is a wire or magnetic medium, the "marker" is a voltage or the polarity of a magnet.

      We manipulate these electrical marks on the metallic background, like chess pieces on a board, and ultimately interpret the final patterns as 'answers' or 'results'.

      Of course my generation's understanding of the hardware basics of digital computation has become largely irrelevant; nobody needs to know them any more than we need to understand the workings of a non-digital Swiss watch. If it stops working, we take it to a specialist, or discard it and buy another.

      I think the younger generation has grown up understanding 'computers' in the only sense that really matters, they know how to use them and what they can do as tools. Which is the same way we all understand modern cars, or microwaves, or televisions, or the Internet itself.

      No point in demanding everybody be electrical engineers that understand these things at the transistor level (or molecular level), just like we don't demand everybody be a medical doctor, pharmacist, metallurgist, farmer, electrician and car mechanic.

      The next generation doesn't need to know how it works to know how to work with it.

    2. Dr. Castaldo: I used to do network/system admin. I think some understanding is helpful for users. It makes it less likely they will get in trouble with the computer and it makes it easier to work with them when something does go wrong.

    3. Some at least elementary understanding of the various technologies we use is also arguably an essential part of any education toward full humanity. Obviously specialization is necessary, but if we surrender to it entirely, that way lies a kind of barbarism. I doubt it's healthy for so many people to be so completely out of touch with so many of the underlying realities around them. An essential part of being human is the drive toward universality. Undergrad education, besides laying the foundations for one or another specialization, should seek, if only in an elementary way, the unity between all the disciplines, the humanities and the sciences, and as part of that goal should acquaint students in some basic way at least with how things as pervasive as computers, for example, concretely work.

      However, I'm not proposing such an agenda should be imposed on anyone. Educational freedom for teachers and students seems to me more important than any particular curriculum.

    4. Walter,
      I agree with Dr Castaldo. It's only the specialists that need to understand fundamentals. The vast majority of users have no interest in (and would gain nothing except intellectual benefit from) an understanding of how, e.g., a digital computer or GPS system works. Dr Castaldo uses the analogy of modern cars; another example would be a microwave cooker. Open door, shove in food, close door, press start... Job done. Maybe detailed instructions are sometimes needed - what sequence of buttons to press to defrost caviar or heat up yesterday's strudel - but this is application/use, not basic science or engineering.
      I can't think of any example of mass-use technology where the 'user instructions' (if they exist at all) are much more than 'how to use it'.
      There may be certain cases where a more detailed knowledge is useful for the average user. But in the vast majority of cases, even at the network/system level, we're almost always dealing with application and connectivity issues, not the fundamental principles of the electronics and/or associated hardware. You, as a system engineer dealing with a router problem, may have an interest in data packets and IP addressing; I just want to plug the damn thing in and get on to the internet!

    5. Walter Esler: The reason we have specialists, and you are a specialist in network/sysadmin, is that we humans have limited mental capacities.

      For example, I am a specialist in certain kinds of code, but I am aware that the only reason I am is that I am immersed in my field most of my work day, I review academic papers, I read them, I'm a full time research scientist and I like my job. If you want to talk about that, I'm the guy. But if I stop that constant learning and reinforcement that keeps all the knowledge, algorithms and ideas at my mind's metaphorical fingertips, I'd likely be useless in five years or less.

      I work on at least five supercomputers, and I am a poor to mediocre sysadmin. I don't remember the commands and have to look them up, I don't troubleshoot network problems worth a damn, and I cannot retain a rudimentary understanding of what is going on because it is often weeks or months between instances when I need to remember it.

      Knowledge of the "basics" is seldom useful. Thirty years ago I served on the International Standards Committee developing communications protocols; I still recall more than the basics. But that is useless to me if a file copy between nodes keeps intermittently aborting. You need to go figure that one out, Walter, replace a card or a cable or something. Upgrade a driver. Light a candle and cast a spell.

      Great scientific pursuits, including medicine, flight, astronomy, physics, computer science, chemistry, genetics -- They don't get simpler, they get more complex in order to become even greater. They fracture into sub-disciplines when the knowledge required to be a specialist exceeds the human mental capacity.

      So now we have in Computer Science specialists in Security, high performance algorithms, Mathematical Software, Communications, Languages, Animation, AI, all sorts of specialists that don't know a lot outside their sub-discipline. Because there is too much to know and keep up with to maintain their own "specialist" competency.

    6. Dr. Castaldo, RGT:

      You haven't been answering the service calls! "I keep getting a warning on my computer." What does it say? "You're supposed to know that."

      "My E-mails disappeared!"

      "All my edits disappeared!"

      There are some very basic ideas which would be helpful, if more widely disseminated: what a file is, what directories are, and why we have these things. (Because it's hard to find a given document if our mass storage is not indexed.) We could go on: what a word processor does. (It's not just a typewriter.) The explanations are quite simple. The reasons are practical. It would save a lot of misery.

      People are using computers to create things. That's where we are running into trouble. Many individuals adapted easily, but many are being left behind. I suspect it's impacting our economy.

      Computers are complex-state machines. Dealing with them is not simply a question of pushing buttons. Our technology has been advancing, but we have been neglecting our people.

    7. @Walter Esler: You haven't been answering the service calls!

      Right, because that is not my job. I invent algorithms and new ways of solving problems. I notice you haven't been doing your fair share of inventing new algorithms, Walter. What's the problem?

      Walter: There are some very basic ideas which would be helpful,

      Helpful to whom? To you, not getting as many calls? It would not be helpful to me; the easiest and most efficient thing for me is to email the sysadmin and get back to my whiteboard. Or go to work on my laptop or another machine (I've got four in my office). I can let him solve the problem; I don't have to learn anything, I don't have to research anything, he magically figures it out and tells me it's fixed and whether I caused it somehow.

      And I'm not demeaning his job, we have a complementary partnership: I'm good at one thing that he would be terrible at, he's good at another thing I am terrible at, between us we advance science. Neither of us are good at administration, or payroll, or fund raising, and we have complementary partners for those tasks.

      That is what maximizes my productivity. My sysadmin relies on others to maximize his productivity. He doesn't compute his own paycheck or negotiate with benefits providers or other service providers; he doesn't pay the electric bills or deal with repairs or research legal issues.

      So it isn't just about maximizing my productivity, specialization is about maximizing our collective productivity, and it is particularly self-centered and short-sighted to think the world would be a better place if everybody learned a little something about your job and reduced the hours you have to spend solving common problems.

      You aren't looking at the big picture. There are dozens of specialties in a large university, maybe more (it isn't my job to know). They exist for a reason, usually to know off the top of their head how to solve relatively simple problems that are not relatively simple to those having the problem. Complaining about that system actually working is counter-productive. Trying to teach your customers how to do your job so you don't have to answer as many calls is counter-productive -- they will be less productive doing your job for an hour, as amateurs bumbling about their system, than they would be doing their job for an hour, as experts.

      Embrace what you are good at, and find an environment where that is needed and the management is good about not burdening you with a load of duties requiring skills outside of what you are good at.

    8. Dr. Castaldo: Actually I have been coding the last few weeks. In about a year I will learn if my efforts will generate cash flow.

    9. We do need to understand how computers work because some idiots are claiming that, and warning the world that, consciousness can “emerge” from such things.

      Computers work on 2 levels:
      1. Just like everything else (cars, living things, atomic bomb explosions), computers process fundamental-level information like mass, charge and position, and their associated numbers. We have harnessed our knowledge of the way nature works to produce cars, atomic bombs and computers.
      2. The function of cars is to get you from point A to point B, but the function of computers is to process symbolic representations of information. E.g. computers can process a symbolic representation of an atomic explosion without exploding; and in the same way, computers process symbolic representations of conscious information (words, sentences, equations, patterns) without being conscious.

      With computers, it is necessary to distinguish the difference between 1) information and 2) symbolic representations of information.

  7. "You can try and simulate those measurement outcomes on a conventional computer but this would take a very long time."

    It's nothing astonishing. If you simulate (emulate) another computer (say, a ZX Spectrum) on a personal computer, you will need much more effort for a program than on the ZX Spectrum itself.

    Now, the PC works at a much higher frequency, so the PC is faster. But that cannot be the point. If the ZX Spectrum ran at the same frequency as the PC, the Spectrum would do its work faster than the PC, because an emulation is always more complex than a native program.

    So the given argument is not compelling, in my view.

    Replies
    1. weristdas,
      You imply that the modern device runs so much faster than the ZX that an efficiently programmed emulation is likely to be faster. But then you argue that if the emulation is deliberately made inefficient it will run slower. Which is rather like saying an Austin 7 (1930) is as fast as a Ferrari Formula 1 car, provided the Ferrari driver keeps their foot on the brake.
      Even so, if your basic premise (emulations run slower) is taken at face value, this implies that attempting to emulate a highly complex set of operations is likely, as Dr Hossenfelder asserts, to "take a very long time", and it's not clear why you disagree with her argument.

  8. I'd like to push back a bit against your notorious pessimism:

    i.) If Neven's law holds, quantum computers (QCs) do scale very quickly and a few million or billion qubits are just around the corner.
    https://www.quantamagazine.org/does-nevens-law-describe-quantum-computings-rise-20190618/

    ii.) We know how useful or useless QCs are because we have an exhaustive list of (quantum) algorithms
    http://quantumalgorithmzoo.org/

    iii.) Fusion is the future whether you like it or not.
    https://techcrunch.com/2019/06/27/a-boston-startup-developing-a-nuclear-fusion-reactor-just-got-a-roughly-50-million-boost/?guccounter=1&guce_referrer_us=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_cs=80u-6UXe72HZHl2vReypgA
    Also, just recently there has been good news about better magnets.

    Replies
    1. Markus,

      i) We have no reason whatsoever to think that this law "holds". I do not have the faintest idea why anyone would think so.

      ii) Yes, and how does that contradict anything I said?

      iii) Sure.

    2. Markus, (iii) That article is a lot of hype about something that doesn't exist; the only "boost" was somebody gave them $64M to try to scale up. And it isn't "also good news about better magnets"; it is the same project the article talks about.

      Talking investors into something is certainly a feat, but if investors could reliably tell the difference between what will work and what will flop, we wouldn't see venture capitalists failing in 75% of their ventures!

      (See Inc. Magazine article quoting a study by Harvard Business School senior lecturer Shikhar Ghosh; https://www.inc.com/john-mcdermott/report-3-out-of-4-venture-backed-start-ups-fail.html)

      Big time investors are not experts, they are gamblers, quietly killing 75% of their bets and loudly advertising their genius for the 25% that pay off. Rightly so, the payoffs can be wildly asymmetric, sometimes 10x their investment. But do not assume because they invest a large amount their decision is based on anything real.

      Fusion is The Future and Always Will Be! (In the future.)

    3. I am not optimistic or pessimistic about this. The arrival of the quantum computer will make it possible to solve a huge array of problems. The quantum computer will become a tool for looking at quantum gravitation theories and models. In general we can model quantum systems and solve how the minimal configuration of molecules are “quantum computed.” The arrival of the quantum computer will also mean that if you think we have a complicated world now, you have seen nothing yet. The great thing about computers is that they allow us to screw things up much faster than we could before. The quantum computer will turn the RSA encryption world on its head and the problems with computer security will become far more complicated.

      If there is one thing we humans are good at doing it is making our condition ever more complex.

      I suspect that the average computer, one on a desk, notebook or even cell phone, will have an array of different processors. I think the von Neumann processor will be at the core. Even if there is a quantum processor, then, to follow the dictum of Niels Bohr, the output will depend on a classical system. So this computer will have an array of other types of processors: say, quantum annealing, quantum CMOS such as this, maybe a single atomic quantum processor, along with neural networks, Boltzmann machines and so forth. This may come in just the next 10 years. Then heap on top of this ever more advanced AI algorithms.

      Comparing this to fusion …, we may have fusion energy at some point in the future. We can't predict when or how. I am not sure how one can compare the scaling of fusion technology and that of quantum computing.

    4. I wonder if someone will come along and somehow prove that a net positive nuclear fusion reactor is impossible in principle, not just impossible for current technology.

    5. It is obviously possible in principle, since all stars do it.

    6. @Castaldo
      Very interesting, thanks. I didn't realize that the magnet project coincides with this project.
      I recently read about a university ranking where MIT was number one. So I thought it is a good sign that it was people from MIT who came up with a new approach to nuclear fusion.
      Yes, investors sometimes do strange things, like invest in Theranos, Brilliant Light Power, or - who knows - maybe even cold fusion :-)

    7. The subjective value of an investment is in the mind of the investor. The objective value is the results that the investment produces. It is this tension between what is expected and what is achieved that makes for a market.

  9. Not to mention that the time you have to operate your quantum computer until decoherence scales like exp(-N), where N = number of q-bits. So even if you could build your million q-bit computer, you could only solve problems that can be computed in a nanosecond or so of computation time.

    Replies
    1. Quantum fault tolerance is supposed to take care of that. If you have a few tens of millions of qubits that decohere in millisecond times, and if your gate times are 100 nanoseconds, then you should be able to do some very useful computations that maintain quantum coherence for hours or days, or even years. (Assuming the fault tolerance techniques we have work, and they've been proved to work under quite reasonable assumptions.)
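
      To put rough numbers on this, here is a back-of-the-envelope sketch in Python using the figures above (the millisecond and 100-nanosecond values are the assumptions quoted, not the specs of any particular machine):

      ```python
      # How many gate operations fit into one raw coherence window,
      # using the figures quoted above (assumptions, not measured specs).
      decoherence_time = 1e-3        # seconds: millisecond-scale coherence
      gate_time = 100e-9             # seconds: 100 nanosecond gates

      print(decoherence_time / gate_time)   # 1e4 raw gates per q-bit
      # Fault tolerance spends many physical q-bits per logical q-bit on
      # continuous error correction; that is what stretches coherent
      # computation from milliseconds to hours or longer.
      ```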

    2. Peter Shor:

      Sounds as if you think it can really work, and you are an actual expert in the field. What is your gut feeling for the timescale by which a system such as you describe will really be built?

      Thanks.

      Dave

    3. I think we'll get a few hundred to a few thousand physical qubits in the next 10 years. Millions of qubits might (or might not) run into really difficult engineering problems. We certainly don't know how to solve them right now, so if you're a pessimist you can say "never". But if you're an optimist you can say "15 to 20 years".

  10. What is interesting is that this is called a cryogenic CMOS qubit controller. A CMOS has p junctions in n-wells or, vice versa, n junctions in p-wells. A MOSFET uses a voltage at the gate to adjust a potential between the source and drain. If this is quantum mechanical, then I am presuming they are thinking of this as a quantum barrier. I am also presuming that, since this is cryogenic and larger than an atom, it is something similar to a Josephson junction. The p and n regions then behave as Josephson junctions, and this employs that as some field effect.

    I might be wrong but I sense that the device employs quantum overcomplete states or condensates, which make them somewhat resilient to decoherence. CMOS and similar devices are not down to the size of an atom and as such I think this then employs condensate states that are classical-like and relatively stable. This is then a sort of “quantization on the large,” which helps to get around some of the decoherence problems.

  11. Sabine, could QC be a suitable technology to reach some day a kind of computed simulation, in reference to the Nick Bostrom paper "ARE YOU LIVING IN A COMPUTER SIMULATION?": https://www.simulation-argument.com/simulation.pdf ?

    I.e., could a "big" enough QC system be able (powerful enough) to compute a simulated world with the properties Nick defines in his paper (and taking us so to the "posthuman" stage)?

    In other words, would a QC be more suitable than a classic computer to develop an accurate simulation of our own world? And if you think the answer is yes, how many q-bits (physical or logical ones) would then be necessary to reach the simulation requirements?

  12. The primary hang-up with quantum computing is that quantum states are fundamentally very sensitive to their surroundings. Today, the approach in which quantum computing is being staged is associated with the microscopic world, where extremely small length scales are used to generate quantum phenomena.

    But quantum effects tend to disappear for macroscopic objects. These effects become smeared out into their classical averages, and thus quantum computing also becomes impossible. There is an exception to this rule: Bose-Einstein condensates (BECs), where quantum effects are visible on a macroscopic level.


    In a quasiparticle-based BEC (QP-BEC), the individual members of the ensemble will suffer all the vagaries of the quantum world internally, but the BEC itself lives by more advantageous quantum rules.
    The BEC wavefunction displays all the desirable quantum properties of microscopic quantum particles, such as superposition, entanglement, and tunneling, but on a macroscopic scale, where killer properties such as decoherence are controllable through the force of large ensemble member numbers.

    A BEC-friendly quantum computer backplane of arbitrary size, based on the formation of a quasiparticle BEC population, can operate at or above room temperature, utilizing a structured surface upon which these BECs can form and where energy pumping can sustain the non-equilibrium QP-BEC qubit population indefinitely.


  13. Analog quantum simulators based on QC technologies might very well prove useful long before we get to 10^6 qubits.

    Replies
    1. Yes, but "useful" for what? Useful so that you can make money with it? Not unless you make money by renting out your devices to scientists.

    2. A system of some hundreds of "good" qubits would be a very useful tool for studying highly correlated condensed matter systems, for example. They could help in developing high Tc superconductors. I'm not saying they will, but it is a realistic, somewhat near term application. Is that enough? We'll see, but the prospects are better than fusion IMO.

    3. Please provide a reference for that claim.

    4. Well, look for example at this review: https://doi.org/10.1038/nphys2259 or this article: https://doi.org/10.1038/nature23022

      As for the number of qubits / atoms / spins, the number I picked is arbitrary, but the point is that when we get beyond the regime of what's classically tractable in terms of simulating say the Fermi-Hubbard model, we can learn a lot about these systems.

    5. Fulis,

      I am familiar with these papers. As I said, these are useful/interesting for scientists (or, more generally, for people who are academically curious). Also note that this is a topic I have written about many times and that I do, as a matter of fact, work on myself. What I am saying is that these simulations have no practical use. In case that still wasn't clear enough: these are not the reason business investors pour hundreds of millions into quantum computing.

  14. In electronic computing, it was recognized early on that there was an absolute requirement to implement an interface layer between the hardware level and the software level. This interface layer was called firmware. Over time, as the speed and capacity of the CPU became ever more powerful, additional abstraction levels were implemented to minimize the cost of software development and also to provide a plug-and-play method to enable the seamless insertion of any number of differing hardware and firmware designs that could run the same software programming suite without changing the high-level logic of any given application.

    It is a mistake to assume that a given hardware and/or firmware architecture will become a standard approach that will dominate the upcoming quantum computing development cycle.

    In detailed terms, the mechanism for removing decoherence errors through error correction is rightly positioned in the hardware and/or firmware abstraction levels. What I have not seen is an open approach to producing a multi-leveled quantum computer software architecture that provides the necessary levels of abstraction that will make the quantum computer of any value.

    If the past is prologue, the amount of raw quantum computer processing power required to provide sufficient software abstraction in the quantum computer arena could be substantial. It is also premature to talk about quantum supremacy when there is such a wide mismatch between the hardware/firmware/software overhead that is embedded, unseen and unrecognized, in electronic computation as compared to an equivalent level of not-yet-developed quantum computation abstraction.

  15. As Donald Rumsfeld might say: You compute with the computers you have, not with the [theoretical] computers you don't have.

    And don't forget about biocomputers (made of biological materials).

  16. Hi SABINE, !!!

    - and friends.

    I hope everyone's day is going well.
    For myself, personally,
    I just finished reading a
    critique of Stephen Hawking's
    last paper.
    It was done very well.
    It was done by a very adept
    Scientist/physicist/mathematician

    ... he ripped it apart.
    - and upon my review,
    I found the critique
    - correct.

    Ah, I had higher hopes.

    My day started off well,
    (early and with some energy)

    Then it devolved... lol
    Maybe it's the heat;
    I don't know. But if I look around a bit (Q-bit lol)
    I should find something
    on which to level some
    blame . lmfao !

    This ' hoopla' about
    quantum computers ..?
    ... Really ?

    If time permits
    I'll comment
    on the Morrow.

    for now,

    Gute Nacht.

    All Love

    Love Your Work.

  17. In my opinion, the reason why quantum computing and nuclear fusion are forever promising but never quite work is that there is an arrogant supremacism in science. There are incorrect theoretical models that science believes are absolutely correct. We are missing something; we still do not know accurately how nature works, but we believe we do. False modesty and supremacism are not good tools for progressing in science. Nuclear fusion doesn't work probably because we have flawed theoretical models, but scientists believe they are not flawed or incorrect, so they do nothing to correct or improve those theoretical models. The same can be said about quantum computing and other issues and anomalies in science.

    Supremacism in science is the belief that a certain class of theories is dominant, superior to others, and that they should dominate, control, and subjugate others. Those supremacist theories are the so-called mainstream consensus, a giant obstacle to the progress of science and technology, IMHO.

    Replies
    1. Nuclear fusion works, and the way it works is perfectly compatible with the theoretical expectation. The problem is that the technology does not have a positive energy output.

      A much bigger problem than "supremacism" in science is people who go around and make big proclamations about things they evidently do not understand.

    2. It is a bit strange that this issue has turned into one about fusion. The big problem with fusion technology, I think, stems from the source of confinement. A star confines a plasma by gravitation, where in effect the confining force is “from within.” We can also interpret the “force” as really due to the thermal pressure in the plasma that resists the geodesic flow of gravitation. Our approach to fusion is to apply a force from the outside onto a plasma, whether the experiments use inertial confinement or magneto-hydrodynamic confinement in Tokamaks. The plasma exhibits Rayleigh-Taylor instabilities that self-amplify and destroy the plasma. For this reason the plasmas in fusion experiments take a lot of input energy to generate and last only a short period of time.

      My coffee maker went out a few days ago and I have not gotten around to buying another. I am thus reduced to making so-called “cowboy coffee,” where the coffee is placed in low-boiling water. It is interesting to watch the coffee foam flow, for it exhibits rapidly changing fractal patterns and chaos. Hydrodynamics is nonlinear, and the Navier-Stokes equation has a velocity in a product with differentials of velocity. With plasmas you couple this with Maxwell's equations and things get really crazy. With confinement of a plasma, either by inertial means or with a magnetic field, there is a boundary layer that easily exhibits instabilities. With gravitation, particularly if the star is fairly large, the thermal pressure of the plasma resists geodesic flow, and Birkhoff's theorem defines the conditions between the exterior and interior spacetime for a distribution of matter, usually spherical. This tends to prevent run-away turbulent or chaotic conditions. With confinement there are no such conditions.

    3. Lawrence Crowell: That is an interesting argument, I have not heard it before. I suppose, then, in a fusion "thermonuclear" Hydrogen bomb, they have solved these problems. Perhaps the way forward for fusion energy is in scaling that technology down from a hydrogen bomb to some kind of miniature hydrogen bombs; and developing some way to capture the energy produced by the explosions. Direct it to do some massive amount of work very quickly.

    4. @Dr. A.M. Castaldo,
      Well, for hydrogen bombs they didn't really solve these problems. That is because they don't actually need to solve them. There is no need for confinement; on the contrary, you want a short burst of immense energy for maximal damage. So what you want to achieve with H-bombs is to release as much energy as possible in one event, which is relatively "easy". With nuclear fusion reactors, you want to release as much energy as possible, but continuously, over an as-long-as-possible period of time, hence the confinement.

    5. Inertial confinement is a sort of laser-induced hydrogen bomb. This works because the confinement is very transient. The big issue, from what I understand, is that to make this work the DT pellets have to be quite large, and the burst of fusion energy released is rather powerful. So this has scaling issues.

      Tokamaks and magnetic mirror systems are intended to be less transient, but there plasma instabilities are very difficult. One thing that would really make the magnetic mirror approach work like a charm is if a magnetic monopole with a serious field could be generated. That would eliminate the frustration of there being two holes in the magnetic bottle. Of course Tokamaks wrap the bottle around into a torus to connect the holes, but a poloidal field is needed to keep the field constant. Getting that to work right is very tricky.

      There are two types of physicists who seem the most prone to acting insane. The first are those into the foundations of quantum mechanics, which I am somewhat interested in. I try to keep my sanity by only considering quantum foundations on Mondays, Wednesdays and Fridays, while the rest of the time I stick to Mermin's "shut up and calculate" or Bohr's admonitions. The other quirky types are plasma physicists. Plasma physics has no real world exact solutions. The standard exact solution is a plasma in an infinitely long solenoid. Other than that the subject defies analysis. It is a bit like trying to make sense of quantum interpretations --- you can't.

    6. Maybe the problem is that it is just insanely difficult? As Sabine said, we have no indication that the theories you are talking about are wrong. There are just so many things to take into consideration.

      For nuclear fusion, for example, you are talking about the equations of magnetohydrodynamics. These are highly nonlinear. With small changes you could get from one flow regime to another, completely different one. You need to confine a very hot plasma, and in order to do so you need to find a regime in the MHD equations where it is going to work and be stable, instead of introducing instabilities that will mess everything up. Then you have to find a way to reach that regime, which can be very hard because of stringent constraints. And then you have to find a way to scale it up, so that you can actually use nuclear fusion as an energy source. The challenges are huge.

      Parallels could be drawn for quantum computing. The rule of thumb is simple: the more complex the system you are playing with, the harder it is to make applications out of them. And this rule is obviously not linear, hence it can get ugly real fast. This has nothing to do with "superiority".

  18. This comment has been removed by the author.

  19. Thanks for the explanation of quantum computing; I'd had a sort of a handle on it but not, it seems, as fully as I had thought.

    And not on topic, but I really liked your words to students in response to the young man who took his life. Instructors can communicate knowledge in a field or information, but my view is that teachers know they have a greater responsibility as human beings guiding other human beings, and they answer that call.

  20. Oh, what the heck:
    The TRACI Hypothesis

    TRACI = Time Reversals Are Computationally Inaccessible

    I don't think our understanding of the interactions between information-rich classical physics and quantum entanglement is sufficiently precise yet. More specifically, in situations where special relativity allows some frames to interpret "instantaneous" resetting of a highly delocalized wave function as a subtle way to alter the past in some regions of that wave function, we will eventually discover that special relativity is even stricter than we thought: SR ensures that causality itself is relative to frame.

    That is, if each frame is interpreted as a universe spanning "game" of interacting space-like pieces, every frame will eventually produce exactly the same history of the universe, regardless of how those frames look at the quantum entanglement level.

    I call this causal symmetry.
    If valid, causal symmetry would still allow entanglement to affect causal history, e.g. via quantum encryption, but it would not allow impacts that in any way affect classical information in the past of any SR frame. That would allow causality in one frame to dominate over that of others and thus break causal symmetry. With regards to information, anything that even whiffs of time reversal becomes verboten.

    That's important for quantum computing, because if you take away the easily classically emulated multi-state wave processing component of quantum computing algorithms, their only truly critical difference is in how they use quantum entanglement to process real information in ways that transcend classical time constraints.

    With TRACI I am asserting that SR in general and causal symmetry in particular always wins. Thus for any process that begins and ends with classical, historical information, quantum computing will never do better than fully classical (and much easier) multistate wave based computing.

    Oddly, I sincerely hope my TRACI hypothesis is wrong. I would love for QC to be real!

    Two tangential notes: (1) If causal symmetry is correct, Bell's inequality will have... well, structure? A kind of frame-dependent behavior that would require careful data collection indeed to verify. (2) With causal symmetry the concept of a block universe becomes unnecessary.

  21. Hi SABINE, !!!
    and Friends.

    I hope everyone's day is
    going well.

    I'm not at liberty to
    speak much in detail.
    However, perhaps I can
    (indirectly) state a
    'well known' fact .
    - The actual level
    of technological advancement
    extant on this planet..
    at this moment.
    - is not in
    the public domain .

    (nor is it available to)

    Love Your Work.

    (sometimes I have to try) ;(

    Oh,and Sabine,
    I had a brief moment
    (in reading)
    to enjoy some
    Brilliant discourse
    between some very astute
    minds.
    Thanks to You.

    All the best

    All Love,

  22. Hello, Sabine et al!

    1. I'm not a physicist
    2. I just have a question: why do some scientists...

    ( Max Tegmark was one such back in 2014
    https://youtu.be/bJpIclDmi2M @ 11:00)

    ...think the fate of q-computing will either prove or disprove the existence of parallel multiverses?

    If you don't answer, I will call into Dr Kaku's radio show 😊

    KC

  23. If superdeterminism pans out, will it affect quantum computing?

    Replies
    1. That's an interesting question, but I am afraid the answer is model dependent. In my model it won't.


COMMENTS ON THIS BLOG ARE PERMANENTLY CLOSED. You can join the discussion on Patreon.
