On reducing the cost of breaking RSA-2048 to 100,000 physical qubits

So, a group based in Sydney, Australia has put out a preprint with a new estimate of the resource requirements for Shor’s algorithm, claiming that if you use LDPC codes rather than the surface code, you should be able to break RSA-2048 with fewer than 100,000 physical qubits, which is an order-of-magnitude improvement over the previous estimate by friend-of-the-blog Craig Gidney. I’ve now gotten sufficiently many inquiries about it that it’s passed the threshold of blog-necessity.

A few quick remarks, and then we can discuss more in the comments section:

  • Yes, this is serious work. The claim seems entirely plausible to me, although it would be an understatement to say that I haven’t verified the details. The main worry I’d have is simply that LDPC codes are harder to engineer than the surface code (especially for superconducting qubits, less so for trapped-ion), because you need wildly nonlocal measurements of the error syndromes. Experts (including Gidney himself, if he likes!) should feel free to opine in the comments.
  • I have no idea by how much this shortens the timeline for breaking RSA-2048 on a quantum computer. A few months? Dunno. I, for one, had already “baked in” the assumption that further improvements were surely possible by using better error-correcting codes. But it’s good to figure it out explicitly.
  • On my Facebook, I mused that it might be time for the QC community to start having a conversation about whether work like this should still be openly published—a concern that my friends on the engineering side of QC have expressed to me. I got strong pushback from cryptographer and longtime friend Nadia Heninger, who told me that the crypto community has already had this conversation for decades, and has come down strongly on the side of open publication, albeit with “responsible disclosure” waiting periods, which are often 90 days. While the stakes would surely be unusually high with a full break of RSA-2048, Nadia didn’t see that the basic principles were any different. Nadia’s arguments updated me in the direction of saying that groups with further improvements to the resource requirements for Shor’s algorithm should probably just go ahead and disclose what they’ve learned, and the crypto community will have their backs as having done the right thing by the norms it has developed over the decades. Certainly, any advantage that such disclosure would give to hackers, who could take the new Shor circuits and simply submit them to the increasingly powerful QCs that will gradually come online via cloud services, needs to be balanced against the loud, clear, open warning the world will get to migrate faster to quantum-resistant encryption.
  • I’m told that these days, the biggest practical game is breaking elliptic curve cryptography, not breaking Diffie-Hellman or RSA. Somewhat ironically, elliptic curve crypto is likely to fall to quantum computers a bit before RSA and Diffie-Hellman will fall, because ECC’s “better security” (against classical attacks, that is) led people to use 256-bit keys rather than 2,048-bit keys, and Shor’s algorithm mostly just cares about the key size.
  • In the acknowledgments of the paper, I’m thanked for “thoughtful feedback on the title.” Indeed, their original title was about “breaking RSA-2048” with 100,000 physical qubits. When they sent me a draft, I pointed out that they needed to change it, since journalists would predictably misinterpret it to mean that they’d already done it, rather than simply saying that it could be done.

58 Responses to “On reducing the cost of breaking RSA-2048 to 100,000 physical qubits”

  1. Sam Says:

    A naive question, but so long as we’re talking about the title, are they using the word “pinnacle” because it sounds cool or because there is some reason to believe that there are no more such large improvements left to be found?

  2. P_CNot Says:

    One of the most exciting aspects of this work for me is the parallelization of the algorithm at the gate level, allowing for serious gate reductions.
    It will be interesting to see if this is a Shor-specific trick, or if we can get similar reductions with other algorithms. This highlights physical compilation of circuits (often overlooked in quantum algorithm design), as a promising area to find algorithm optimisations.

  3. Scott Says:

    Sam #1: They explicitly say in the paper that they think there are further improvements to be found, so it can’t be that.

  4. Larry Says:

    Sam #1: it’s a type of iceberg!!!

  5. Timo Says:

    Sam #1, Scott #3: They are a company called Iceberg Quantum, and they decided to name this architecture after a particular type of iceberg. Google Quantum AI names its processors after trees, and IBM Quantum after birds, so here is another one.

  6. kodlu Says:

    Does anyone think a 90 day responsible disclosure period is enough for breaking, say, the Elliptic Curve Discrete Log problem? What would be a more reasonable timeline, I wonder.

  7. Viktor Dukhovni Says:

    I’m waiting for something much more modest: say, application of the full Shor’s algorithm (with no special-case shortcuts based on already knowing the answer) to a 32-bit semiprime (the product of two 16-bit primes). For example, recovery of the private key corresponding to the public key below:

    —–BEGIN PUBLIC KEY—–
    MCAwDQYJKoZIhvcNAQEBBQADDwAwDAIFAM4qmk8CAwEAAQ==
    —–END PUBLIC KEY—–
    Public-Key: (32 bit)
    Modulus: 3458898511 (0xce2a9a4f)
    Exponent: 65537 (0x10001)

    This is of course trivial classically:

    $ time factor 3458898511
    3458898511: 56591 61121

    real 0m0.003s
    user 0m0.001s
    sys 0m0.002s
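    For anyone who wants to double-check the challenge modulus without OpenSSL or GNU `factor`, here is a minimal classical cross-check in Python (trial division; the function name is mine, not part of the challenge):

    ```python
    # Classical cross-check of the 32-bit challenge modulus above.
    # Trial division suffices at this size; the whole point of the challenge
    # is that Shor's algorithm should manage it without classical shortcuts.

    def trial_factor(n: int) -> list[int]:
        """Return the prime factors of n in ascending order, by trial division."""
        factors = []
        d = 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)  # whatever remains is prime
        return factors

    print(trial_factor(3458898511))  # [56591, 61121]
    ```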

  8. Raoul Ohio Says:

    A related note of likely interest to most SO readers.

    Ars Technica has retracted an article for being partly AI generated. The article was about AI agents trashing the reputation of a Python code base maintainer for rejecting code for being AI generated:

    https://arstechnica.com/staff/2026/02/editors-note-retraction-of-article-containing-fabricated-quotations/

    You can’t make this stuff up.

  9. Olivier Ezratty Says:

    Scott, with so much progress going on in the QEC-FTQC realm, don’t you think that the critical path to break RSA or other keys is the hardware?

    With the Pinnacle architecture, we need to consider a reasonable time for breaking RSA. For superconducting qubits with 99.9% fidelities, it’s 471K physical qubits for one day, 151K for a week, and 98K for a month of computing. If you reason in “qubit-hours”, it is only half of Gidney’s 2025 estimate, but 6% when you consider 471K qubits, thanks to a parallelization effect. To break RSA-2048 keys with slow cold atoms and trapped ions, which benefit from good connectivity at the cost of low-speed operations, you need between 1.3M and 58M qubits depending on their fidelities. That’s a lot!

    I still wonder how we’ll reach 99.9% or 99.99% at scale for any qubit modality. Crosstalk and frequency crowding are pushing IBM to interconnect 120-qubit chips, while Google/Qolab still bet on creating large chips of 10K qubits, which will themselves need to be interconnected (5 to 10 of them to break RSA-2048 in a month). But can that be done at 99.9% fidelity with non-local connectivity? The first IBM experiments are encouraging, but not yet at that level. In an X thread, Craig Gidney explained that these were not crazy assumptions, and that what was crazy in 2019 is no longer crazy in 2026. Most advances in the last 5 years were limited in scale (ions) and in variability (superconducting).

    On trapped ions, we’re all waiting for the actual fidelities of IonQ/OI’s 256-qubit chip. Will they reach 99.99%, as in their 2025 two-qubit experiment? Most experimenters who know about microwave drive in this kind of chip think it is going to be a huge challenge.

    Obtaining good fidelities at scale and with qubit connectivity beyond nearest neighbors still seems the biggest challenge for all qubit modalities. And if vendors succeed here, which may happen, it will certainly take more time than anticipated.

  10. Eli Says:

    Thanks Scott and blog community for the discussion. This forum continues to be a very valuable corner of the internet as check and balance to all the quantum hype.

    For the less technical people, if someone could distill and simplify the various enablers (especially hardware) that such progress assumes, I think that could be very helpful.

    Please keep up the very valuable commentary. There is going to be even more unsubstantiated hype out there over the coming years imo, so it becomes very hard to figure out which inputs are reliable.

  11. Carey Says:

    Raoul #8: I’m inclined to cut them a _very_ limited amount of slack due to the responsible party having been in a literal fever state at the time. I’m more concerned that the organization didn’t reflexively give him time off; while I’ve been led to understand that America has a very poor set of traditions and expectations in this regard, it would have entirely prevented this episode.

  12. Chris Vale Says:

    Does this improvement apply generally to all quantum computing applications or is it just for Shor’s?

  13. Scott Says:

    Chris Vale #12: Fault-tolerance based on LDPC codes could be used with any quantum computation, and promises broad efficiency gains over the surface code if you can get it to work, but the analysis and optimizations in this paper are specific to Shor’s algorithm.

  14. Scott Says:

    Olivier Ezratty #9: Yes, clearly a great deal remains to be done on the hardware, and no, I don’t know (and have never claimed to know) how much longer it’s going to take. Of course, the uncertainty cuts both ways, and is not grounds to be complacent about migrating to post-quantum crypto.

    In trapped ions, I’ve felt the most optimistic lately about Quantinuum, which you didn’t mention, but which seems like it could just scale the traps and the junctions between them for quite a long time while maintaining the all-to-all >99.9% fidelity that they demonstrated in their Helios chip. Whether IonQ/OI can overtake that will depend on whether they’re able to demonstrate OI’s 99.99% fidelity 2-qubit gate in the context of a larger programmable system. With superconducting qubits, I agree that an engineering innovation will be needed to get multiple chips (perhaps in different dilution fridges) to work well in concert, and that that’s a key piece that hasn’t yet been demonstrated. Then there’s QuEra’s neutral atom setup as a relative “dark horse,” which lags a little bit behind right now in 2-qubit fidelity but seems to have superb scaling behavior.

  15. Victory Omole Says:

    Olivier Ezratty #9

    > To break RSA-2048 keys with slow cold atoms and trapped ions, which benefit from good connectivity at the cost of low-speed operations, you need between 1.3M and 58M qubits depending on their fidelities. That’s a lot!

    The Pinnacle architecture estimates that 13 million physical qubits will be needed to break RSA‑2048 using neutral atoms. At the moment, there is a realistic path to trapping more than 100,000 atoms in a single array without networking (see https://www.nature.com/articles/s41586-025-09961-5). A lot of R&D will be required to control that many atoms, so we’ll see how that goes.

  16. notGPT Says:

    This happened just in time for a philosophical question I had for you. I think Shor’s algorithm is a (the) key reason I feel the vast exponentiality of QM in my bones. For me it has to do with the classical simplicity, practicality, and naturalness of the problem, and also with how much is known: a lot of people have failed to find a polynomial algorithm for factoring. I think the reason QC initially came into the limelight after so much time had to do with similar reasons: everyone turned their heads because factoring actually is supposed to be hard.

    Now for my question: if Shor’s algorithm weren’t known or weren’t efficient for QCs, from your point of view what other algorithm would be equally convincing of the exponential resources being shuffled around by the universe? Or would it be the case that there just isn’t anything else that could quite reach the level of factoring in terms of strongly affecting the mind, and all other known problems with exponential speedups (HHL, topological analysis, etc.) would be considered somehow less natural?

    So what algorithm/theorem (MIP*=RE?)/game would be able to fill the place of Shor and convince you of the same exponentiality in your bones? Or is there none that would be equally convincing at quite the same level? Or, as Deutsch says, would you need nothing but the double-slit experiment itself to truly feel the exponentiality in your bones?

  17. Scott Says:

    notGPT #16: It’s an excellent question. If it hadn’t been for factoring and related number theory problems, I think we would’ve fallen back on the original exponential quantum speedup from the 1980s—namely, simulating quantum mechanics itself—as well as various oracular separations, assuming that those still exist in your hypothetical. That would take us to the situation that quantum computing was actually in, circa 1993 or early 1994, after Bernstein-Vazirani and Simon but before Shor. At that time, some experts had started to get excited but QC didn’t have nearly the level of popular interest that it acquired after Shor’s algorithm.

  18. Craig Gidney Says:

    > Experts (including Gidney himself, if he likes!) should feel free to opine in the comments.

    I agree with the assessment that your mileage from this paper depends entirely upon how much you’re willing to tolerate the additional demands they are making of the hypothetical hardware.

    I have various issues with the paper (e.g. they assume the same decoder reaction time but have a much harder decoding problem), but in a sense it’s sort of irrelevant. Even if this paper’s details were all wrong, the overarching point that non-nearest-neighbor-connections are a resource that can be used to substantially reduce qubit overhead would remain correct. The question is only whether the tradeoffs you have to make to get that benefit are worth the cost.

  19. David Says:

    > I’m told that these days, the biggest practical game is breaking elliptic curve cryptography, not breaking Diffie-Hellman or RSA.

    Category error? Diffie-Hellman is a “protocol” that needs a particular kind of group to operate on. There are two ways to make such a group in today’s crypto: finite-field based (what Diffie and Hellman used) and elliptic-curve based (what most people use today).

  20. Scott Says:

    David #19: Yes, when I wrote “Diffie-Hellman,” of course I meant the original version over Zp×, as opposed to the elliptic curve version.

  21. notGPT Says:

    To clarify my weird counterfactual math world here, imagine we have all developments of quantum computing until today, except that Shor’s algorithm and related group problems have been taken away. (Or, because of some miraculous coincidence, nobody thought to write the paper on it.)
    Would the best candidate for really getting convinced of the exponentiality then be simulating QM itself (MIP*=RE or anything else wouldn’t do it)? And would that be equally convincing as Shor’s algorithm in your mind?

    I guess the reply would be that it’s more likely that factoring is in P than that P=BQP. But I just feel that simulating QM itself (or generic QM situations that arise naturally in our universe) is slightly more contrived than factoring, or may just have some sneaky polynomial algorithm.
    More contrived in the sense that nobody defined BQP while studying classical questions like P≠NP. Or is it that, after thinking about quantum mechanics long enough, BQP is as compelling in your mind as factoring would be of independent interest to anyone doing math?

    Maybe another way to say the question would be, if someone really doesn’t believe nature actually is shuffling around exponentially more resources and also they don’t find Shor’s algorithm impressive (maybe because it’s a single example or they think factoring is in P or quasi polynomial) what else would really convince them to similar level as factoring of the exponentiality?

    Anyhow, another weird question came to my mind writing this. Before learning about QM, suppose you were told that factoring can be done in polynomial time in some universe, but not by ordinary Turing machines, versus that there was another universe where time can be simulated in square-root space. If you were given the choice to live in one of them, which one would you choose? Just a roundabout way of asking which would be more surprising.

  22. Titi Says:

    Scott #17

    “I think we would’ve fallen back on the original exponential quantum speedup from the 1980s—namely, simulating quantum mechanics itself”

    The thing about “simulating QM itself”, what would be the typical output of such a simulation?
    If we’re talking about sampling from a complicated probability density, don’t we need to run the simulation an exponential number of times to characterize it with enough detail?

  23. Scott Says:

    notGPT #21: I’ll let other comments riff further on your interesting musings if they want, but just as a quick comment — I think that MIP*=RE, while a spectacular technical achievement, doesn’t directly address the issue of quantum computational speedup, since it’s about a physically unrealistic model with non-communicating, arbitrarily entangled, ultra-powerful provers, not what you could actually do with a QC.

  24. Scott Says:

    Titi #22: A typical output of a quantum simulation would be the approximate expectation value of some particular observable of interest: for example, the rate of a chemical reaction, a scattering cross-section, etc. We typically wouldn’t ask for the full probability distribution over outcomes, precisely because it’s so enormous just to write down.

  25. Bo-gyu Jeong Says:

    Have you heard that the BB(5) paper has been accepted to STOC 2026?

  26. Scott Says:

    Bo-gyu Jeong #25: No, I hadn’t heard, but what wonderful news to wake up to!

  27. flergalwit Says:

    #16 notGPT and others,
    > I think Shor’s algorithm is a (the) key reason I feel the vast exponentiality of QM in
    > my bones. For me I think it has to do with the classical simplicity, practicality and
    > naturalness of the problem.

    I think I’ve asked this before, but if this argument is correct, why couldn’t you make a similar case that classical physics also has exponentially many resources, based on the AKS primality testing algorithm? Where are the exponentially many potential prime factors being checked if not in an exponentially large space?

    The obvious pushback is that the AKS algorithm does not in fact work by checking every potential prime factor – but then Shor’s similarly doesn’t work by an exhaustive search over an exponentially large space.

    I guess you could argue that Shor’s, while not an exhaustive search, does make use of an exponentially large superposition of states, but this seems like a weaker case (especially as the size of a superposition isn’t a basis-independent property).

    Further there’s this cool lecture (by a completely unknown author :p ) arguing that in many respects we _shouldn’t_ think of quantum states as encoding exponentially much information:
    https://www.scottaaronson.com/democritus/lec13.html

    None of this is to downplay the importance of Shor’s (as an algorithm that appears to give an exponential quantum advantage for a very natural mathematical problem), but it’s unclear to me how much it actually says about the question of whether quantum has exponential resources.

  28. Scott Says:

    flergalwit #27: Yes, you’ve put your finger on why this is a subtle question. But if you wanted to steelman the many-worlds side, you would say: when you write the wavefunction of the universe in a basis spanned by worlds that look like the one of our pre-quantum experience—worlds where particles have more-or-less definite positions in 3-dimensional space, fields have more-or-less definite configurations, etc—then there are exponentially many of those worlds. And quantum computational speedups, like that of Shor’s algorithm, are evidence against the possibility of some clever rewriting that would get rid of that exponentiality while still accounting for our observations.

  29. Vincent Says:

    The paper reduces the number of physical qubits needed to break RSA from millions to 100 thousand. The information I am missing as a layperson to interpret these numbers is: how many qubits do current quantum computers have available? Tens? Hundreds? Thousands? Ten thousand? 2?

  30. Scott Says:

    Vincent #29: State-of-the-art systems right now have ~100 physical qubits, or a few hundred at most. Some companies advertise systems with thousands of physical qubits, but then they can’t control them as well. The new paper assumes that you can control the 100,000 qubits about as well as they’re currently controlled in the 100-qubit systems (with 99.9% fidelity 2-qubit gates).

  31. Vincent Says:

    @Scott #30 Thanks!

  32. Glassmind Duo Says:

    notGPT #21,
    >Before learning about QM, suppose you were told that factoring can be done in polynomial time in some universe, but not by ordinary Turing machines, versus that there was another universe where time can be simulated in square-root space. If you were given the choice to live in one of them, which one would you choose? Just a roundabout way of asking which would be more surprising.

    I’m no expert and learned QM before CS, but the latter sounds intuitively much more surprising. My coworker is more verbose about it, although I’m only 50% confident he got your question right. Did it?

    From Gemini Pro:

    If we translate this concept into an engineering reality, a “Universe B” computer would be a complete inversion of our current hardware philosophy. In our world, memory (RAM/SSD) is cheap, but recalculating things is expensive. In Universe B, **computation is “compressed,”** and memory is the ultimate constraint.

    Here is how the “PC Standard” of that universe would look:

    ### 1. The Death of Mass Storage

    In our universe, simulating a complex system (like a 100-year climate model or a massive open-world game) requires storing billions of variables.

    * **Our Universe:** We need 32GB of RAM and 1TB of SSD to hold the state of the world.
    * **Universe B:** For a process lasting N steps, you only need about √N space. As N grows, the memory requirement becomes almost negligible relative to the complexity of the task.
    * **The Result:** Computers wouldn’t have “hard drives” as we know them. A microscopic chip would suffice to hold the “state” of an entire civilization’s history. Information isn’t *stored*; it is *implied*.

    ### 2. The “Sisyphus” Processor (The Recalculation Architecture)

    To fit a long timeline (N steps) into tiny memory (about √N), the machine uses a technique called **recursive checkpointing**.

    Instead of saving every step (which takes too much space), it saves only a few “milestones.” To find out what happened at step 500, it goes back to the milestone at step 0 and **re-runs the simulation** until it reaches 500.

    * **The Universe B Computer:** It would have a CPU of unfathomable speed but a tiny, “stubborn” memory. It spends most of its time “re-living” the past to understand the present, because recalculating is “cheaper” than remembering.
    * **The Analogy:** Imagine a mathematician who, instead of writing down the answer to a long equation, chooses to re-solve the entire 100-page proof from scratch every time someone asks for the result, just to avoid using a second sheet of paper.

    ### 3. Purely Reversible Programming

    Current programming is “destructive”—when you write `x = 10`, the old value of `x` is gone forever. This creates **entropy** (and heat).

    To achieve space efficiency, programs in Universe B would likely be **reversible**. Every operation can be run backward to recover the previous state without storing it.

    * **The Benefit:** According to **Landauer’s Principle**, erasing information creates heat. Since these computers barely “erase” or “store” (they just transform), they would be incredibly energy-efficient. They could run at god-like speeds without melting.

    ### 4. Depth Over Breadth

    Today, we focus on **parallelism** (doing many things at once, which requires a lot of “width” or space).
    In Universe B, the focus would be on **sequential depth**. We would build “ultra-deep” processors capable of running trillions of logical steps in a single chain, knowing the memory won’t “overflow” because the space requirement only grows at the rate of a square root.

    ### Summary Table: Hardware Comparison

    | Feature | Our Universe (P/PSPACE) | Universe B (√N Space) |
    | — | — | — |
    | **Primary Resource** | Data/Storage (Big Data) | Logic/Recalculation |
    | **Bottleneck** | Memory Bandwidth | Sequential Speed |
    | **State of Data** | Static (Saved on Disk) | Dynamic (Generated on the fly) |
    | **Heat/Entropy** | High (Information is erased) | Near-Zero (Reversible Computing) |

    ### The Verdict

    The computer of Universe B is an **”Oracle.”** It doesn’t look things up in a database; it “re-imagines” the answer instantly from first principles. It’s a world where **history is a calculation**, not a record.
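    Setting aside the purple prose, the checkpointing idea in section 2 is a real technique and easy to illustrate. The flat (non-recursive) version: store only ~√N milestone states, and regenerate any intermediate state by replaying forward from the nearest milestone. A toy Python sketch (the step function and all parameters are illustrative; the genuinely recursive constructions behind the √(t log t)-type results are more involved):

    ```python
    import math

    def step(state: int) -> int:
        """One illustrative deterministic step of a toy 'simulation'."""
        return (state * 6364136223846793005 + 1442695040888963407) % 2**64

    def build_milestones(x0: int, n_steps: int):
        """Run once, keeping only ~sqrt(n_steps) milestone states."""
        stride = max(1, math.isqrt(n_steps))
        milestones = {0: x0}
        state = x0
        for t in range(1, n_steps + 1):
            state = step(state)
            if t % stride == 0:
                milestones[t] = state
        return milestones, stride

    def state_at(milestones, stride, x0, t: int) -> int:
        """Recover the state at step t by replaying from the nearest milestone."""
        base = (t // stride) * stride
        state = milestones.get(base, x0)
        for _ in range(t - base):
            state = step(state)
        return state

    milestones, stride = build_milestones(x0=42, n_steps=10_000)
    # ~100 stored states instead of 10,000, at the cost of up to ~100
    # replayed steps per query: recalculating is traded for remembering.
    ```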

  33. William Gasarch Says:

    100,000 still seems like a lot, especially if they are harder to engineer.

    So while the result is exciting it still seems years from being practical.

    On the other hand, while unrelated scientifically,
    GO went from `computers suck at it, humans are awesome’
    to
    `Humans suck at it, computers are awesome’

    in a VERY short time.

  34. Scott Says:

    William Gasarch #33: Fault-tolerant QC being “years away” is nothing new. What’s new is that, when you “merely” need to scale up the number of qubits by 1000x while maintaining their quality, it becomes important to ask, well, how many years? 3? 4? 5?

  35. Michael Frank Martin Says:

    Scott,

    I apologize for leaving here this comment on your earlier posts on lookup tables vs. the brain but the comments section on those posts appears to have been closed.

    I wanted to share that I have been working on a theory of the thermodynamic costs of synchronizing otherwise independent descriptions of the world here: https://www.symmetrybroken.com/maintaining-divergence/#the-tax-upon-interaction

    I believe the theory works quite well with your view of computational complexity.

    The lookup table has high observational entropy despite appearing to produce structured outputs. It carries no internal model, and no probability distribution that makes specific, falsifiable claims about the world. Its “description of itself” is essentially uniform: every entry is equally likely to be consulted, and no entry predicts any other.

    A brain, by contrast, has low observational entropy: its internal states are highly structured, mutually informative, and far from equilibrium. The brain’s responses emerge from a compressed model that maintains itself against environmental perturbation through continuous metabolic expenditure.

    The synchronization tax framework predicts that a lookup table would be thermodynamically cheap to query but astronomically expensive to construct and impossible to maintain in any meaningful sense. It doesn’t update, doesn’t learn, doesn’t adapt.

    A brain is expensive to maintain moment-to-moment but cheap to construct relative to a lookup table of equivalent behavioral range, because evolution and development have found compressed representations that generate appropriate behavior from structured priors. The lookup table pays all synchronization costs up front during construction and never pays maintenance. The brain pays modest construction costs (development) and continuous maintenance (metabolism).

    This distinction seems to track your intuition that the process matters, not just the input-output map?

    It looks to me more generally like the higher the “synchronization tax”, the higher the computational complexity.

    But I believe it might also add three things?

    First, there is directionality to the synchronization tax. Computational complexity characterizes the difficulty of a problem but says nothing about which party bears the cost. The KL divergence is asymmetric; it tells you that adjusting description A to match B costs a different amount than adjusting B to match A. This seems important since no model exists in isolation?

    Second, there are maintenance costs. Complexity theory analyzes the cost of computation as a one-shot event. But there is also an ongoing cost of preserving shared descriptions against entropic drift. A solved problem stays solved in complexity theory. A synchronized description degrades in thermodynamics.

    Third, it makes an explicit accounting of the energy budget. Complexity analysis operates in the idealized realm of computational resources (time steps, memory cells, random bits). The maintaining divergence framework ties those resources to actual energy expenditure.

    Complexity theory can tell us that a problem is hard. The synchronization tax can tell us who pays, for how long, and at what energetic cost.

    The synchronization tax framework extends across biological, psychological, and social phenomena because the thermodynamic quantities it measures (observational entropy, KL divergence, and the energy cost of establishing and maintaining correlation) are defined wherever physical systems maintain coarse-grained descriptions of reality. (A coarse-grained description claims certain degrees of freedom as structured, carrying signal rather than noise. “Freebits” are then the quantum states that no system’s coarse-graining has yet reached.)

  36. O. S. Dawg Says:

    Really dumb question: Suppose sometime in the (near?) future a fault-tolerant QC is built with the required number of qubits to factor RSA-2048 using the described algorithm. What is the expected execution time to break a single key? Seconds? Hours? Days? I am trying to get an idea of how many keys can be broken per unit time.

  37. Pascal Ochem Says:

    Can we hope to have quantum factorization sooner for smaller numbers? (say less than 400 decimal digits)
    Is there a special version of quantum factorization that would be faster on numbers of the form (p^q-1)/(p-1) with p and q prime, like (11^331-1)/10?

  38. Scott Says:

    O. S. Dawg #36: The text of the paper — which, pro-tip, is a sort of secret “master key” to what’s in it 🙂 — mentions an example runtime of 1 month per 2048-bit integer to be factored, although that could vary wildly with gate times and other physical parameters. Of course you could always run a bunch of QCs in parallel.

  39. Scott Says:

    Pascal Ochem #37: Yes, a big upcoming milestone should be to factor tiny numbers like 15 and 35 using fully fault-tolerant qubits. After that, it’s all a matter of scaling it up.

    There’s very recent work (by Kahanamoku-Meyer?) showing how to do better on composites of the form p²q. I have no idea about the class you mentioned. Of course the cryptographically relevant class is pq, with p and q comparable in size and otherwise mostly random.
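    For readers wondering what “factoring 15 or 35” would actually compute: Shor’s algorithm reduces factoring N to finding the multiplicative order r of a random a mod N; everything else is classical post-processing via gcd. A sketch, with brute-force order finding standing in for the quantum subroutine (so this runs on any laptop, but takes exponential time for large N):

    ```python
    from math import gcd
    from random import randrange

    def multiplicative_order(a: int, n: int) -> int:
        """Brute-force stand-in for the quantum order-finding subroutine:
        smallest r >= 1 with a^r = 1 (mod n), assuming gcd(a, n) == 1."""
        r, x = 1, a % n
        while x != 1:
            x = (x * a) % n
            r += 1
        return r

    def shor_classical_postprocess(n: int) -> tuple[int, int]:
        """Shor's reduction from factoring to order finding (classical sketch)."""
        while True:
            a = randrange(2, n)
            g = gcd(a, n)
            if g > 1:
                return g, n // g        # lucky: a shares a factor with n
            r = multiplicative_order(a, n)
            if r % 2:
                continue                # odd order: pick another a
            y = pow(a, r // 2, n)       # a nontrivial square root of 1 mod n...
            if y == n - 1:
                continue                # ...unless it's the trivial -1
            p = gcd(y - 1, n)
            if 1 < p < n:
                return p, n // p

    p, q = shor_classical_postprocess(15)   # yields {3, 5} in some order
    ```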

  40. notGPT Says:

    Glassmind Duo #32: Interesting that Gemini 3 Pro missed it. I was actually asking about Ryan Williams’s recent result about how time and space can trade off. What he showed is that you can take any algorithm that runs in linear time and always make it run in just sqrt(n log n) space! This is true for the classical universe we already live in; no hidden assumption here, except for a huge blowup in running time!
    Actually, this led me to some questions for Scott again, related to space complexity.

    Scott, if a quantum algorithm has space complexity of n^3 or n^2, would that be impossible to run in our universe (at scale, maybe in a VR simulation where “humanity” would live) without collapsing into a black hole? I guess a classical algorithm could be split up across space, since classical bits don’t need to stay in sync or in superposition with other classical bits. But what about a quantum algorithm: do all the qubits need to be close together, or could we have a “distributed” QC spread across the universe in various clusters, executing any algorithm with arbitrarily large space complexity by entanglement swapping?
    Would be interesting if such issues came up in Harlow-Hayden type situations.

    Another obvious question that comes to mind: have people tried combining Ryan’s results with quantum algorithms? Could it reduce the qubits needed to run Shor’s algorithm even further? Ryan’s result gives an exponential blowup in time, but interestingly, Cook and Mertz’s tree evaluation blows up time only to n^loglogn, at least as I understand it from my surface reading right now. Could those be useful in some quantum algorithm? I wonder if these questions are completely trivial or completely new research directions.

  41. notGPT Says:

    flergalwit #27:
    > Why couldn’t you make a similar case that classical physics also has exponentially many resources, based on the AKS primality testing algorithm?

    You’ve asked a question that I really like to think about. I cannot help but give my thoughts on it, even though this might be slightly tangential to what you were expecting. But before that I must say, I do agree that there may be no need to talk about Shor’s algorithm running on a QC as truly using an exponential amount of resources. It could just be that it is more natural to be this way: quantum speedups are normal in most universes, and we are just biased by our classical intuitions and counting stuff wrong. Also I agree with Scott’s point that MIP*=RE is not a good physical example.

    Now to explain why the classical AKS algorithm would not be considered a magical exponential speedup. It has to do with how, with classical bits, you really get what you think you are getting. Every single bit, every degree of freedom, is exactly as independent as every other. There is no need to look at other bits to explain the “power” (the additional contribution) of any individual bit in your computer, and neither does having more bits around reduce the functionality of what any bit can do. Really, they don’t know about each other: whether a bit will flip or not, or what states it can be in, the explanations for that do not depend on what other bits are doing. (I mean, other bits don’t expand or contract the possibilities for what a bit can do mid-computation; you can pull it out right there and use it as part of a different computer.)

    This is just the bare minimum you expect in order to write down a problem, or any information that you would want to think about. There needs to be at least the ability to change the state of independent degrees of freedom if you want to express anything. And basically that is what you start with and what you stick to throughout the computation. Bits don’t acquire any additional property for why they flip in the middle of the computation; you can always trace back and check that the state of the computer changed by following the same rules, the same flipping mechanism, that you had while you were writing down the input.

    Classical computing really is wholly equal to the sum of its parts, nothing more or less. Compare that with quantum computing, where it is actually true that the sum is MORE than its parts. Although again it is possible that I am biased because I was raised in a classical world. But I don’t think I would let this point go completely unquestioned, even if I had been raised in a quantum tribe that was philosophically sophisticated.

    Quantum computers do have degrees of freedom that are over and above what you start with, or the “observable” degrees of freedom, or at least they need descriptions that large. You suddenly need all these additional degrees of freedom to explain what is happening, but that was never needed when you were defining the input (although I guess you could start with quantum states as input). Another way of saying this: when the AKS algorithm is running, the behavior of bits, or how the state is changing, needs no more explanation than you needed for explaining multiplication by classical bit flipping. But even though both classical and quantum computers can do multiplication in near-linear time, for factoring, explaining how they reach the answer seems to require vastly different degrees of freedom in the two cases.

    That is not to say we should be completely unappreciative of classical computers, or unconcerned about whether they are surely non-magical. We should look into the culture of our own tribe, to see if it is too immersed in naive realism. Indeed, even setting aside the recent super-weird results mentioned above, just the fact that you can have Turing universality so easily should make one appreciate the Gods above. You can really write down any state you wish to write down (of course your wishing it does create a copy in your brain, but even with implicit constraints, whatever fits those constraints), and we are “free with our will,” unshackled to bring into existence any state we want (though complexity theory may stop us, depending on our resources). Then again, this is not that surprising from the anthropic point of view: I doubt intelligent beings like humans, who flexibly manipulate so many concepts, would even emerge if the rules of computation were too restrictive. But with quantum computers, nature seems to be doing true magic. Nature seems to be able to pull exponentially many rabbits out of its hat, and yet when you start examining this hat individually, atom by atom, you find basically nothing at all.

    This does not happen with classical bits, though: you can just explain a classical algorithm by examining what each part is doing, independent of what everything else is doing.
    QM in this sense very slightly weakens the physicalism thesis, and honestly, if I had set my standards for physicalism in the 1800s or so, I would just have to accept that physicalism is wrong. My “naive” intuition back then would not have said that the ability to compute state probabilities still basically counts as physicalism (though on reflection my actual intuitions would mostly agree that it does, even then). I wonder what Scott would say about this.

    In thinking about this, I notice QM shows another sign against physicalism (which we could have set ourselves as a falsifiable criterion before thinking really carefully about all this): INEFFABILITY! Under physicalism we might have thought every state could be expressed, basically, using numbers. Although unlike qualia, we can explain why there is some amount of ineffability in QM: simply because, for any system with even a few thousand qubits, there would basically be no way to write down the state. So there’s an easy reason why most quantum states are ineffable. But isn’t that just a practical issue? Well, I think the fact that going through an exponential amount of space would surely change your brain state severely anyway, because of thermodynamics, makes it a fundamental issue in a sense. Maybe Scott could say something here too; I’ve always wondered if you can somehow preserve your brain state even after exponential time.
    Although it is interesting that we cannot spout quantum states out of our mouths either (unlike classical states), because of the no-cloning theorem. In some sense, that makes novel quantum states truly not shareable?

    Anyhow, aside from all that: in thinking about the actual exponentiality of QM in the time since my last comment, I think I see why Scott is not a many-worlder as much as the others who champion it. It seems to me many worlds might not actually explain where the computation is happening (at least not any more satisfactorily than Copenhagen etc.). I hope to write another comment soon clarifying how I think I am moving much closer to Scott’s position now, regarding the interpretation business.

  42. HadamardHorns Says:

    Fascinating news! With all of the new stuff and efforts UT Austin is emphasizing, especially with the new School of Computing, is quantum computing going to be a part of the new program? Or what do you know that you can tell? Especially since it feels like Austin has a dozen quantum initiatives.

  43. Raoul Ohio Says:

    William Gasarch #33:

    It may or may not be relevant that GO is about the simplest game ever, with only a couple rules.

  44. Glassmind Duo Says:

    notGPT #40,

    Pardon me if this is an irrelevant nitpick*, but I wouldn’t say Williams’s result matches a universe where we could trade ~sqrt(n) space for ~n time steps, period. This is because his algorithm implies precomputing for ~n time steps before you can begin to cash in on that tradeoff. In contrast, QM should truly allow trading a physical resource for large savings in time steps, full stop.

    *obviously, your intent was to set up an ‘aha!’ moment as an entertaining intuition pump, and it worked.

  45. notGPT Says:

    Scott, after thinking about many worlds in the meantime, it seems to me that many worlds does not help explain any of the spooky action or exponential advantage of QM.

    Bell inequality violation still basically is nonlocal in many worlds(?) Winners of the CHSH game cannot really explain how they won the game even by invoking multiple worlds. Everyone in their own world just sees a bunch of classically-impossible-to-explain correlations. It’s not like the qubits decide to split the universe and take on correlated values as soon as they are created. The splitting happens when they are measured, likely far away from each other. Which is basically spooky action still.
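For readers who want the numbers behind these “classically impossible” correlations: the best classical CHSH strategy wins with probability 3/4, while the standard quantum strategy on a shared Bell pair wins with probability cos²(π/8) ≈ 0.854. A minimal sketch of the textbook calculation:

```python
from math import cos, pi

# Measurement angles for the optimal quantum CHSH strategy on (|00>+|11>)/sqrt(2)
A = [0, pi / 4]        # Alice's setting for question x = 0, 1
B = [pi / 8, -pi / 8]  # Bob's setting for question y = 0, 1

def p_agree(ta, tb):
    # For this state, outcomes agree with probability cos^2 of the angle difference
    return cos(ta - tb) ** 2

# Win condition: outputs must agree unless x = y = 1 (then they must differ);
# the four question pairs are uniformly random, hence the 0.25 weights.
win = sum(0.25 * (p_agree(A[x], B[y]) if not (x and y) else 1 - p_agree(A[x], B[y]))
          for x in (0, 1) for y in (0, 1))

print(f"quantum: {win:.4f}, classical optimum: 0.7500")  # quantum ~0.8536
```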

    Similarly for Shor’s algorithm running in many worlds. Subconsciously I always thought that if you gathered all the people, all the different Alices and Bobs from all the worlds, standing in front of a QC running Shor’s algorithm, they would be able to point to different classical bits from their own worlds. And everyone would see that the QC actually does manipulate and use all the resources among all these worlds (at least giving a limited, partial account of the exponentiality). But doesn’t everyone in all the worlds basically observe the same QC in superposition? And ultimately there aren’t even separate worlds, really, as the computation hasn’t decohered away from superposition while it is running. So, if there was one world with a QC running, how is the exponentiality being explained anyway? Maybe we should look for interpretations (or versions of many worlds) that actually try to engage with the exponentiality.

    Are the observations above correct? I think I am finally seeing and converging towards your attitude for many worlds (and maybe the zen anti interpretation) because of these considerations.

  46. anon Says:

    notGPT #45

    Yes, the standard many worlds is nonlocal. However, a couple of months ago I saw a comment on this blog linking a paper that presents a fully local formulation of the many worlds interpretation.

    In short: measurement does not split the entire universe but only the measured qubit and its immediate environment; at the time of the splitting there is no correlation. When the players meet again to decide whether they win or lose, the separately split worlds must be paired up to create worlds in which both players exist simultaneously. The freedom to choose who you pair with is what allows for the appearance of “impossible” correlations in these worlds.

    As for Shor’s algorithm; as long as the algorithm is running there is indeed only one world, but it is one with exponentially many moving parts. And yet, if you kick the computer open, only polynomially many pieces fall out. The puzzle is where did the other parts go? How could so much stuff just vanish without a trace?
    Copenhagen interpretation says that it was never there to begin with; all of that “wavefunction” is just a mathematical tool to describe the results of the computation.
    Many worlds says that all the parts are still there, doing what they’ve always been doing, but you can only see the parts that are entangled with you (which is at most polynomially many parts).
    (And entanglement implies superposition so there are exponentially many versions of you, each of whom sees a different subset of computer parts.)

  47. notGPT Says:

    Scott, it would be great if you could give us some reply on whether the above comments (#45, #46) are in the right direction or in crazyland.
    Also, the questions on the space complexity of algorithms (#40) may be interesting to you.

    anon #46 seems to have made a very interesting comment that might strengthen many worlds. I also found a video by Gilles Brassard related to the locality paper; very excited to watch it. Although I would like to ask anon: why am I entangled with, and able to see, only polynomially many parts, if the QC itself is harnessing exponentially many parts?

  48. Scott Says:

    notGPT #47: I have less to say about this than you might think. The Many-Worlders have detailed stories to tell about all these things: a story about the CHSH game (they say that the requirement for superluminal signaling vanishes, once you drop the crucial unstated assumption of Bell’s theorem that measurements have single outcomes), a story about Shor’s algorithm (they say that it directly demonstrates the exponentiality of the wavefunction that, according to them, is a central aspect of reality itself at the deepest level), and so forth. Where people’s mileage varies is only on how much explanatory power they think these stories provide.

  49. notGPT Says:

    Scott, why does the following experiment not work for testing copenhagen interpretation? (and possibly strengthening some form of many worlds)

    Get a large enough QC. Inside it, simulate a virtual quantum world with macroscopic objects. Take humans, detectors, or “the wall” behind the double slit 🙂 and put these objects in superposition. If you get an interference pattern, can Copenhagen be considered wrong(?) Since Copenhagen always says detectors do not go into superposition. I guess trying an actual double slit with these objects would lead to wavelengths near the Planck length, so it would be hard to do direct double-slit interference. But can we not show superposition through some other experiment, to disprove Copenhagen?
    I don’t know why this doesn’t work. Maybe such virtual worlds are not preparable, or even if that experiment happened inside a QC, we won’t know the results (maybe coherent interference events still have exponentially low probability). What goes wrong fundamentally here? Or is it that nothing goes wrong, and Copenhagen survives because it does not actually say anything like what I said above?
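As a toy illustration of the kind of test this comment imagines: interference survives only when no record of the “path” exists, so a detector held in coherent superposition versus one that decoheres gives different statistics. A minimal single-qubit sketch, with the Hadamard gate playing the role of a 50/50 beam splitter:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard as a 50/50 beam splitter

psi = H @ np.array([1, 0], dtype=complex)     # split |0> into two equal "paths"

# No which-path record: recombine the amplitudes coherently
coherent = np.abs(H @ psi) ** 2
print(coherent)        # probabilities ~[1, 0]: full interference

# A detector records the path: the paths mix classically instead
# (drop the off-diagonal terms of the density matrix)
rho = np.diag(np.abs(psi) ** 2)
decohered = np.diag(H @ rho @ H.conj().T).real
print(decohered)       # probabilities ~[0.5, 0.5]: interference washed out
```

The proposed experiment amounts to keeping an entire simulated detector in the “coherent” branch of this calculation, which is exactly what a large fault-tolerant QC could in principle do.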

    Also, I get that the criterion I am mentioning above (for many worlds to “explain” spooky action) is not really scientific. But it does feel to me that it would be a more coherent compression of our knowledge if many worlds could somehow explain the nonlocalities in a sensible way, since at that point this additional fundamental conceptual primitive of nonlocality would no longer be needed alongside our theories obeying relativity.

  50. Scott Says:

    notGPT #49: The reason why that wouldn’t straightforwardly “disprove Copenhagen” is that sophisticated Copenhagenists have always been willing to apply the standard unitary rules of QM to other people—i.e., to people besides themselves! Yes, this feels solipsistic, having as its natural limit point “I am the only observer who matters, and the system that I observe is the entire rest of the universe, meaning everything besides my own brain.” But the Copenhagenists (and their close cousins the QBists) have a whole worldview wherein the goal of physics is to account for the experiences of observers (or in more modern parlance, “agents”), rather than positing what they consider to be a mythical and absurd “God’s-eye-view, objective state of the whole universe.”

    My personal view is that the Copenhagenists have been able to get away for generations with punting on the central questions, only because agents in superposition have remained firmly in the realm of science fiction. I think that, in a world where we could routinely do the sort of experiment you describe, they’d finally need to put their cards on the table, and declare:

    – Do these superposed beings count as actual agents, but it’s still OK because each agent can retreat to his or her own experience of not being superposed, as in solipsism?

    – Or alternatively, does the very fact that we were able to do an interference experiment with some being show that it was never a “true conscious agent” in the first place, but just a simulation or p-zombie or whatever?

    In any case, though, it’s not as though the Copenhagenists make some clear empirical prediction that differs from the prediction made by the many-worlders for these scenarios.

    I regret that I won’t answer followup questions on this—this is a conversation that can easily continue for hours, days, months (indeed, we’ve already had many versions of it on this blog over the decades), and alas, I need to do other stuff this weekend!

  51. flergalwit Says:

    Scott #28, thanks that makes sense. I still think that’s a better argument for “quantum algorithms such as Shor’s cannot be explained by a classical world (or even a subexponential array of classical worlds)” than “quantum mechanics has exponential resources”. Point taken though.

    NotGPT #45 and other comments.
    One thing that’s important to understand about MWI (and you may already know this, but your talk of the non-locality of splitting made me raise it) is that splitting and even “parallel universes” are not fundamental assumptions. Rather they are approximate, emergent phenomena – or to use Scott’s expression from an older comment thread, part of the “verbal patter” surrounding the interpretation.

    Fundamentally, MWI posits that the Hilbert space state vector describes the state of reality, and that it evolves at all times by the Schrödinger equation (including during measurements). From there you can easily show that after a measurement of a qubit, the resulting qubit-environment will be in a superposition of macroscopically distinct states. These are then interpreted as “parallel worlds” because they cannot interact with each other. But everything other than the first sentence of this paragraph is a deduction about non-fundamental properties. What’s actually real in a fundamental sense is the state vector itself, and as long as the Hamiltonian underlying the theory is local, the evolution of the state vector is local, and in this sense MWI is local. The apparent non-locality of “universe splitting” and other emergent features of the theory is irrelevant to this – similar to how (to use a very loose analogy) the apparent FTL motion of a dot on the clouds resulting from a fast rotating laser doesn’t contradict locality (the dot being just an approximate, emergent feature of a particular solution).

    In fact I lean towards MWI myself, though not because of Shor’s or any other quantum algorithm (unlike the hypothetical Many Worlder in Scott’s #48). All of these algorithms have perfectly good (and local for that matter) descriptions within MWI, but if I didn’t already lean towards MWI I don’t think Shor’s etc would shift the needle much in my case.

  52. Michael Frank Martin Says:

    Nobody asked me here, but fwiw, Scott’s point about there not being a lot of explanatory power gained by Many Worlds (or Superdeterminism, Transactional Interpretation) beyond what you get from standard Copenhagen/Von Neumann QM is why I too never felt attracted to or interested in engaging in debates about foundations.

    But recently I was thinking pretty deeply about the Quantum Zeno Effect, and especially about the state of the system as the efficiency eta in the original Kwiat et al. setup approaches 1. (This is the limit you approach in detecting the presence of a bomb in the leg of an interferometer in which the bomb sits without setting it off, exploiting interaction-free measurement between the single photon that spookily manages to sense the bomb’s presence.) As you approach that limit, something interesting happens to the system — namely, the mutual information between the photon and the bomb is maximized while the thermodynamic disturbance to the measured system is minimized.

    For me, this felt like a pretty significant fact for the measurement problem. I wrote a bit more about that here: https://www.symmetrybroken.com/it-from-bit-bit-from-it/

    It’s not a proof that the measurement problem is an illusion, but it strongly suggests (to me at least) that the collapse of the wave function is an observer-dependent effect that is tied to the second law of thermodynamics. If the lossless limit is a boundary condition, then there is a thermodynamic cost to synchronizing the state of system A with system B.

    This all fits cleanly with the observational entropy framework and Carlo Rovelli’s observational approach to QM. https://arxiv.org/abs/quant-ph/9609002

    And if you let the relational perspective sink in deep, then it starts to feel like not only the collapse of the wave function, but consciousness itself, may simply be the “view from the inside” that a complex system gets when it pays for interaction with its environment with energy, producing thermodynamic heat in that environment.

    https://www.symmetrybroken.com/the-hard-problem-as-hidden-relationality/

    From the outside, we do not see a collapse. We see a system that remains entangled with its surroundings until we interact with one or both of those systems. (And thereby establish consistent histories.) The measurement problem and consciousness itself may be observer-dependent phenomena.

    I like this way of looking at things better because 1) it dissolves the measurement problem rather than resorting to a more complicated ontology to explain it, and 2) it produces testable predictions that could be ruled out. We haven’t yet been able to carry out the tests in 2), but we might be getting close, thanks to all the work going into building quantum computers?

    Anyway, offered in good faith as a reply to a fellow traveler. I get why Scott didn’t want to engage further on the topic.

  53. Dacyn Says:

    > My personal view is that the Copenhagenists have been able to get away for generations with punting on the central questions, only because agents in superposition have remained firmly in the realm of science fiction. I think that, in a world where we could routinely do the sort of experiment you describe, they’d finally need to put their cards on the table, and declare:
    >
    > – Do these superposed beings count as actual agents, but it’s still OK because each agent can retreat to his or her own experience of not being superposed, as in solipsism?
    >
    > – Or alternatively, does the very fact that we were able to do an interference experiment with some being show that it was never a “true conscious agent” in the first place, but just a simulation or p-zombie or whatever?

    I think you would need a specific kind of technological development in order to force Copenhagenists into a choice like that. Specifically, you would need a computation that was at the same time simple enough that you could compute what the results of an interference experiment should be, but complicated enough that it was plausibly agentic. I find it doubtful that there exists such a computation, even theoretically.

  54. Del Says:

    Michael #52

    Rovelli’s interpretation of quantum mechanics (and your take on it) does not solve anything. Sure, it somewhat “explains” (postulates, actually) consistency across different observers.

    But the deep problem with the “measurement problem” is not only that potential lack of consistency. In your view, Schroedinger’s cat is indeed dead and alive at the same time for you, while it is dead for the poor critter, and you postulate that there is no inconsistency here. Fine, I’ll give you that: you (Rovelli, actually) solved that part.

    But why must it be that way? Why, in some circumstances and for some observers, does the time evolution stop being unitary and collapse? You wave your hands and say interaction, entropy, you must pay the price. Fine, I “pay the price” (I actually quite dislike that expression in this context). Why can’t the cat experience being in a dead-and-alive superposition after having “paid the price”? You don’t say. Perhaps you are thinking of either (or a combination of) spontaneous wavefunction collapse, or simply a remaining quantum superposition, but one entangled with so many degrees of freedom, changing so fast, as to make it “thermodynamically disappear.”
    In that case you are really advocating for https://en.wikipedia.org/wiki/Objective-collapse_theory or for https://en.wikipedia.org/wiki/Quantum_decoherence

  55. Glassmind Duo Says:

    Michael Frank Martin #52,

    Would you mind telling us more about these testable predictions?

  56. Michael Frank Martin Says:

    Del #54

    `But why must it be that way?` Because of the second law of thermodynamics? At least the way I understand how this works, when system A becomes entangled with system B, two things happen. First, system A now knows something more about the state of system B than it did before the entanglement (and system B something more about system A). Second, system A and system B give up (lose, pay, you pick the verb here if you don’t like “pay”; it’s not that material to the picture) energy to *their* environment. What the QZE seems to show is that it is literally not physically possible to make an interaction-free measurement (of perfect efficiency eta) without at least *some* loss to the environment — in the QZE case, some leakage through the mirrors, for example. The Landauer limit seems to be the floor?
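For scale, the Landauer floor invoked here is k_B·T·ln 2 of dissipated heat per erased bit, about 3 zeptojoules at room temperature:

```python
from math import log

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact in the 2019 SI)

def landauer_bound(T_kelvin):
    """Minimum heat dissipated to erase one bit at temperature T (kelvin)."""
    return K_B * T_kelvin * log(2)

print(f"{landauer_bound(300):.3e} J per bit")  # ~2.871e-21 J at ~room temperature
```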

    The cat doesn’t experience superposition because what the cat experiences is unique to its perspective from the inside of the cat plus box system. From outside the box, we don’t know. If you haven’t read Rovelli’s 1996 paper, then I strongly suggest reading it carefully because that’s how I got there. Qualia are just what it means to experience things from the inside.

    I’m not too familiar with “objective collapse theory” but I have read and thought a bit about Penrose and Orch-OR. I believe these are unnecessary to explaining collapse (and by extension the hard problem of consciousness) if we take the relational perspective to its logical conclusion.

    The relational perspective is wholly consistent with Zurek’s decoherence theory. https://www.symmetrybroken.com/a-stationary-action-is-stable-information/ In the quantum to classical transition, as the “synchronization tax” gets paid, a stable classical state emerges as a fixed point in the multiplicative noise.

  57. Michaël Frank Martin Says:

    I got pulled away and clicked submit before I should have.

    I want to finish the thought on Zurek. Rovelli and Landauer are consistent with Zurek, but what I think I’m seeing in the QZE is a Landauer limit for the quantum-to-classical transition.

    If we view the limit where measurement efficiency eta approaches 1 as a boundary condition, then what the Quantum Zeno Effect seems to show is that a quantum-to-classical transition requires an irreversible step that dissipates at least the Landauer bound when two systems become entangled.

  58. ECCQCBreak Says:

    How many qubits are required for breaking 256-bit ECC (elliptic curve cryptography)? I thought the group multiplication and addition operations there are expensive, which would translate to a larger qubit profile than for breaking 2048-bit RSA. Are you thinking of some parametrization combined with ECC (error-correcting codes)?


Comment Policies:

After two decades of mostly-open comments, in July 2024 Shtetl-Optimized transitioned to the following policy:

All comments are treated, by default, as personal missives to me, Scott Aaronson---with no expectation either that they'll appear on the blog or that I'll reply to them.

At my leisure and discretion, and in consultation with the Shtetl-Optimized Committee of Guardians, I'll put on the blog a curated selection of comments that I judge to be particularly interesting or to move the topic forward, and I'll do my best to answer those. But it will be more like Letters to the Editor. Anyone who feels unjustly censored is welcome to the rest of the Internet.

To the many who've asked me for this over the years, you're welcome!