
Throughout this series, we have explored the extraordinary promise of quantum computing: breaking cryptography, simulating molecules, optimizing billion-variable problems. If quantum computers are so powerful, a reasonable question follows: why do we not have them solving these problems today?
The answer is what I call the Quantum Goldilocks Problem. Qubits must be perfectly isolated from their environment to preserve the delicate quantum states (superposition and entanglement) that give them their computational power. At the same time, they must be perfectly controllable, meaning we need to manipulate those states with exquisite precision to perform computations. These two requirements are in permanent, fundamental tension.
Consider what a qubit actually is. In IBM and Google's superconducting quantum computers, a qubit is a tiny circuit made of superconducting metal, cooled to approximately 15 millikelvin, roughly 180 times colder than the 2.7 kelvin of outer space. At this temperature, the circuit exhibits quantum behavior: current flows clockwise and counterclockwise simultaneously, in superposition.
To perform a computation, you must send precisely calibrated microwave pulses to rotate the qubit's state on the Bloch sphere (the mathematical representation we discussed in Part 1). Each pulse must have exactly the right frequency, amplitude, duration, and phase. A single-qubit gate might require a pulse lasting 20-50 nanoseconds, timed to sub-nanosecond precision.
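To make this concrete, here is a minimal sketch in Python of the rotation a calibrated pulse performs. A resonant pulse with Rabi frequency omega and duration t rotates the qubit about the X axis of the Bloch sphere by theta = omega * t; the specific frequency and duration below are illustrative assumptions, not hardware specifications.

```python
import numpy as np

def rx(theta: float) -> np.ndarray:
    """Single-qubit rotation about the X axis by angle theta (radians)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s],
                     [-1j * s, c]])

omega = 2 * np.pi * 12.5e6   # Rabi frequency (hypothetical): 12.5 MHz
t = 40e-9                    # pulse duration: 40 ns
theta = omega * t            # = pi, so this pulse is a full X (NOT) gate

state = np.array([1, 0], dtype=complex)   # start in |0>
state = rx(theta) @ state
print(np.abs(state) ** 2)    # ~[0, 1]: the qubit has flipped to |1>
```

Get the amplitude or duration slightly wrong and theta misses pi, leaving the qubit partially rotated, which is exactly the continuous-error problem discussed later in this piece.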
But every control channel that allows you to manipulate the qubit also provides a pathway for environmental noise to corrupt it. Every microwave line, every flux bias, every readout resonator is both a control interface and a potential noise antenna. Making qubits more controllable inherently makes them more susceptible to noise.
This is the Goldilocks constraint: not too isolated (or you cannot compute), not too exposed (or noise destroys your computation), but just right. And "just right" is extraordinarily difficult to achieve and maintain.
Environmental noise collapses fragile quantum states, limiting circuit depth and fidelity. But "noise" is an umbrella term that encompasses many distinct physical mechanisms, each attacking qubits in different ways.
Thermal noise is the most fundamental. Any system above absolute zero has thermal energy that can excite qubits out of their ground state. This is why superconducting qubits are cooled to 15 millikelvin in dilution refrigerators, massive cryogenic systems that use a mixture of helium-3 and helium-4 to achieve temperatures just above absolute zero. Even at these extreme temperatures, residual thermal photons in the microwave cavities surrounding qubits can cause bit-flip errors.
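To see why such extreme cooling is needed, consider the mean number of stray thermal photons in a microwave mode, given by the Bose-Einstein distribution. The sketch below assumes a typical transition frequency of 5 GHz (an illustrative value; exact frequencies vary by device):

```python
import numpy as np

h = 6.626e-34    # Planck constant (J*s)
kB = 1.381e-23   # Boltzmann constant (J/K)
f = 5e9          # assumed qubit transition frequency: 5 GHz

def thermal_photons(T: float) -> float:
    """Mean thermal photon number in a mode at frequency f and temperature T."""
    return 1.0 / (np.exp(h * f / (kB * T)) - 1.0)

for T in (4.0, 0.1, 0.015):   # liquid helium, 100 mK, 15 mK
    print(f"T = {T * 1000:7.1f} mK: mean thermal photons = {thermal_photons(T):.2e}")
```

At 4 kelvin the mode holds over a dozen thermal photons; at 15 millikelvin the occupation falls to roughly one photon in ten million, low enough that thermal excitation becomes a rare event rather than a constant assault.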
Electromagnetic interference from the external environment, including stray magnetic fields, electronic noise from control electronics, and even radio frequency interference from nearby equipment, can couple to qubits and disturb their states. Quantum processors are typically housed in heavily shielded environments, but perfect shielding is impossible.
Cosmic rays and ionizing radiation represent a recently recognized and particularly insidious noise source. A single cosmic ray or gamma ray striking the substrate of a superconducting quantum processor can deposit enough energy to create a burst of quasiparticles (broken Cooper pairs) that disrupt multiple qubits across the chip at once. Google's research team published a study showing that a single cosmic ray impact could corrupt dozens of qubits simultaneously, a phenomenon known as "correlated errors." This is particularly dangerous because most error correction schemes assume errors are independent.
Cross-talk between qubits occurs when the control signals intended for one qubit inadvertently affect neighboring qubits. As processors scale to more qubits packed more densely, cross-talk becomes increasingly difficult to manage. Stray couplings between qubits that are not supposed to interact generate unwanted entanglement that degrades computation.
Materials defects, particularly two-level systems (TLS) at material interfaces in superconducting circuits, are a major source of noise. These microscopic defects in the oxide layers and substrate surfaces can absorb and re-emit energy, creating fluctuating noise that is extremely difficult to eliminate. Materials science improvements (better substrate cleaning, epitaxial growth, surface treatments) are among the most impactful areas of current quantum hardware research.
Physicists characterize qubit noise using two key timescales:
T1 (relaxation time) measures how long a qubit retains its energy state. If you prepare a qubit in the |1> state, T1 is the characteristic time before it spontaneously decays to |0>, analogous to the decay of a radioactive atom. T1 is determined by the coupling between the qubit and energy-dissipating modes in its environment. Current superconducting qubits achieve T1 times of roughly 100-500 microseconds. Trapped-ion qubits can achieve T1 times of seconds to minutes.
T2 (dephasing time) measures how long a qubit retains the phase coherence of its superposition state. Even if a qubit does not lose energy (T1 decay), fluctuations in its environment can cause the relative phase between |0> and |1> to drift randomly, destroying the superposition without changing the energy. T2 is always less than or equal to 2*T1, and in practice is often significantly shorter. Current superconducting qubits achieve T2 times of roughly 50-200 microseconds.
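A rough way to internalize these timescales: treat both processes as exponential decay and ask how much survives a circuit of a given depth. A minimal sketch, using illustrative values from the ranges quoted above:

```python
import numpy as np

T1 = 300e-6        # relaxation time: 300 microseconds (illustrative)
T2 = 100e-6        # dephasing time: 100 microseconds (T2 <= 2 * T1)
gate_time = 40e-9  # one gate: 40 nanoseconds

def survival(t: float, T: float) -> float:
    """Probability of no decay/dephasing after time t (exponential model)."""
    return np.exp(-t / T)

for n_gates in (1_000, 10_000, 100_000):
    t = n_gates * gate_time
    print(f"{n_gates:>7} gates ({t * 1e6:6.1f} us): "
          f"P(no T1 decay) = {survival(t, T1):.3f}, "
          f"P(phase intact) = {survival(t, T2):.3e}")
```

Even at 10,000 gates, coherence has all but vanished: the phase survives with probability under 2%. This is the hard ceiling on circuit depth that error correction exists to lift.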
Gate fidelity measures how closely a quantum gate operation matches the intended transformation. A fidelity of 99.9% means that, on average, one in every 1,000 gate operations produces an error. This sounds good until you consider that a useful quantum computation might require millions or billions of gate operations.
Current state-of-the-art fidelities: single-qubit gates routinely exceed 99.9%, while the best two-qubit gates reach roughly 99.5-99.7% on superconducting processors and approach 99.9% on trapped ions.
For context, running a 1,000-gate circuit with 99.9% gate fidelity means the probability that no error occurs in the entire circuit is 0.999^1000 ≈ 37%. For a 10,000-gate circuit, it drops to roughly 0.005%. For the millions of gates required by algorithms like Shor's, the probability of an error-free computation is effectively zero.
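The arithmetic is worth reproducing yourself; a short loop makes the scaling painfully clear:

```python
# Probability an entire circuit runs error-free: (gate fidelity) ** (gate count)
for fidelity in (0.99, 0.999, 0.9999):
    for n_gates in (1_000, 10_000, 1_000_000):
        print(f"fidelity = {fidelity}: {n_gates:>9} gates -> "
              f"P(error-free) = {fidelity ** n_gates:.3e}")
```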
This is why raw qubit counts alone are misleading. A processor with 1,000 qubits at 99% fidelity is far less useful than one with 100 qubits at 99.99% fidelity. Quality matters far more than quantity in the current era.
Classical computers also experience bit errors: cosmic rays can flip a bit in RAM, electrical noise can corrupt a signal on a bus. Classical error correction is straightforward: use redundancy. Store each bit three times and take a majority vote. If one copy is corrupted, the other two outvote it.
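In code, the classical strategy fits in a few lines:

```python
from collections import Counter

# Classical repetition code: store each bit three times, majority-vote on read.
def encode(bit: int) -> list[int]:
    return [bit, bit, bit]

def decode(copies: list[int]) -> int:
    return Counter(copies).most_common(1)[0][0]

word = encode(1)
word[0] ^= 1         # a cosmic ray flips one copy
print(decode(word))  # the other two copies outvote it: prints 1
```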
Quantum error correction (QEC) must overcome three unique obstacles that make it profoundly more difficult than classical error correction.
The no-cloning theorem, a fundamental result of quantum mechanics, states that it is impossible to create an exact copy of an arbitrary unknown quantum state. This means you cannot simply replicate a qubit three times and take a majority vote. The most basic classical error correction strategy is forbidden by physics.
In classical systems, you can freely inspect a bit to check for errors without affecting its value. Measuring a qubit, however, collapses its superposition. You cannot look at a qubit to see if it has an error without destroying the very quantum state you are trying to protect.
Classical bits experience discrete errors: a 0 flips to a 1, or vice versa. Qubit errors are continuous: a qubit can rotate by an arbitrarily small angle away from its intended state. Error correction must handle this continuous error spectrum, not just bit flips.
Despite these obstacles, quantum error correction is possible, a remarkable theoretical achievement first shown independently by Peter Shor (yes, the same Shor) and Andrew Steane in the mid-1990s.
The core idea is to encode a single logical qubit across multiple physical qubits in such a way that errors can be detected and corrected without measuring (and thus destroying) the encoded quantum information. This is accomplished through syndrome measurement.
Syndrome measurement works by measuring specific correlations (called stabilizers) between the physical qubits in the code block. These measurements reveal whether an error has occurred and what type of error it is (bit flip, phase flip, or both) without revealing the actual logical state of the encoded qubit. The analogy is subtle but powerful: you learn that "qubits 3 and 7 disagree" without learning what value either qubit holds.
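The simplest concrete illustration is the three-qubit bit-flip code, far smaller than the codes discussed next but built on the same logic. The sketch below simulates it classically: the parity checks play the role of stabilizers, revealing the location of an error without ever reading out the encoded value.

```python
import random

def syndrome(q: list[int]) -> tuple[int, int]:
    """Stabilizer-style parity checks: 'do qubits 0 and 1 agree?
    Do qubits 1 and 2 agree?' Neither check reveals the encoded value."""
    return (q[0] ^ q[1], q[1] ^ q[2])

# Map each syndrome to the single qubit it implicates.
CORRECTION = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

logical = random.choice([0, 1])
block = [logical] * 3              # encode one logical bit across three qubits
block[random.randrange(3)] ^= 1    # inject a random single bit-flip error

flip = CORRECTION[syndrome(block)]
if flip is not None:
    block[flip] ^= 1               # apply the indicated correction
print("recovered:", block == [logical] * 3)  # always True for single errors
```

A real quantum code must also handle phase flips (the Steane and surface codes below do), but the principle of measuring only correlations carries over directly.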
The Steane code uses 7 physical qubits to encode 1 logical qubit and can correct any single-qubit error. It was one of the first practical QEC codes and demonstrated the feasibility of the approach.
Surface codes are the leading candidate for large-scale quantum error correction. A surface code encodes one logical qubit in a two-dimensional lattice of physical qubits, with data qubits and ancilla (measurement) qubits arranged in an alternating pattern. Syndrome measurements performed on the ancilla qubits detect errors on nearby data qubits. The surface code has the highest known error threshold of any QEC code: if physical error rates are below approximately 1% per gate, the surface code can suppress logical error rates to arbitrarily low levels by increasing the size of the lattice. This threshold is within reach of current hardware.
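A commonly used rule of thumb captures the below-threshold behavior: the logical error rate falls exponentially with code distance d, roughly as p_logical ~ A * (p / p_th)^((d+1)/2). The prefactor A in the sketch below is an assumed illustrative constant, not a measured value:

```python
P_TH = 0.01   # approximate surface-code threshold, per the text
A = 0.1       # prefactor (assumed for illustration)

def p_logical(p_phys: float, d: int) -> float:
    """Rule-of-thumb logical error rate for a distance-d surface code."""
    return A * (p_phys / P_TH) ** ((d + 1) // 2)

for d in (3, 7, 11, 15):   # surface code distances are odd
    print(f"d = {d:2}: p_logical = {p_logical(0.001, d):.1e}")  # 0.1% physical
```

With physical errors at one tenth of the threshold, each increase of the distance by two buys another factor of ten: from 1e-3 at distance 3 down to 1e-9 at distance 15.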
Different qubit technologies bring different strengths to the noise and error correction challenge:
Superconducting qubits (IBM, Google): The most mature technology, with the largest processors and fastest gate speeds (tens of nanoseconds). Weaknesses include relatively short coherence times (hundreds of microseconds), sensitivity to materials defects, and the need for extreme cryogenic cooling. IBM's roadmap envisions scaling through modular architectures connecting multiple cryogenic processors.
Trapped ions (IonQ, Quantinuum): Individual ions held in electromagnetic traps, manipulated with laser pulses. Strengths include the highest gate fidelities, longest coherence times (seconds to minutes), and all-to-all connectivity (any qubit can directly interact with any other). Weaknesses include slower gate speeds (microseconds) and challenges in scaling to large numbers of ions in a single trap. Quantinuum's H-series processors have demonstrated the highest two-qubit gate fidelities in the industry.
Photonic qubits (Xanadu, PsiQuantum): Photons as qubits, manipulated with optical components. Strengths include room-temperature operation and natural suitability for networking (photons travel through fiber optic cables). Weaknesses include the probabilistic nature of photon-photon interactions and high loss rates. PsiQuantum is pursuing a manufacturing-first approach, partnering with GlobalFoundries to fabricate photonic quantum chips using existing semiconductor fabs.
Topological qubits (Microsoft): A fundamentally different approach that encodes quantum information in topological properties of exotic quasiparticles (anyons), which are inherently resistant to local noise. If realized, topological qubits could dramatically reduce the error correction overhead. Microsoft announced progress on Majorana-based topological qubits in 2025, but the technology remains less mature than competing approaches.
The overhead of quantum error correction is staggering. Current estimates for the surface code suggest that between 1,000 and 10,000 physical qubits are needed to create a single logical qubit with error rates low enough for useful computation. The exact ratio depends on the physical error rate: lower physical error rates require fewer physical qubits per logical qubit.
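Combining the scaling rule sketched earlier with a target logical error rate gives a back-of-the-envelope version of these overhead estimates. A distance-d surface code patch uses roughly 2d^2 - 1 physical qubits; the constants below remain illustrative assumptions:

```python
P_TH, A = 0.01, 0.1   # same illustrative constants as before

def distance_needed(p_phys: float, p_target: float) -> int:
    """Smallest (odd) code distance whose logical error rate beats the target."""
    d = 3
    while A * (p_phys / P_TH) ** ((d + 1) // 2) > p_target:
        d += 2
    return d

for p_phys in (0.005, 0.001):
    d = distance_needed(p_phys, 1e-12)   # target: one error per trillion ops
    print(f"p_phys = {p_phys}: distance {d}, "
          f"~{2 * d * d - 1} physical qubits per logical qubit")
```

Under these assumptions, improving the physical error rate from 0.5% to 0.1% shrinks the overhead from roughly 10,000 physical qubits per logical qubit to under 1,000, which is exactly why the 1,000-to-10,000 range above spans an order of magnitude.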
Consider the requirements for running Shor's algorithm to factor a 2,048-bit RSA key: widely cited estimates call for several thousand logical qubits, which at the overhead ratios above translates into millions of physical qubits. One influential 2019 analysis put the figure at roughly 20 million noisy physical qubits running for about eight hours.
For quantum chemistry simulations of commercially relevant molecules: estimated logical qubits needed range from hundreds to tens of thousands, with similar overhead ratios.
These numbers explain why, despite having processors with over 1,000 physical qubits, we cannot yet run these algorithms. The qubits we have are physical qubits, not logical qubits. The gap between "number of qubits on a chip" and "number of qubits available for computation" is enormous.
In December 2024, Google announced results from their Willow quantum processor that represented a critical milestone for error correction. For the first time, they demonstrated that increasing the size of a surface code (adding more physical qubits) actually decreased the logical error rate, a result called "below threshold" operation.
This may sound like an obvious expectation, but it is not. If physical error rates are too high, adding more qubits to an error-correcting code actually makes things worse, because the additional qubits introduce more errors than the code can correct. Operating below threshold means that the hardware has crossed a critical quality boundary: more qubits genuinely equals less error. This is the foundation upon which fault-tolerant quantum computing will be built.
Google's result showed an exponential suppression of errors as code size increased, with logical error rates cut roughly in half for each increase in code distance. Extrapolating this trend suggests that logical error rates suitable for practical computation are achievable with surface codes of manageable size, likely within the next decade.
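In miniature, that trend looks like the sketch below; the starting error rate is chosen for illustration only:

```python
# If each step up in code distance roughly halves the logical error rate
# (the trend Google reported), errors fall exponentially with distance.
p_logical = 3e-3   # illustrative starting point at distance 3
for d in (3, 5, 7, 9, 11):
    print(f"distance {d:2}: logical error rate = {p_logical:.1e}")
    p_logical /= 2  # halve per step, per the reported trend
```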
The quantum computing industry commonly describes the progression in three phases:
NISQ Era (Now - ~2028): Noisy Intermediate-Scale Quantum. Hundreds to low thousands of noisy physical qubits. Useful for variational algorithms (VQE, QAOA), quantum machine learning research, and small demonstrations. Not yet capable of outperforming classical computers for practical problems at commercially relevant scales.
Early Fault-Tolerant Era (~2028 - ~2035): Tens of thousands of physical qubits with below-threshold error rates, encoding tens to hundreds of logical qubits. Sufficient for quantum chemistry simulations of interesting molecules, modest optimization problems, and potentially cryptographic attacks on smaller key sizes. This is when quantum advantage for specific scientific problems becomes achievable.
Full Fault-Tolerant Era (~2035+): Millions of physical qubits encoding thousands to tens of thousands of logical qubits. Capable of running Shor's algorithm at cryptographically relevant scales, simulating large biological molecules, and solving optimization problems at industrial scale.
A note on terminology: "quantum advantage" (sometimes called "quantum supremacy," though that term has fallen out of favor) means that a quantum computer has solved a specific problem faster than any known classical algorithm running on any available classical hardware.
Google claimed quantum supremacy in 2019 with their Sycamore processor, performing a specific sampling task in 200 seconds that they estimated would take a classical supercomputer 10,000 years. IBM disputed this claim, arguing that with optimized classical algorithms and sufficient classical storage, the task could be performed in days. The debate highlighted that quantum advantage is relative to the best known classical algorithm, and classical algorithms continue to improve.
True, practical quantum advantage (solving a problem of genuine commercial or scientific value faster on a quantum computer than on any classical alternative) has not yet been achieved. It remains the central milestone that the entire field is working toward.
From NISQ to fault-tolerant quantum computing (FTQC), the road to practical machines runs through engineering better qubits and scalable quantum error correction. The fundamental challenge is not algorithmic (we know what to compute) but physical and engineering (can we build hardware reliable enough to compute it?).
The progress is real. Gate fidelities are improving. Error correction has been demonstrated below threshold. Multiple hardware approaches are advancing in parallel, providing redundancy and cross-pollination of ideas. The question is no longer whether fault-tolerant quantum computing is possible, but when and on which hardware platform it will be achieved at scale.
In the final installment of this series, we turn from hardware to humans. The quantum future will not be built by machines alone; it requires a new kind of workforce. Part 7 explores the five roles every organization needs to begin planning for today to be quantum-ready when that future arrives.

Ryan previously served as a PCI Professional Forensic Investigator (PFI) of record for 3 of the top 10 largest data breaches in history. With over two decades of experience in cybersecurity, digital forensics, and executive leadership, he has served Fortune 500 companies and government agencies worldwide.
