Bringing AI-Accelerated Quantum Error Correction to Neutral-Atom Logical Qubits

Combining neutral-atom physics, leakage-aware simulation, and the NVIDIA Ising family of open models for decoding to move faster toward practical fault-tolerant quantum computing

Quantum error correction (QEC) is often described as a quantum problem. In practice, it is also a classical computing problem.

Every round of QEC generates information that must be processed quickly enough to keep up with the quantum hardware. If decoding lags, errors accumulate. If decoding scales poorly, logical performance stalls long before the hardware reaches its full potential. That is why decoding is not a side issue in fault tolerance. It is one of the central bottlenecks.

Infleqtion, with our Sqale neutral-atom quantum computer, is building toward tight coordination between neutral-atom logical qubits and accelerated classical compute. Today, we’re excited to announce an integration with the newly released NVIDIA Ising Decoding: an AI predecoder, running on a GPU, that supercharges our QEC technologies.

The core idea is that instead of sending the full raw syndrome stream into a downstream decoder, NVIDIA Ising Decoding first sparsifies the syndrome data, reducing the decoding burden while preserving the information needed for high-performance correction. This is the kind of hybrid quantum-classical workflow that accelerates our path to useful quantum computing.

In this post, we highlight how utilizing NVIDIA Ising will benefit Infleqtion’s neutral-atom roadmap with Sqale.

Decoding is central to fault tolerance

As logical qubits scale, the classical side of the stack has to scale with them. NVIDIA Ising adds online real-time decoding, GPU-accelerated algorithmic decoders, and low-latency AI decoder inference infrastructure, all aimed at dramatically reducing the classical-compute and latency requirements of fault-tolerant workflows. NVIDIA also highlights AI-based decoding as a promising route to higher accuracy and lower latency in QEC workloads.

That direction aligns naturally with what we see on neutral atoms. As we move from proof points to larger-scale logical architectures, we need decoders that are not only accurate but also achieve practical runtimes in a real-time loop. This constraint becomes even more pressing as neutral-atom systems achieve significant speedups in measurement times. Recent work by Professor Mark Saffman, Infleqtion’s Chief Scientist for Quantum Information, presented a path to a 60-microsecond readout time.

The opportunity with an AI predecoder is to perform a fast learned pass on GPU, sparsify the syndrome representation, and bring a simpler decoding problem to the next stage. In other words, move some of the classical burden into a pretrained inference step that is cheap to execute at runtime. For QEC, that is powerful because throughput and latency matter as much as asymptotic decoder quality.
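To make the idea concrete, here is a deliberately simplified sketch of predecoding, not NVIDIA’s actual model: a cheap local pass that resolves adjacent defect pairs in a one-dimensional syndrome and forwards only the sparsified residual to the global decoder. The function name and the greedy pairing rule are our own illustration.

```python
import numpy as np

def predecode_adjacent_pairs(syndrome: np.ndarray) -> tuple[np.ndarray, list]:
    """Toy predecoder: greedily resolve adjacent defect pairs locally.

    An adjacent pair of defects in a 1D syndrome is almost certainly a
    single data error between them, so it can be corrected immediately.
    Only the unmatched defects (the sparsified residual) are forwarded
    to the global decoder.
    """
    residual = syndrome.copy()
    local_corrections = []
    i = 0
    while i < len(residual) - 1:
        if residual[i] and residual[i + 1]:
            local_corrections.append((i, i + 1))
            residual[i] = residual[i + 1] = 0
            i += 2
        else:
            i += 1
    return residual, local_corrections
```

On a syndrome like `[0, 1, 1, 0, 0, 0, 1, 0]`, the adjacent pair at positions 1 and 2 is resolved locally, and only the isolated defect at position 6 reaches the downstream decoder.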

Neutral atoms are qubits in theory, but qudits in practice

Another reason this is exciting for Infleqtion is that neutral-atom systems are not idealized two-level systems.

Figure 1: Taking into account atomic structure (mF), Infleqtion’s neutral atom qubits have relevant leakage levels that fall into two categories we refer to as |0L> and |1L>; these states produce the same measurement results as |0> (dark image) and |1> (bright image) respectively. Atom loss (state |L> ) presents an additional path out of the computational space, detectable via a secondary loss measurement. In the above, gray lines indicate low-probability error processes that produce leakage out of the computational space, which is mitigated by erasure conversion.

In practice, we must contend with additional states outside the computational subspace, including leakage states |0L> and |1L> that are indistinguishable from |0> and |1> under state-selective readout. In many cases, leakage and loss (which itself constitutes another non-computational level |L>) can be preferable to other error channels, thanks to the availability of erasure-conversion techniques such as leakage detection units and direct loss measurement. A useful way to think about this is:

  • The hardware evolves in a richer multi-level state space.
  • Readout often still compresses that behavior back into a binary outcome (or a ternary one when loss is independently detectable, as for Sqale).
  • The decoder sees the consequences of leakage indirectly unless the model is explicitly leakage-aware.

For Infleqtion’s neutral-atom systems, a simplified leakage-aware binary mapping can be written as:

def map_readout_state(level: int) -> int:
    """
    Example leakage-aware binary readout map.

    Computational basis:
      0 -> |0>
      1 -> |1>

    Leakage states:
      2 -> |0L>
      3 -> |1L>

    Leaked population may still be observed through the same measurement channel,
    depending on the readout mechanism and analysis pipeline.
    """
    zero_like = {0, 2}
    one_like = {1, 3}

    if level in zero_like:
        return 0
    if level in one_like:
        return 1
    raise ValueError(f"Unexpected state label: {level}")

That is, even when the underlying atom occupies a leaked state, the observed readout may still collapse into a “0-like” or “1-like” bucket. From a decoder perspective, this matters a lot: the syndrome stream is now shaped not only by Pauli-type noise, but by out-of-subspace population as well.
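As a sketch of the ternary option mentioned above, where Sqale’s secondary loss measurement independently flags atom loss, the binary map can be extended as follows (the integer state labels, including 4 for the loss state |L>, are our own illustrative convention):

```python
from enum import IntEnum

class Readout(IntEnum):
    ZERO = 0
    ONE = 1
    LOST = 2  # flagged by a secondary loss measurement, not by brightness

# Hypothetical labels: 0 -> |0>, 1 -> |1>, 2 -> |0L>, 3 -> |1L>, 4 -> |L>
def map_readout_ternary(level: int) -> Readout:
    """Ternary readout map: loss is detected independently (an erasure-like signal)."""
    if level == 4:
        return Readout.LOST
    if level in (0, 2):
        return Readout.ZERO  # |0> and |0L> both read dark
    if level in (1, 3):
        return Readout.ONE   # |1> and |1L> both read bright
    raise ValueError(f"Unexpected state label: {level}")
```

Leakage within the computational-looking buckets still hides in the binary outcome, but loss now surfaces as its own symbol that downstream stages can treat as erasure.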

Integration with Leaky maintains strong logical performance

We integrated the NVIDIA Ising AI-assisted decoding flow with Leaky, so the evaluation now includes the impact of leakage rather than assuming an idealized two-level noise model. That is a more realistic test for neutral-atom QEC, and a more accurate way to assess whether an acceleration strategy will be successful on real hardware.

What we found is that logical performance remains strong even after including leakage effects. In other words, we do not have to choose between realism and speed. We can model a more realistic neutral-atom error process and still see competitive logical error rates, while benefiting from a substantially faster decoding path.

Figure 2: Heuristic simulations using `leakysim` and the NVIDIA Ising AI predecoder (pre-trained inference only), inserting leaked-state errors on top of a fixed Pauli noise model, for a distance-9 surface-code memory experiment (262,144 shots per basis)
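The flavor of this kind of heuristic evaluation can be conveyed with a toy model (ours, not the Figure 2 pipeline): a distance-3 repetition code under bit-flip noise, plus a leakage channel that forces a bright, “1-like” readout, decoded by majority vote.

```python
import random

def logical_error_rate(p_flip: float, p_leak: float, shots: int, seed: int = 0) -> float:
    """Toy distance-3 repetition-code memory experiment with leakage.

    Encodes logical 0 on three data qubits. Each qubit independently
    leaks with probability p_leak, in which case its readout is stuck at
    a bright, '1-like' value; otherwise it suffers a bit flip with
    probability p_flip. Decoding is a simple majority vote.
    """
    rng = random.Random(seed)
    failures = 0
    for _ in range(shots):
        bits = []
        for _ in range(3):
            if rng.random() < p_leak:
                bits.append(1)  # leaked to a '1-like' level: always reads bright
            else:
                bits.append(1 if rng.random() < p_flip else 0)
        if sum(bits) >= 2:  # majority vote decodes to 1: logical failure
            failures += 1
    return failures / shots
```

Sweeping `p_leak` in such a model shows how much of the logical error budget leakage consumes relative to Pauli noise alone, which is the kind of question a leakage-aware evaluation has to answer.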

This result fits with the broader direction of our platform. Infleqtion has already demonstrated strong neutral-atom performance, including 99.73% post-selected two-qubit fidelity and 12 logical qubits. Our roadmap includes an explicit step from error detection to loss correction.

The next step: from leakage to erasure

In neutral-atom platforms, there are scenarios where leakage is naturally converted to loss and then detected. Circuit-level gadgets may also be used for leakage detection. Once leakage or loss is detected, it becomes an erasure signal. This does not necessarily tell us the error value, but we do know where the problem occurred. That information is incredibly valuable for decoding.

A long line of QEC work has shown that erasure information can dramatically improve decoding performance because the decoder is no longer searching blindly over all locations. Recent work continues to push erasure decoding forward, including new results on degenerate quantum erasure decoding. Infleqtion’s own roadmap materials now explicitly call out “loss correction” as the next stage after error detection.

That suggests a compelling future architecture for neutral atoms:

  1. identify detectable loss or leakage,
  2. surface that information as erasure,
  3. train the NVIDIA Ising AI predecoder on this richer signal,
  4. feed a much easier decoding problem into the downstream stack.
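One standard way the richer signal in step 3 eases step 4 is edge reweighting in a matching decoder: an erased location is an error with probability 1/2 at a known position, so its log-likelihood weight drops to zero and matchings can route through it for free. A minimal sketch (function names are ours):

```python
import math

def edge_weight(p_err: float, erased: bool) -> float:
    """Log-likelihood weight log((1 - p) / p) for a matching-decoder edge.

    An erased edge carries an error with probability 1/2 at a known
    location, so its weight is log(1) = 0: the decoder can use it freely.
    """
    p = 0.5 if erased else p_err
    return math.log((1 - p) / p)

def reweight_for_erasures(edges, p_err, erased_edges):
    """Surface detected loss/leakage as erasure by reweighting the decoder graph."""
    return {e: edge_weight(p_err, e in erased_edges) for e in edges}
```

The downstream matching problem then concentrates its search on the non-erased edges, which is exactly the “much easier decoding problem” the pipeline aims to produce.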

This is not just about faster decoding; it is about better-informed decoding. That makes it a strong fit for the neutral-atom modality of quantum computing: the physics provides structure that the decoder can then take advantage of.

Why this matters for Sqale

This work fits squarely into Infleqtion’s broader hybrid-computing strategy. Our recent Q4Bio post emphasized a full-stack GPU+QPU workflow spanning training, simulation, and real-time feedback. The same theme appears here: the road to useful logical qubits is not just better atoms or better gates, but tighter integration with classical computing.

We are working closely with NVIDIA to bring these AI-driven innovations into our Sqale QPU roadmap. The NVIDIA NVQLink architecture is designed for low-latency QPU↔GPU workflows, and Infleqtion has already publicly announced both NVQLink integration for Sqale and an NVQLink-enabled Sqale system planned for Illinois.

That is why this line of work matters beyond one decoder benchmark. It is part of the operating model for fault-tolerant neutral-atom computing:

  • physical qubits with strong native performance,
  • logical architectures built for scale,
  • GPU-resident classical acceleration for decoding and control,
  • and AI models that make the classical side faster and smarter.

Conclusion

Fault tolerance will not be won by quantum hardware alone. It will be won by systems that combine the right hardware, the right control stack, and the right classical acceleration. For QEC, decoding sits directly at that interface. And for neutral atoms, leakage is part of the real physics that any serious decoder must confront. Handled properly, with erasure conversion and erasure-aware decoding, leakage (including loss) becomes less harmful than computational-space errors.

That is why we are excited about NVIDIA Ising. By sparsifying syndrome data before the full decoding stage, Ising Decoding points toward a more scalable path for real-time QEC. By integrating that workflow with Leaky, we can evaluate it under a more realistic neutral-atom noise model. And by looking ahead to erasure conversion and erasure-aware decoding, we can start to visualize how we will build an even stronger stack, one that does not just tolerate the quirks of the hardware but turns them into information the decoder can use. A stack that brings together neutral-atom hardware, logical qubits, GPUs, and AI into one coherent architecture for fault-tolerant quantum computing.