NVIDIA released Ising on April 15 — an open-source family of AI models purpose-built for the two hardest operational problems in quantum computing: calibrating noisy qubits and decoding quantum error correction in real time. Published on GitHub, Hugging Face, and build.nvidia.com alongside the NVIDIA CUDA-Q software platform, Ising is the first major release to frame AI not as a consumer of quantum advantage, but as the control plane that makes quantum advantage possible.
“AI is essential to making quantum computing practical,” NVIDIA CEO Jensen Huang said in the announcement. “With Ising, AI becomes the control plane — the operating system of quantum machines — transforming fragile qubits to scalable and reliable quantum-GPU systems.”
The Two Models
Ising ships as two distinct model families targeting the two bottlenecks that keep quantum computers from being useful at scale.
Ising Calibration is a 35-billion-parameter vision-language model built on Qwen3.5-35B-A3B, with roughly 3 billion active parameters per token. It was trained on multi-modal qubit measurement data and powers agentic calibration automation, letting AI agents continuously retune a quantum processor based on the processor's own measurement data. The automation target is aggressive: calibration runs that previously took days compress to hours.
NVIDIA reports Ising Calibration outperforms all existing approaches across a suite of six calibration benchmarks.
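To make the calibration loop concrete, here is a toy sketch of the measure-fit-update cycle that agentic calibration automates. Everything in it is illustrative: the Lorentzian response, the parameter names, and the sweep logic are assumptions standing in for real hardware, not NVIDIA's API. Ising Calibration's role is to replace the hand-tuned "pick the resonance" step with a model that reads the raw measurement data itself.

```python
# Hypothetical calibration loop: sweep a drive frequency around the current
# setpoint, measure the qubit's response, re-centre on the peak, repeat.
# The simulated_response function is a stand-in for real hardware readout.

def simulated_response(freq_ghz, true_resonance=5.002, linewidth=0.004):
    """Toy Lorentzian response; real readout data is what the VLM ingests."""
    detuning = freq_ghz - true_resonance
    return 1.0 / (1.0 + (detuning / linewidth) ** 2)

def calibrate_drive_frequency(center, span=0.02, points=41):
    """One calibration pass: sweep, measure, return the best frequency."""
    freqs = [center - span / 2 + span * i / (points - 1) for i in range(points)]
    responses = [simulated_response(f) for f in freqs]
    return freqs[responses.index(max(responses))]

setpoint = 5.000  # stale setpoint (GHz); the qubit has drifted to 5.002
for _ in range(3):  # each pass re-centres the sweep on the latest estimate
    setpoint = calibrate_drive_frequency(setpoint)
print(round(setpoint, 4))
```

The point of the sketch is the shape of the loop, not the physics: an agent that can interpret the measurement data can close this loop continuously, which is how days of manual retuning compress to hours.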
Ising Decoding is a pair of 3D convolutional neural networks (0.9M and 1.8M parameters) optimized for speed and accuracy respectively. These perform real-time decoding for quantum error correction — the step that turns noisy measurement data from a quantum processor into a reliable classical output. The performance numbers are significant: 2.5× faster and 3× more accurate than state-of-the-art decoding tools.
Both variants ship with a new training framework that supports arbitrary noise models through PyTorch and CUDA-Q, meaning teams can adapt the decoders to their specific hardware rather than being locked to NVIDIA’s reference noise assumptions.
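A minimal sketch of what "arbitrary noise models" buys you, in plain Python rather than the actual PyTorch/CUDA-Q framework: a pluggable error channel generates (syndrome, error) training pairs matched to a team's hardware. The function names and the i.i.d. bit-flip channel here are illustrative assumptions, not the release's API.

```python
import random

def biased_noise(n_qubits, p_flip, rng):
    """Trivial i.i.d. bit-flip channel; a real noise model would capture
    correlated, hardware-specific errors measured on the actual device."""
    return [1 if rng.random() < p_flip else 0 for _ in range(n_qubits)]

def make_training_pair(noise_model, n_qubits, rng):
    """Sample an error pattern and the syndrome a decoder would observe."""
    error = noise_model(n_qubits, rng)
    # Parity checks between neighbouring qubits (repetition-code style).
    syndrome = [error[i] ^ error[i + 1] for i in range(n_qubits - 1)]
    return syndrome, error

rng = random.Random(0)
pairs = [make_training_pair(lambda n, r: biased_noise(n, 0.1, r), 5, rng)
         for _ in range(3)]
for syndrome, error in pairs:
    print(syndrome, error)
```

Swapping `biased_noise` for a channel fitted to your own device is the whole idea: the decoder trains against the errors your hardware actually produces.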
Why Decoding Speed Is The Bottleneck
Quantum error correction is not optional for useful quantum computing. Physical qubits are noisy, so you encode each logical qubit across many physical qubits and continuously measure the syndrome — a pattern that tells you what errors happened without collapsing the computation. Turning syndrome data into corrections has to happen faster than errors accumulate, or the correction falls behind and the logical state decoheres anyway.
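The syndrome-to-correction step can be shown in miniature with the 3-qubit bit-flip repetition code, the simplest error-correcting code. This lookup-table decoder is purely pedagogical (real codes need the learned or matching-based decoders Ising replaces), but it shows how a syndrome identifies the error without revealing the logical value.

```python
# Syndrome decoding for a 3-qubit bit-flip repetition code. Each parity
# check compares a pair of qubits; the two-bit syndrome pinpoints which
# single qubit (if any) flipped, without reading out the encoded value.

def measure_syndrome(qubits):
    """Parity checks (q0 xor q1, q1 xor q2); on real hardware these are
    measured via ancilla qubits so the logical state is not collapsed."""
    return (qubits[0] ^ qubits[1], qubits[1] ^ qubits[2])

# Lookup table: syndrome -> index of the qubit to correct (None = no error).
SYNDROME_TABLE = {
    (0, 0): None,  # no error detected
    (1, 0): 0,     # q0 flipped
    (1, 1): 1,     # q1 flipped
    (0, 1): 2,     # q2 flipped
}

def decode_and_correct(qubits):
    syndrome = measure_syndrome(qubits)
    flip = SYNDROME_TABLE[syndrome]
    corrected = list(qubits)
    if flip is not None:
        corrected[flip] ^= 1
    return corrected

# A logical |1> encoded as (1, 1, 1), with a bit-flip error on qubit 2:
print(decode_and_correct([1, 1, 0]))  # -> [1, 1, 1]
```

For surface codes the syndrome is a 3D volume (two spatial dimensions plus repeated measurement rounds), which is why Ising's decoders are 3D CNNs rather than lookup tables, and why each decode has to land inside the error-accumulation window.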
This is why Ising Decoding's integration with NVIDIA's NVQLink QPU-GPU hardware interconnect matters more than the raw accuracy numbers. A 3× accuracy gain is academic if the decode takes too long to close the correction loop; NVQLink is the plumbing that moves measurement data to a GPU running Ising and back to the QPU within the coherence budget.
Adoption
The partner list is the tell. Early adopters include Academia Sinica, Fermi National Accelerator Laboratory, Harvard’s John A. Paulson School of Engineering, Infleqtion, IQM Quantum Computers, Lawrence Berkeley National Lab’s Advanced Quantum Testbed, and the UK’s National Physical Laboratory. This is not a vendor lock-in play — NVIDIA is seeding the open weights into the labs and national facilities that define the research frontier, betting the ecosystem converges on Ising the way it converged on CUDA.
What This Means for AI
Ising is the clearest sign yet that the frontier isn’t just scaling language models. Two themes from the release matter beyond quantum computing:
Vision-language models as general control systems. Ising Calibration is a VLM, not a specialist architecture. The same type of model that reads charts for a chatbot is now reading qubit measurement data to retune a cryostat. That transfer isn’t cheap — 35B parameters with a custom training framework — but it confirms that the VLM stack is flexible enough to absorb new instrumentation problems. Compare this to the shift toward open agentic models like NVIDIA Nemotron 3 Super: the architectural playbook is increasingly shared across domains.
Small CNNs still win where latency is king. The decoding models are 1-2 million parameters. In a release cycle dominated by trillion-parameter announcements, Ising is a reminder that the right tool for a real-time loop can be a few megabytes of weights. Performance ceilings have gone up for language tasks, but purpose-built small models are still how you meet a millisecond budget.
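The "few megabytes" claim is simple arithmetic, sketched here as a back-of-envelope check (fp32 weights assumed; the release does not specify the deployed precision):

```python
# Back-of-envelope: weight memory for the two Ising Decoding variants.
def model_size_mb(n_params, bytes_per_param=4):
    """Approximate weight footprint; 4 bytes/param assumes fp32."""
    return n_params * bytes_per_param / 1e6

print(model_size_mb(0.9e6))   # fast variant:     3.6 MB at fp32
print(model_size_mb(1.8e6))   # accurate variant: 7.2 MB at fp32
print(model_size_mb(1.8e6, bytes_per_param=2))  # 3.6 MB at fp16
```

Weights that small stay resident in GPU cache-adjacent memory, which is what makes a sub-millisecond decode loop plausible in the first place.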
The Bigger Bet
NVIDIA’s quantum strategy has always been hybrid — CUDA-Q treats a QPU as one more accelerator alongside GPUs, and Ising extends that by treating the AI model as the thing that makes the QPU usable at all. Whether useful fault-tolerant quantum computing arrives in 2028 or 2033, every practical path runs through a classical control system fast enough and smart enough to keep up with the quantum hardware. Ising is that control system, and it’s open.
For teams tracking the intersection of AI and scientific computing, this is the release of the week — not because quantum computing is solved, but because the playbook for getting there just got considerably more concrete.
Sources: NVIDIA Newsroom · Tom’s Hardware · NVIDIA Technical Blog · The Quantum Insider
