Thursday, April 30, 2026

Google AI breakthrough means chatbots use six times less memory during conversations without compromising performance

CitrixNews Staff

TurboQuant transforms data in working memory into a compressed version that the AI model can then use just like the original data, but using much less memory. (Image credit: Google)

Google engineers have developed a method to compress artificial intelligence (AI) data so that it requires up to six times less working memory to function.

With the new system, called TurboQuant, AI algorithms could retain the same amount of information and perform equally powerful computations, but with significantly less memory hardware, the company says.

As a chatbot generates text, it keeps the conversation's context in a chunk of working memory called the key-value (KV) cache. For example, if you ask ChatGPT what the weather will be like tomorrow in your area, it may store words like "weather" and "tomorrow," along with your location and partial guesses, like "It might be rainy," in the KV cache while it generates its response. The larger an AI model's KV cache, the more information the model can keep track of at once and the more powerful it is.

A single sentence uses only a few dozen tokens — the building blocks of AI prompts and output text — but storing hundreds of thousands of tokens in the KV cache for more sophisticated work can require tens of gigabytes of memory. These memory requirements scale linearly with the number of users, and ChatGPT is known to receive billions of requests every day.
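The scale of those memory requirements can be seen with a back-of-the-envelope calculation. The function below is an illustrative sketch, not Google's accounting: the layer, head and dimension counts are assumed values roughly in line with a Llama-3.1-8B-class model storing keys and values at 16-bit precision.

```python
def kv_cache_bytes(tokens, layers=32, kv_heads=8, head_dim=128, bytes_per_value=2):
    """Rough KV cache size: one key and one value vector per token, per layer.

    All parameters after `tokens` are illustrative assumptions, not
    published TurboQuant or model figures.
    """
    # factor of 2 = keys + values
    return tokens * layers * kv_heads * head_dim * 2 * bytes_per_value

# A few dozen tokens take only megabytes of cache...
short_prompt = kv_cache_bytes(50)          # 6,553,600 bytes, about 6.5 MB
# ...while hundreds of thousands of tokens reach tens of gigabytes.
long_context = kv_cache_bytes(500_000)     # 65,536,000,000 bytes, about 65 GB
```

Under these assumptions, a 500,000-token context alone consumes on the order of 65 GB — before multiplying by the number of simultaneous users.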

The compression algorithm decreases the amount of working memory an AI model needs to perform the same computations. It does so via a process called quantization, which represents each stored value with fewer bits.

Although Google has been using quantization on its neural networks for many years, it has typically been applied statically — that is, the compression is done once and doesn't change as the model runs. The difference with TurboQuant is that it reduces the KV cache's memory in real time — a tricky feat given that it must keep the quantized data in the cache accurate and up-to-date while the model generates outputs.
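The basic idea of quantization can be sketched in a few lines: map 32-bit floating-point values to 8-bit integers plus a single scale factor, shrinking storage fourfold. This is a generic symmetric-quantization example for illustration, not TurboQuant itself.

```python
import numpy as np

def quantize_int8(x):
    """Symmetric quantization: store int8 codes plus one float scale."""
    max_abs = float(np.max(np.abs(x)))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.round(x / scale).astype(np.int8)  # 1 byte per value instead of 4
    return q, scale

def dequantize(q, scale):
    """Recover an approximation of the original values."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
x = rng.standard_normal(64).astype(np.float32)
q, s = quantize_int8(x)
x_hat = dequantize(q, s)
# Each recovered value is within half a quantization step of the original.
```

Going from 4 bytes to 1 byte per value is a 4x cut; sub-byte codes, as dynamic schemes like TurboQuant target, push the ratio further, at the cost of having to keep the codes accurate as new tokens arrive.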


In a statement, Google representatives said TurboQuant "showed great promise for reducing key-value bottlenecks without sacrificing AI model performance" in tests on Meta's Llama 3.1-8B, Google's Gemma and Mistral AI models.

"This has potentially profound implications for all compression-reliant use cases, including and especially in the domains of search and AI," they added.

Is this Google's "DeepSeek moment"?

Google says TurboQuant could reduce the KV cache's size by a factor of at least six, using two methods: PolarQuant and Quantized Johnson-Lindenstrauss (QJL).

To interpret these methods, it is important to understand that data in the AI's working memory is stored as vectors — groups of numbers that can be pictured as arrows with a defined length (magnitude) and direction (angle). Vectors can be mathematically "rotated," meaning they are reexpressed in a different, common coordinate system.

PolarQuant reexpresses the AI data from Cartesian coordinates (positions along X, Y and Z axes) into polar coordinates (a distance and angles around a single point). The rotation aligns the angles of the vectors more consistently, thereby allowing them to be compressed into fewer bits with less additional scaling information. The vectors then go through the QJL optimization method, where they are adjusted very slightly to correct any computational errors stemming from the quantization.
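A toy two-dimensional sketch shows what reexpressing a vector in polar coordinates and quantizing its angle might look like. This is a hypothetical illustration of the general idea, not the published PolarQuant algorithm: real KV vectors have hundreds of dimensions and the bit widths here are assumptions.

```python
import numpy as np

def to_polar(v):
    """2-D Cartesian (x, y) -> polar (radius, angle in radians)."""
    x, y = v
    return np.hypot(x, y), np.arctan2(y, x)

def quantize_angle(theta, bits=4):
    """Snap an angle to one of 2**bits evenly spaced directions."""
    levels = 2 ** bits
    step = 2 * np.pi / levels
    return int(np.round(theta / step)) % levels  # integer code in [0, levels)

def dequantize_angle(code, bits=4):
    """Recover the angle that the integer code represents."""
    return code * (2 * np.pi / 2 ** bits)

# The vector (1, 1) points at 45 degrees (pi/4 radians).
r, theta = to_polar((1.0, 1.0))
code = quantize_angle(theta)        # a 4-bit integer instead of a float
theta_hat = dequantize_angle(code)  # close to the original direction
```

Storing a 4-bit direction code instead of full-precision coordinates is where the compression comes from; a residual correction step (QJL's role, per the article) would then nudge results to offset the rounding.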

In a post on the social media platform X, Matthew Prince, CEO of web security company Cloudflare, called the compression breakthrough "Google's DeepSeek" — a reference to the surprise release of the Chinese firm's AI model that achieved comparable results to leading chatbots at a fraction of the cost.

Google's March 24 unveiling of TurboQuant sent stocks in memory companies like SanDisk, Western Digital and Seagate plummeting. But although the discovery could prove pivotal in improving AI efficiency, it is still at the lab stage and has yet to be widely rolled out in real-world models.


Moreover, TurboQuant compresses only the working memory used during inference, the phase in which the model generates a response to a prompt. Training a model typically requires up to four times more memory than inference, so the overall impact on memory demand will be relatively small.

This is what Merrill Lynch banker Vivek Arya explained to concerned investors in a note, according to ZDNet: "(The) 6x improvement in memory efficiency [will] likely [lead] to 6x increase in accuracy (model size) and/or context length (KV cache allocation), rather than 6x decrease in memory."

Google officially unveiled TurboQuant at ICLR 2026, which took place April 23-27 in Rio de Janeiro, and will formally present PolarQuant and QJL at AISTATS 2026 in Tangier, Morocco, in early May.

Fiona Jackson

Fiona Jackson is a freelance writer and editor primarily covering science and technology. She has worked as a reporter on the science desk at MailOnline, and also covered enterprise tech news for TechRepublic, eWEEK, and TechHQ. 

Fiona cut her teeth writing human interest stories for global news outlets at the press agency SWNS. She has a Master's degree in Chemistry, an NCTJ Diploma and a cocker spaniel named Sully, who she lives with in Bristol, UK.


Originally reported by Live Science