A Google AI breakthrough is pressuring memory chip stocks from Samsung to Micron

CNBC — Alphabet's Google on Tuesday unveiled TurboQuant, a new compression method that it says could cut the memory required to run large language models by a factor of six.
The technique focuses on shrinking the key-value (KV) cache, which stores an AI model's past attention calculations so it doesn't have to recompute them for every new token.
The technique is aimed at making AI models more efficient, a major goal of the leading labs.
Investors fear that this could reduce demand for AI memory chips, which have been a critical component in training and running huge LLMs from companies like Google, OpenAI and Anthropic.
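To see why the KV cache dominates inference memory, a back-of-envelope calculation helps. The sketch below is illustrative only: the model dimensions (layers, heads, head size, context length) are hypothetical placeholder values, not details from the article, and the article does not describe how TurboQuant itself achieves its claimed six-fold reduction.

```python
def kv_cache_bytes(num_layers: int, num_kv_heads: int, head_dim: int,
                   seq_len: int, bytes_per_elem: int) -> int:
    """Estimate KV cache size: 2 tensors (keys + values) per layer,
    each of shape [num_kv_heads, seq_len, head_dim]."""
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * bytes_per_elem

# Hypothetical mid-size model: 32 layers, 8 KV heads, head_dim 128,
# a 32K-token context, stored in fp16 (2 bytes per element).
uncompressed = kv_cache_bytes(32, 8, 128, 32_768, 2)
print(f"uncompressed: {uncompressed / 2**30:.2f} GiB")   # 4.00 GiB

# A 6x compression (the factor Google claims) would shrink that to:
compressed = uncompressed / 6
print(f"compressed:   {compressed / 2**30:.2f} GiB")     # ~0.67 GiB
```

At these (assumed) dimensions, one user's 32K-token context alone consumes 4 GiB of accelerator memory, which is why cutting the KV cache directly translates into serving more concurrent users per chip.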
Read the full story | CNBC
