Alphabet's (NASDAQ: GOOG) (NASDAQ: GOOGL) Google sent waves through the artificial intelligence (AI) hardware market last month when it detailed its TurboQuant technology in a blog post. In simple terms, TurboQuant is a compression method that reduces the size of large language models (LLMs) with no loss of accuracy.
It achieves this by shrinking the amount of memory needed to train LLMs. Google specifically pointed out that TurboQuant is aimed at reducing memory costs, which have been ballooning in recent quarters due to the shortage of memory chips. Unsurprisingly, shares of memory manufacturers such as Micron Technology (NASDAQ: MU), Sandisk (NASDAQ: SNDK), and Seagate Technology (NASDAQ: STX) fell sharply after Google's research was published.
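Google has not published TurboQuant's internals in this article, but the basic arithmetic behind any model-compression technique of this kind is simple: storing each parameter in fewer bits shrinks the memory footprint proportionally. The sketch below is a generic back-of-the-envelope illustration, not Google's actual method; the function name and the 540-billion-parameter figure (cited later in this article) are used purely for illustration.

```python
# Illustrative only: how quantization-style compression cuts memory needs.
# Storing each parameter in fewer bits (e.g., 4-bit integers instead of
# 16-bit floats) reduces the memory footprint proportionally.

def model_memory_gb(num_parameters: int, bits_per_parameter: int) -> float:
    """Approximate memory needed just to hold the weights, in gigabytes."""
    return num_parameters * bits_per_parameter / 8 / 1e9  # bits -> bytes -> GB

params = 540_000_000_000  # a 540-billion-parameter model, as cited below

fp16 = model_memory_gb(params, 16)  # 16-bit weights: ~1,080 GB
int4 = model_memory_gb(params, 4)   # 4-bit weights:  ~270 GB

print(f"16-bit: {fp16:,.0f} GB, 4-bit: {int4:,.0f} GB")
```

In other words, even a conventional 4x compression turns a roughly one-terabyte weight footprint into a few hundred gigabytes, which is why a memory-reduction breakthrough rattled memory-chip stocks.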
Investors feared that the stunning revenue and earnings growth these companies have been clocking, driven by a favorable demand-supply memory environment that is pushing up prices, could dry up due to Google’s algorithm. However, a closer look at the bigger picture suggests that Google may have supercharged the prospects of the three stocks mentioned above.
Let's examine why these three AI stocks could win big from TurboQuant, potentially making them ideal buys for investors looking to construct million-dollar portfolios.
It remains to be seen how TurboQuant will be implemented in the real world and whether it can indeed reduce memory overhead in AI data centers. But even if the technology proves successful in practice and enjoys widespread adoption (assuming Google decides to make it broadly available), it is likely to increase memory demand rather than reduce it.
I say this because the size of LLMs has increased exponentially in recent years. For instance, the largest LLM in 2019 had just 0.09 billion parameters, a number that shot up to 540 billion in 2022. Parameters are the numerical values that an LLM learns during training and uses to process inputs and generate responses. So, in theory, an LLM with more parameters may be able to better understand inputs and generate more accurate responses.
Unsurprisingly, some of the latest LLMs are being trained with more than 1 trillion parameters, while several popular models already exceed half a trillion. Removing a bottleneck, such as huge memory requirements, can help AI companies train bigger, more capable models. Also, Gartner recently remarked that running inference applications on an LLM with 1 trillion parameters could cost 90% less in 2030 than it did last year, driven by lower chip costs, higher chip utilization rates, and the use of more cost-effective chips.

