Written by: Techub News Compilation
In the latest episode of Bankless, hosts Ejaaz A. and Josh took a deep dive into the red-hot AI memory investment theme. With NVIDIA having become the world's most valuable company on the strength of its GPUs, a similarly critical yet often overlooked piece of hardware, memory, is undergoing a structural transformation driven by AI. Drawing on detailed industry data and analysis of historical cycles, the conversation tackles a core question: is the AI memory boom a fleeting bubble, or a historic investment opportunity with sustained growth potential?
AI Reshapes Memory: From “Bad Business” to “Core Bottleneck”
For the past forty years, the memory industry has been seen as a textbook "bad business." The three major suppliers, Samsung, SK Hynix, and Micron, produce largely commoditized products, and the industry has been trapped in a severe boom-and-bust "hog cycle": rising prices stimulate capital expenditure and capacity expansion, oversupply follows, prices collapse, and the pattern repeats endlessly.
However, the emergence of AI has fundamentally reversed this situation. The hosts pointed out that memory now accounts for roughly 50% of the material cost of building AI systems, and as NVIDIA iterates from Blackwell to the future Rubin and Feynman architectures, each generation of GPUs will demand 2x, 3x, or even 4x more memory. AI has created an almost "insatiable" appetite for memory.
More critically, AI has spawned a brand-new memory architecture: High Bandwidth Memory (HBM). Josh explained it with a vivid metaphor: traditional memory sticks are like single-story warehouses, while HBM is a "skyscraper warehouse" in which memory dies are stacked vertically 8 to 12 layers high (with plans to reach 16 layers in the future). The stacks sit right next to the GPU and feed it model weights and data at extremely high bandwidth, which directly determines how fast and how large a model can run.
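To make the "skyscraper" picture concrete, here is a minimal back-of-the-envelope sketch of how stack height and stack count translate into per-GPU capacity and bandwidth. Every figure in it (dies per stack, gigabytes per die, bandwidth per stack) is an illustrative assumption, not the specification of any particular product.

```python
# Illustrative HBM sizing sketch; all numbers below are assumptions.

def hbm_package(stacks: int, dies_per_stack: int,
                gb_per_die: float, tb_s_per_stack: float) -> tuple[float, float]:
    """Return (total capacity in GB, aggregate bandwidth in TB/s) for one GPU package."""
    capacity_gb = stacks * dies_per_stack * gb_per_die   # taller stacks -> more capacity
    bandwidth_tb_s = stacks * tb_s_per_stack              # more stacks -> more bandwidth
    return capacity_gb, bandwidth_tb_s

# Hypothetical accelerator: 8 HBM stacks, each 12 dies high at 3 GB per die,
# with roughly 1 TB/s of bandwidth per stack.
cap, bw = hbm_package(stacks=8, dies_per_stack=12, gb_per_die=3.0, tb_s_per_stack=1.0)
print(f"~{cap:.0f} GB of HBM, ~{bw:.0f} TB/s aggregate bandwidth")
```

Under these assumptions the package lands around 288 GB and 8 TB/s; the point is simply that going from 8-layer to 12- or 16-layer stacks multiplies capacity without enlarging the footprint next to the GPU, which is the property the hosts highlight.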
The manufacturing process for HBM is extraordinarily complex and precise, which severely limits how quickly capacity can expand. Currently, only the three major giants can mass-produce HBM, and to meet the voracious demand from AI they have even halted some consumer-grade memory production to funnel resources into AI memory. The rigidity of this supply constraint is unprecedented.
A Dual-Engine Demand Story: From Training to Inference, From Single Conversations to Agent Networks
Beyond the structural changes at the hardware level, the evolution of software paradigms is adding new and even stronger engines to memory demand.
First, AI's center of gravity is shifting from training to inference. When a user asks ChatGPT or Claude a question, that is inference. An even more cutting-edge trend is the rise of AI agents: agents that converse and collaborate with one another to produce better outputs, which involves massive amounts of inference compute. Ejaaz cited analyst Ben Thompson's view that a future economy powered by AI agents may need 10 to 50 times more memory than we use today.
Second, the agent paradigm itself consumes more memory. Josh explained that each query sent to a large language model generates a "KV cache" to store its context. In an agent scenario, each running agent requires its own independent KV cache instance: run 20 agents simultaneously and you need 20 separate caches in memory. This growth in software-level demand, combined with HBM's supply constraints, has significantly extended the demand runway.
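As a rough illustration of why agents multiply memory demand, here is a minimal KV-cache sizing sketch. The model shape and context length are hypothetical assumptions chosen only for the arithmetic, not the specifications of any particular model; the cache for one agent grows with layers, attention heads, and context, and the total grows linearly with the number of concurrently running agents.

```python
# Rough KV-cache sizing sketch; model shape and context length are assumptions.

def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                context_tokens: int, bytes_per_value: int = 2) -> float:
    """Memory one inference session needs just for its KV cache, in GB."""
    per_token = 2 * layers * kv_heads * head_dim * bytes_per_value  # keys and values
    return per_token * context_tokens / 1e9

# Hypothetical model: 80 layers, 8 KV heads of dimension 128, fp16 values,
# and 32k tokens of context held by each agent session.
per_agent = kv_cache_gb(layers=80, kv_heads=8, head_dim=128, context_tokens=32_000)
print(f"per agent: ~{per_agent:.1f} GB")        # each agent keeps its own cache
print(f"20 agents: ~{20 * per_agent:.0f} GB")   # total scales with agent count
```

Under these assumptions, each agent session ties up roughly 10 GB of memory for its KV cache alone, so 20 concurrent agents consume on the order of 200 GB before any model weights are even counted.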
HBM deliveries are already booked out through the end of 2027. Ejaaz revealed that some large AI customers have even proactively offered to purchase or fund production equipment for companies like Micron in exchange for guaranteed capacity in 2027-2028. This kind of prepaid capacity lock-in was extremely rare in past memory cycles and indicates that the demand is genuine and urgent.
Investment Targets: Three Giants and a “One-Stop” ETF Tool
Faced with clear demand and limited supply, how should investors participate? The dialogue focused on three core players and emerging financial tools.
- SK Hynix: Seen as the current "king." It is NVIDIA's primary, and at times exclusive, HBM supplier, and its capacity has largely been locked up by NVIDIA, giving it a position in NVIDIA's supply chain akin to TSMC's.
- Samsung Electronics: An important second-tier player. However, its diversified business means it is not a pure-play memory stock.
- Micron Technology: The main choice for U.S. investors. As the U.S.-based memory giant, Micron is also one of the few suppliers of advanced HBM, and its stock has risen more than eightfold over the past year.
For Western investors, there are real barriers to buying Korean-listed stocks directly. Hence the emergence of an ETF trading under the ticker DRAM, which swelled to more than $6 billion in assets within six weeks and which Goldman Sachs has called one of the fastest-growing ETFs in history. Its core holdings are the three companies above (roughly 70% of the fund), giving investors a convenient one-stop basket exposure to the AI memory leaders.
Josh noted that there is even a 2x leveraged long memory ETF on the market, trading under the ticker RAM, offering an option for investors with higher risk tolerance. He said he personally prefers the unleveraged DRAM ETF, however.
Historical Cycles and Current Divergence: “Is This Time Really Different?”
Despite the compelling logic, investors' biggest concern remains: memory stocks have already risen sharply, so is this a bubble? Will the historical cycle repeat itself?
Historically, the memory industry has moved through a rise-and-fall cycle roughly every 18 months, with rallies often followed by corrections of 40%-75%. The current cycle began in the fourth quarter of 2024, putting us inside that critical 18-month window, and on the charts prices are sitting at resistance levels.
Ejaaz conceded that the market might pull back in the short term as the positive news gets exhausted. In the long run, however, he remains optimistic, with the core support coming from real, prepaid orders. Those orders come from tech giants with strong cash flows (such as the U.S. "Magnificent Seven") rather than from shaky, under-capitalized buyers, which greatly strengthens the credibility of the demand.
From a valuation perspective, these memory giants trade at forward P/E ratios between 5 and 12, far below Tesla (40-50) and even below Microsoft and Amazon (15-20). Measured against their earnings guidance, the stocks are not excessively overvalued; by contrast, during past bubble periods P/E ratios often reached 30-50.
Of course, risks remain. Skeptics argue that if more efficient AI models or GPU architectures requiring less memory emerge, or if new capacity comes online and tips the market into oversupply, the trend could reverse. Moreover, the surge in memory prices has begun to spill over into consumer electronics, making products like personal computers and gaming consoles more expensive or delaying their launches, which could ultimately dampen demand on the consumer side.
The hosts' conclusion, however, leans optimistic. They believe that even if consumer demand comes under pressure, enterprise demand for AI models and agents will "only keep increasing." In the rapid-fire Q&A at the end of the show, both agreed that memory stock prices will be higher in 12 months. The revolution in AI's demand for memory, in both depth and breadth, may only be getting started, while the supply side will need years to catch up. This AI-driven imbalance between memory supply and demand may indeed be writing a story unlike anything seen in the past forty years.
Disclaimer: This article represents only the personal views of its author and does not reflect the position or views of this platform. It is shared for informational purposes only and does not constitute investment advice of any kind. Any dispute between users and the author is unrelated to this platform. If any article or image on this page involves infringement, please email the relevant proof of rights and identity to support@aicoin.com, and the platform's staff will investigate.