HBM remains in structural shortage as of Q1 2026. All three memory companies are spending tens of billions to expand, but demand is growing faster than supply can ramp. SK Group Chairman Chey Tae-won stated at GTC 2026 (March 2026) that the wafer shortage could last until 2030. The earliest realistic window for supply-demand balance is late 2027 to mid-2028, and even that depends on demand not accelerating further (which it is). HBM pricing remains firm through 2026, with risk of modest ASP declines in H2 2027 as capacity ramps -- but this is being offset by the mix shift to higher-value HBM4.
Bottom line for investors: The HBM supercycle has at least 18-24 more months of pricing power remaining (through end of 2027). The conventional DRAM supply squeeze caused by HBM wafer diversion provides a second leg of support. SK Hynix is the clear winner today; Samsung is the catch-up story with the most upside optionality; Micron is the steady gainer taking share.
| Company | HBM Market Share (Q1 2026) | Notes |
|---|---|---|
| SK Hynix | ~53-55% | Down from ~62% in mid-2025 as Samsung/Micron ramp |
| Samsung | ~25-28% | Tripled HBM revenue QoQ in Q4 2025; surging after NVIDIA qualification |
| Micron | ~18-22% | Targeting 24%+ by end of 2026; qualified with 4 major GPU/ASIC clients |
SK Hynix's share has come down from 62% in mid-2025 because Samsung finally passed NVIDIA's HBM3E 12-hi qualification (September 2025 after an 18-month delay) and Micron has expanded to four major customers. But SK Hynix remains dominant in absolute revenue terms.
| Company | Estimated HBM Wafer Capacity | Primary Products |
|---|---|---|
| SK Hynix | ~130-150K wafers/month (DRAM wafer starts allocated to HBM) | HBM3E 8-hi (24GB), HBM3E 12-hi (36GB), early HBM4 sampling |
| Samsung | ~90-110K wafers/month | HBM3E 8-hi (24GB), HBM3E 12-hi (36GB), HBM4 in qualification |
| Micron | ~50-60K wafers/month | HBM3E 8-hi (24GB), HBM3E 12-hi (36GB) |
| Industry Total | ~270-320K wafers/month | -- |
Important note on wafer conversion: Each HBM stack requires multiple DRAM die. An 8-hi HBM3E stack uses 8 DRAM die + 1 logic/buffer die; a 12-hi stack uses 12 DRAM die + 1 buffer die. Because each die must be thinned (to ~30-40 microns), tested, and then stacked with TSVs (through-silicon vias) -- steps that cost both area and yield -- each HBM bit consumes roughly 2.5-3x the wafer capacity of a conventional DRAM bit. So 300K wafers/month allocated to HBM removes the equivalent of ~750-900K wafers/month of conventional DRAM bit supply.
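The conversion above is simple multiplication; a minimal sketch (the 2.5-3x bit-intensity factor is the estimate given above, not a measured constant):

```python
# Back-of-envelope sketch of the wafer-conversion math above.
# The 2.5-3x bit-intensity factor is this article's estimate, not a measured value.

def conventional_wafer_equivalent(hbm_wafers_per_month: float,
                                  bit_intensity: float) -> float:
    """Conventional-DRAM wafer starts whose bit output is displaced
    when this many wafers are allocated to HBM instead."""
    return hbm_wafers_per_month * bit_intensity

low = conventional_wafer_equivalent(300_000, 2.5)   # 750,000
high = conventional_wafer_equivalent(300_000, 3.0)  # 900,000
print(f"~{low:,.0f} to ~{high:,.0f} conventional-wafer equivalents/month")
```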
| Product | Approximate ASP per Stack | ASP per GB | Notes |
|---|---|---|---|
| HBM3E 8-hi (24GB) | $300-350 | ~$13-15/GB | Mainstream for B200 |
| HBM3E 12-hi (36GB) | $500-600 | ~$14-17/GB | Premium for B300/Blackwell Ultra |
| HBM4 (sampling) | Not yet in volume | Expected >$700/stack | 2026 qualification phase |
For comparison, conventional DDR5 DRAM trades at roughly $2-3/GB. HBM commands a 5-7x premium per GB due to the stacking, TSV, and advanced packaging complexity.
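The per-GB figures follow directly from stack price divided by stack capacity; a quick sketch of that arithmetic (prices are the estimates from the table above, and `asp_per_gb` is an illustrative helper, not a real API):

```python
# Sanity check on the per-GB figures in the table above (article estimates).
def asp_per_gb(stack_price: float, stack_gb: int) -> float:
    """Dollars per GB implied by a stack price and its capacity."""
    return stack_price / stack_gb

# HBM3E 8-hi: $300-350 for a 24GB stack
print(asp_per_gb(300, 24), asp_per_gb(350, 24))   # 12.5 .. ~14.6 $/GB
# HBM3E 12-hi: $500-600 for a 36GB stack
print(asp_per_gb(500, 36), asp_per_gb(600, 36))   # ~13.9 .. ~16.7 $/GB

# Premium vs conventional DDR5 at roughly $2-3/GB (midpoints)
ddr5_mid = 2.5
hbm_mid = asp_per_gb(325, 24)                     # ~13.5 $/GB
print(f"premium: ~{hbm_mid / ddr5_mid:.1f}x")     # ~5.4x at midpoints
```

Range endpoints (low-priced HBM vs high-priced DDR5, and vice versa) give the 5-7x spread quoted above.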
Current Base:
Expansion Plans:
| Project | Investment | Timeline | Capacity Impact |
|---|---|---|---|
| Icheon M17 Fab | Part of ongoing capex | Already in production ramp (2025-2026) | Incremental HBM3E/HBM4 capacity |
| Yongin Semiconductor Cluster | KRW 21.6 trillion (~$16B) -- board approved Feb 2026 | Construction start: 2026; Phase 1 production: Late 2027 / Early 2028; Full ramp: 2028-2029 | Massive -- designed as a dedicated HBM/advanced DRAM mega-fab |
| Capex increase | 2025 capex raised ~30% YoY on 2026 demand visibility | Ongoing through 2026 | Converting existing lines from conventional DRAM to HBM |
| Packaging expansion | Expanding advanced packaging (MR-MUF) capacity | Throughout 2026-2027 | Bottleneck relief -- packaging has been limiting HBM output |
Technology Roadmap:
Key Insight: The $16B Yongin cluster is the single largest announced HBM capacity investment. But it does NOT produce meaningful volume until late 2027 at the earliest. The near-term supply increase for SK Hynix is incremental -- optimizing existing fabs and converting conventional DRAM lines. SK Hynix has said its 2025 capex was raised 30% YoY specifically because they have demand visibility for 2026 HBM, not speculative capacity. They are also trying to stabilize memory prices, per Chairman Chey's comments -- they do NOT want a price crash.
Capex Estimate: SK Hynix total 2025 capex was approximately $12-13B (KRW 16-17T), with 2026 expected to be higher. The Yongin cluster alone is $16B spread over multiple years.
Current Base:
The Samsung Catch-Up Story:
Samsung was the HBM laggard through most of 2024-2025. Their HBM3E chips failed NVIDIA's thermal and reliability tests repeatedly, leading to an 18-month qualification delay. They finally passed in September 2025 and began shipping to NVIDIA in Q3 2025. Since then, they have been in aggressive ramp mode and sold out their 2026 supply.
Expansion Plans:
| Project | Investment | Timeline | Capacity Impact |
|---|---|---|---|
| Pyeongtaek P4 Fab | Multi-billion (part of Samsung's larger semiconductor capex) | Phase 1 entering production in 2025-2026; Phase 2 (hybrid HBM/advanced DRAM) in 2026-2027 | Significant -- Samsung's flagship new fab. P4 Phase 1 initially planned for foundry but pivoted to include memory/HBM lines |
| 50% HBM Capacity Increase | Part of 2026 capex plan | Throughout 2026 | Samsung plans to increase total HBM production capacity by ~50% in 2026 vs. 2025 |
| Pyeongtaek P5 Fab | Planning stage | Construction likely starts 2026-2027; production 2028+ | Next-generation capacity for HBM4/HBM4E |
| HBM4 Leapfrog | Heavy R&D investment | HBM4 qualification with NVIDIA in 2026; mass production targeted for H2 2026 or Q1 2027 | Samsung is betting on being first-to-market with HBM4 to regain share |
Technology Roadmap:
Key Insight: Samsung is the wildcard. They lost 1.5 years of HBM market share due to yield/qualification issues. Their 50% capacity increase in 2026 is the single largest year-over-year capacity jump among the three companies. If their HBM4 qualification succeeds before SK Hynix's, they could meaningfully close the market share gap. However, Samsung has a track record of over-promising on HBM timelines, so this deserves skepticism. The P4 fab pivot (from foundry-focused to hybrid memory) shows how aggressively they are chasing HBM.
Capex Estimate: Samsung's semiconductor capex for 2025 was approximately $35-40B total (memory + foundry), with 2026 expected at a similar or higher level. The memory/HBM portion is estimated at $15-20B.
Current Base:
Expansion Plans:
| Project | Investment | Timeline | Capacity Impact |
|---|---|---|---|
| New Hiroshima Fab | $9.6B (~JPY 1.5 trillion) | Announced December 2025; construction through 2026-2027; production start targeted for 2027 | Major expansion -- dedicated to advanced DRAM/HBM production. Will house 1-gamma and beyond nodes |
| Singapore Packaging Expansion | Part of ongoing capex | Throughout 2026 | Expanding advanced packaging capacity for HBM stacking (this was a bottleneck for Micron) |
| Boise / US Expansion | Benefiting from CHIPS Act subsidies (~$6.1B) | Multi-year buildout | Primarily for conventional DRAM/NAND but frees up Asian capacity for HBM |
| Market Share Target | -- | Ongoing ramp | From ~18-20% in early 2026 to 24%+ by year-end |
Technology Roadmap:
Key Insight: Micron is the smallest of the three in HBM but growing the fastest in percentage terms. The $9.6B Hiroshima fab is a transformative investment -- it nearly doubles Micron's advanced DRAM capacity in Japan. However, like Yongin for SK Hynix, this does not produce meaningful volume until 2027. Micron's near-term HBM growth comes from converting existing lines and expanding Singapore packaging. Micron's advantage is its strong relationship with multiple customers (4 qualified clients, diversifying beyond just NVIDIA) and its 1-beta node leadership which gives it a die-size advantage.
Capex Estimate: Micron FY2026 capex guided at approximately $14-16B total.
| Company | 2024 (Actual) | 2025 (Actual/Est) | 2026 (Forecast) | 2027 (Forecast) | 2028 (Forecast) |
|---|---|---|---|---|---|
| SK Hynix | ~$12-13B | ~$22-25B | ~$30-35B | ~$38-42B | ~$42-48B |
| Samsung | ~$3-4B | ~$7-9B | ~$14-18B | ~$22-28B | ~$28-35B |
| Micron | ~$2-3B | ~$5-7B | ~$10-13B | ~$16-20B | ~$20-25B |
| Total | ~$17-20B | ~$34-41B | ~$54-66B | ~$76-90B | ~$90-108B |
| Metric | 2024 | 2025 | 2026 | 2027 | 2028 |
|---|---|---|---|---|---|
| Total HBM bits produced (relative, 2024=100) | 100 | ~200-220 | ~350-400 | ~550-650 | ~800-950 |
| HBM as % of total DRAM wafer starts | ~10-12% | ~17-19% | ~22-25% | ~28-32% | ~32-38% |
| Equivalent stacks produced (est.) | ~60-70M | ~120-140M | ~200-240M | ~300-360M | ~400-480M |
| Customer | HBM per Unit | Units (2026 Est.) | Total HBM Demand (2026) | 2027 Projection |
|---|---|---|---|---|
| NVIDIA B200 | 8 stacks (192GB HBM3E 8-hi) | ~2-3M GPUs | ~16-24M stacks | Declining (replaced by B300) |
| NVIDIA B300 (Blackwell Ultra) | 8 stacks (288GB HBM3E 12-hi) | ~1-2M GPUs (ramping H2 2026) | ~8-16M stacks | ~3-4M GPUs = 24-32M stacks |
| NVIDIA Rubin (R100) | 8 stacks (HBM4) | Sampling 2026, volume 2027 | Minimal | ~1-2M GPUs = 8-16M stacks |
| AMD MI350X | 8 stacks (288GB HBM3E 12-hi) | ~200-400K | ~1.6-3.2M stacks | Continuing |
| AMD MI400 | Up to 12 stacks (432GB HBM4) | Late 2026 sampling | Minimal | Ramping |
| Google TPU v6/v7 | 4-8 stacks per chip | Tens of thousands | ~2-4M stacks | Growing rapidly |
| Amazon Trainium 3 | 4-8 stacks per chip | Ramping 2026 | ~1-3M stacks | Growing rapidly |
| Microsoft Maia | 4-8 stacks per chip | Ramping | ~1-2M stacks | Growing |
| Meta MTIA v3 | Multiple stacks | Ramping | ~1-2M stacks | Growing |
| Other (Intel, Broadcom ASICs, etc.) | Varies | Various | ~2-5M stacks | Growing |
| TOTAL DEMAND | -- | -- | ~35-55M stacks (est.) | ~50-75M stacks |
Critical note on ASIC demand growth: TrendForce reported in July 2025 that HBM demand from custom ASICs (Google TPU, Amazon Trainium, etc.) is expected to surge 80% in 2026. This is the demand vector most analysts underestimate. Every hyperscaler is designing their own AI chips, and every one of them needs HBM. This is additive to the GPU demand, not substitutional.
| Year | Supply (Stacks, M) | Demand (Stacks, M) | Balance | Price Implication |
|---|---|---|---|---|
| 2024 | ~60-70M | ~65-75M | Moderate shortage | Prices stable/rising |
| 2025 | ~120-140M | ~140-170M | Shortage | Prices up 5-10% |
| 2026 | ~200-240M | ~230-280M | Still short | Prices stable to slight softening in H2 |
| 2027 | ~300-360M | ~320-400M | Approaching balance | Modest ASP decline risk (~5-15%), offset by mix shift to HBM4 |
| 2028 | ~400-480M | ~420-520M | Balanced to slight oversupply possible | Price pressure begins if demand growth slows |
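The shortfall implied by the table's midpoints can be computed directly; a sketch using the ranges above (all figures are the article's estimates, in millions of stacks):

```python
# Midpoint supply/demand gap implied by the balance table above.
# (supply_range, demand_range) per year, millions of stacks -- article estimates.
balance = {
    2024: ((60, 70), (65, 75)),
    2025: ((120, 140), (140, 170)),
    2026: ((200, 240), (230, 280)),
    2027: ((300, 360), (320, 400)),
    2028: ((400, 480), (420, 520)),
}

def mid(rng):
    lo, hi = rng
    return (lo + hi) / 2

for year, (supply, demand) in balance.items():
    gap = mid(demand) - mid(supply)
    pct = gap / mid(demand) * 100
    print(f"{year}: midpoint shortfall ~{gap:.0f}M stacks (~{pct:.0f}% of demand)")
```

On midpoints, the shortfall peaks as a share of demand around 2025-2026 and narrows thereafter, consistent with the "approaching balance" call for 2027.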
When does supply catch demand? Based on current trajectories:
The key insight: Even in the base case, HBM "oversupply" looks very different from conventional DRAM oversupply. HBM is sold on long-term contracts (12-18 month agreements), has a concentrated customer base (NVIDIA alone is probably 50%+ of demand), and runs a technology upgrade cycle (HBM3E to HBM4) that resets pricing. This is NOT the commodity DRAM market.
This is one of the most under-appreciated dynamics in the memory industry.
As of 2026, HBM consumes approximately 22-25% of total industry DRAM wafer starts. Every wafer start diverted to HBM is conventional DRAM bit supply that never gets produced, and the allocation grows every year:
| Metric | 2024 | 2025 | 2026 | 2027 |
|---|---|---|---|---|
| DRAM wafers allocated to HBM | ~10-12% | ~17-19% | ~22-25% | ~28-32% |
| Effective conventional DRAM bit supply reduction | ~7-8% | ~11-13% | ~15-17% | ~19-22% |
| Conventional DRAM bit demand growth (PC, server, mobile) | ~10-15% | ~12-15% | ~10-15% | ~10-15% |
| Net conventional DRAM supply/demand | Tight | Very tight | Shortage | Potential acute shortage |
This is already playing out in 2026. Reports indicate DDR5 prices are rising due to wafer diversion. The memory chip shortage of 2026 is being driven not by a demand surge for PCs/phones but by the supply-side effect of HBM eating wafer capacity.
This is why the memory cycle is different this time:
The conventional DRAM squeeze is a natural hedge against HBM oversupply risk. The memory companies (especially SK Hynix) are explicitly aware of this and are managing the transition carefully. SK Group Chairman Chey's statement about "trying to stabilize memory prices" is a direct reference to this dynamic -- they are NOT going to flood the market with HBM capacity at the expense of cratering conventional DRAM.
HBM stacks are useless without advanced packaging. Every AI GPU needs its HBM stacks integrated onto a substrate alongside the GPU die using TSMC's CoWoS (Chip-on-Wafer-on-Substrate) or similar advanced packaging technology.
Current Status (Q1 2026):
Expansion Timeline:
| Period | CoWoS Monthly Capacity (Est.) | Notes |
|---|---|---|
| Q4 2024 | ~20,000 wafers/month | Severe constraint |
| Q4 2025 | ~30,000-35,000 | Rapid expansion but still tight |
| Q4 2026 | ~45,000-55,000 | Outsourcing helping; new lines online |
| Q4 2027 | ~60,000-75,000 | AP6, AP7, and OSAT expansion |
Key insight: CoWoS is the pacing bottleneck, NOT HBM DRAM production. Even if Samsung, SK Hynix, and Micron can produce more HBM stacks, those stacks sit in inventory if TSMC cannot package them onto GPU substrates. This is why TSMC is outsourcing packaging for the first time -- a historic move that shows just how dire the constraint is.
The memory companies themselves have their own packaging bottleneck:
| Stack Height | Yield Challenge | Estimated Stack Yield (Good Stack / Attempted) |
|---|---|---|
| HBM3E 8-hi | Mature, well-optimized | ~80-90% (SK Hynix), ~70-80% (Samsung), ~75-85% (Micron) |
| HBM3E 12-hi | Significant challenge | ~60-75% (SK Hynix), ~55-70% (Samsung), ~60-70% (Micron) |
| HBM4 8-hi | New architecture + hybrid bonding | ~50-65% initially, improving |
| HBM4 12-hi | Extremely challenging | Expected ~40-55% initially |
| HBM4 16-hi | Frontier | Likely <50% initially |
The yield math matters enormously for supply: If HBM3E 12-hi yields are 65%, that means 35% of all stacking attempts produce defective stacks. This is 35% of capacity that is wasted. A 10-percentage-point improvement in yield (65% to 75%) is equivalent to ~15% more supply from the same installed capacity. Yield improvement is the single fastest way to increase HBM supply without building new fabs.
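The yield arithmetic above can be verified in a couple of lines (a sketch; the 65% and 75% figures are the illustrative values from the paragraph):

```python
# The yield math from the paragraph above: effective output scales linearly
# with stack yield, so a 65% -> 75% improvement lifts supply by ~15%.

def effective_output(stacking_attempts: float, stack_yield: float) -> float:
    """Good stacks produced from a given number of stacking attempts."""
    return stacking_attempts * stack_yield

base = effective_output(100, 0.65)      # 65 good stacks per 100 attempts
improved = effective_output(100, 0.75)  # 75 good stacks per 100 attempts
print(f"supply gain from yield alone: {(improved / base - 1) * 100:.1f}%")  # ~15.4%
```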
Ranking of bottlenecks (most severe to least severe, as of Q1 2026):
Net assessment: The bottleneck sequence is: CoWoS (2024-2026) --> HBM packaging/yield (2026-2027) --> Wafer capacity (2027-2028). At no point in the next 2 years does every bottleneck ease simultaneously. This is why structural shortage persists.
| Period | Pricing Dynamic | Confidence |
|---|---|---|
| H1 2026 | Prices firm. All supply sold out. No meaningful pressure. | Very High |
| H2 2026 | Prices stable to slightly lower on HBM3E as capacity ramps. HBM4 pricing premium offsets. | High |
| H1 2027 | First real window for modest HBM3E price declines (5-10%). HBM4 ramp supports blended ASPs. | Medium-High |
| H2 2027 | Supply approaching demand. HBM3E prices decline 10-15%. HBM4 still premium. Blended ASP flat to down slightly. | Medium |
| 2028 | Risk of meaningful oversupply IF demand growth slows. If AI capex continues growing, supply stays balanced. | Medium-Low (high uncertainty) |
When to worry: The earliest investors should begin monitoring for HBM pricing pressure is Q3-Q4 2027. The trigger would be: (a) memory companies reporting HBM inventory build, (b) contract prices declining quarter-over-quarter, (c) AI capex growth decelerating (watch hyperscaler capex guidance).
The single most important leading indicator: Listen to SK Hynix earnings calls for any change in the language around "sold out." As long as they say HBM capacity is "sold out" for the next 12+ months, the cycle is intact. The moment they say something like "we have good visibility" instead of "sold out," pricing pressure is approaching.
Most analysts are focused on "when does HBM supply catch demand" as a linear projection. Here is why that framing may be wrong:
| Timeframe | Thesis | Position |
|---|---|---|
| Now - End 2026 | HBM supercycle intact, shortage continues, earnings will beat | Full conviction position in SK Hynix / Micron |
| 2027 | Transition year. HBM3E may see modest price pressure. HBM4 ramp creates new opportunity. Conventional DRAM supports. | Maintain position, watch for triggers |
| 2028+ | High uncertainty. Depends on AI demand trajectory. If AGI development accelerates (which is the operating assumption), memory demand is insatiable. | Monitor; trim if triggers fire |
Given the AGI premise: memory demand only goes up. Recursive self-improvement means ever-larger models, ever-more inference, ever-more GPUs, ever-more HBM. The memory companies are building capacity for a world where AI compute demand grows 10x in 5 years. If anything, they are UNDER-building. The risk is not a demand shortfall -- it is that they cannot build capacity fast enough.
| Data Point | Value | Source |
|---|---|---|
| SK Hynix Q1 2026 operating margin | ~72% | SK Hynix Q1 2026 earnings |
| SK Hynix HBM market share (mid-2025) | ~62% | Industry reports |
| SK Hynix Yongin investment | KRW 21.6T (~$16B) | SK Hynix board approval, Feb 2026 |
| Samsung 2026 HBM capacity increase | ~50% YoY | TrendForce, Dec 2025 |
| Samsung HBM3E 12-hi NVIDIA qualification | September 2025 (after 18-month delay) | TrendForce, Sep 2025 |
| Samsung Q1 2026 profit surge | ~700-800% YoY (semiconductor division) | Samsung Q1 2026 earnings |
| Micron new Hiroshima fab | $9.6B investment | TrendForce, Dec 2025 |
| Micron HBM market share target | 24%+ by end of 2026 | TrendForce, Jun 2025 |
| Micron qualified clients | 4 major GPU/ASIC clients | Micron earnings |
| HBM as % of DRAM wafers (2026) | ~22-25% | TrendForce, industry estimates |
| ASIC HBM demand growth (2026) | +80% YoY | TrendForce, Jul 2025 |
| SK Group Chairman wafer shortage forecast | Until 2030 | GTC 2026, Mar 2026 |
| HBM3E 8-hi price per stack | ~$300-350 | Industry estimates |
| HBM3E 12-hi price per stack | ~$500-600 | Industry estimates |
| NVIDIA B300 HBM requirement | 8x HBM3E 12-hi (288GB) | NVIDIA specs |
| AMD MI400 HBM requirement | Up to 432GB HBM4 | AMD announcements |
| TSMC CoWoS capacity (2026) | ~45,000-55,000 wafers/month target | TSMC disclosures |
This analysis reflects data available as of April 23, 2026. The HBM market is evolving rapidly. Key earnings to watch: SK Hynix (quarterly), Samsung (quarterly), Micron (quarterly on different fiscal calendar), NVIDIA (for demand signals), and hyperscaler capex disclosures (Google, Microsoft, Amazon, Meta).