HBM Capacity Timeline: When Does Supply Catch Demand?

Last Updated: April 23, 2026  •  Author: Semiconductor Supply Chain Analysis  •  Type: Sector Report

Table of Contents
  Executive Summary
  1. Current State (Q1 2026)
    1.1 Market Share
    1.2 Current HBM Production Rates
    1.3 Current HBM Pricing
    1.4 Latest Earnings Snapshots
  2. Company-by-Company Expansion Plans
    2.1 SK Hynix
    2.2 Samsung
    2.3 Micron
  3. Total Industry Supply Forecast
    3.1 HBM Supply by Company
    3.2 HBM Supply by Capacity
    3.3 HBM Demand by Customer Segment
    3.4 Supply vs. Demand Balance
  4. The Conventional DRAM Supply Squeeze
    4.1 The Wafer Diversion Math
    4.2 Impact on Conventional DRAM
    4.3 The Self-Hedging Dynamic
  5. Bottlenecks
    5.1 TSMC CoWoS Packaging
    5.2 HBM Stacking / Advanced Packaging at Memory Companies
    5.3 Yield Challenges: 12-Hi and Beyond
    5.4 Which Bottleneck Breaks First?
  6. Investment Implications
    6.1 Timeline for HBM Pricing Pressure
    6.2 Which Company Benefits Most?
    6.3 Key Triggers to Watch
    6.4 The Non-Consensus View
    6.5 Summary: Position Sizing Guidance
  Appendix: Key Data Points by Source

Executive Summary

HBM remains in structural shortage as of Q1 2026. All three memory companies are spending tens of billions to expand, but demand is growing faster than supply can ramp. SK Group Chairman Chey Tae-won stated at GTC 2026 (March 2026) that the wafer shortage could last until 2030. The earliest realistic window for supply-demand balance is late 2027 to mid-2028, and even that depends on demand not accelerating further (which it is). HBM pricing remains firm through 2026, with risk of modest ASP declines in H2 2027 as capacity ramps -- but this is being offset by the mix shift to higher-value HBM4.

Bottom line for investors: The HBM supercycle has at least 18-24 more months of pricing power remaining (through end of 2027). The conventional DRAM supply squeeze caused by HBM wafer diversion provides a second leg of support. SK Hynix is the clear winner today; Samsung is the catch-up story with the most upside optionality; Micron is the steady gainer taking share.


1. Current State (Q1 2026)

1.1 Market Share (by HBM Revenue)

| Company | HBM Market Share (Q1 2026) | Notes |
|---|---|---|
| SK Hynix | ~53-55% | Down from ~62% in mid-2025 as Samsung/Micron ramp |
| Samsung | ~25-28% | Tripled HBM revenue QoQ in Q4 2025; surging after NVIDIA qualification |
| Micron | ~18-22% | Targeting 24%+ by end of 2026; qualified with 4 major GPU/ASIC clients |

SK Hynix's share has come down from 62% in mid-2025 because Samsung finally passed NVIDIA's HBM3E 12-hi qualification (September 2025 after an 18-month delay) and Micron has expanded to four major customers. But SK Hynix remains dominant in absolute revenue terms.

1.2 Current HBM Production Rates (Estimated, Q1 2026)

| Company | Estimated HBM Wafer Capacity | Primary Products |
|---|---|---|
| SK Hynix | ~130-150K wafers/month (DRAM wafer starts allocated to HBM) | HBM3E 8-hi (24GB), HBM3E 12-hi (36GB), early HBM4 sampling |
| Samsung | ~90-110K wafers/month | HBM3E 8-hi (24GB), HBM3E 12-hi (36GB), HBM4 in qualification |
| Micron | ~50-60K wafers/month | HBM3E 8-hi (24GB), HBM3E 12-hi (36GB) |
| Industry Total | ~270-320K wafers/month | |

Important note on wafer conversion: Each HBM stack requires multiple DRAM die. An 8-hi HBM3E stack uses 8 DRAM die + 1 logic/buffer die. A 12-hi stack uses 12 DRAM die + 1 buffer die. Because the die must be thinned (to ~30-40 microns), tested, and then stacked with TSVs (through-silicon vias), the effective DRAM bit capacity consumed per wafer is roughly 2.5-3x what a conventional DRAM wafer would produce. So 300K wafers/month allocated to HBM removes the equivalent of ~750-900K wafers/month of conventional DRAM bit supply.
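The wafer-diversion arithmetic above can be sketched directly. The 2.5-3x bit-consumption multiplier and the ~300K wafers/month allocation are the report's estimates; the helper below simply applies them.

```python
# Wafers allocated to HBM remove roughly 2.5-3x their count in
# conventional-DRAM-equivalent bit supply (report's estimate).

def conventional_equivalent(hbm_wafers, mult_low=2.5, mult_high=3.0):
    """Range of conventional DRAM wafer supply effectively displaced."""
    return hbm_wafers * mult_low, hbm_wafers * mult_high

low, high = conventional_equivalent(300_000)  # ~industry HBM allocation, Q1 2026
print(f"~{low/1e3:.0f}K-{high/1e3:.0f}K conventional wafers/month displaced")  # ~750K-900K
```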

1.3 Current HBM Pricing (Q1 2026)

| Product | Approximate ASP per Stack | ASP per GB | Notes |
|---|---|---|---|
| HBM3E 8-hi (24GB) | $300-350 | ~$13-15/GB | Mainstream for B200 |
| HBM3E 12-hi (36GB) | $500-600 | ~$14-17/GB | Premium for B300/Blackwell Ultra |
| HBM4 (sampling) | Not yet in volume | Expected >$700/stack | 2026 qualification phase |

For comparison, conventional DDR5 DRAM trades at roughly $2-3/GB. HBM commands a 5-7x premium per GB due to the stacking, TSV, and advanced packaging complexity.
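The per-GB premium works out as follows. Using range midpoints is my simplification of the report's Q1 2026 ASP estimates:

```python
# Per-GB arithmetic behind the HBM-vs-DDR5 premium (range midpoints).
hbm3e_8hi_per_gb = 325 / 24    # $325 midpoint ASP / 24GB stack -> ~$13.5/GB
hbm3e_12hi_per_gb = 550 / 36   # $550 midpoint ASP / 36GB stack -> ~$15.3/GB
ddr5_per_gb = 2.5              # midpoint of the $2-3/GB conventional DDR5 range

print(f"8-hi premium:  ~{hbm3e_8hi_per_gb / ddr5_per_gb:.1f}x")   # ~5.4x
print(f"12-hi premium: ~{hbm3e_12hi_per_gb / ddr5_per_gb:.1f}x")  # ~6.1x
```

Taking the extremes of the ranges instead of midpoints widens this to roughly 4-8x; the report's 5-7x figure sits at the midpoints.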

1.4 Latest Earnings Snapshots

SK Hynix Q1 2026 (reported April 24, 2026):

Samsung Q1 2026 (reported April 2026):

Micron FQ2 2026 (reported March 18, 2026):


2. Company-by-Company Expansion Plans

2.1 SK Hynix

Current Base:

Expansion Plans:

| Project | Investment | Timeline | Capacity Impact |
|---|---|---|---|
| Icheon M17 Fab | Part of ongoing capex | Already in production ramp (2025-2026) | Incremental HBM3E/HBM4 capacity |
| Yongin Semiconductor Cluster | KRW 21.6 trillion (~$16B) -- board approved Feb 2026 | Construction start: 2026; Phase 1 production: late 2027 / early 2028; full ramp: 2028-2029 | Massive -- designed as a dedicated HBM/advanced DRAM mega-fab |
| Capex increase | 2025 capex raised ~30% YoY on 2026 demand visibility | Ongoing through 2026 | Converting existing lines from conventional DRAM to HBM |
| Packaging expansion | Expanding advanced packaging (MR-MUF) capacity | Throughout 2026-2027 | Bottleneck relief -- packaging has been limiting HBM output |

Technology Roadmap:

Key Insight: The $16B Yongin cluster is the single largest announced HBM capacity investment. But it does NOT produce meaningful volume until late 2027 at the earliest. The near-term supply increase for SK Hynix is incremental -- optimizing existing fabs and converting conventional DRAM lines. SK Hynix has said its 2025 capex was raised 30% YoY specifically because they have demand visibility for 2026 HBM, not speculative capacity. They are also trying to stabilize memory prices, per Chairman Chey's comments -- they do NOT want a price crash.

Capex Estimate: SK Hynix total 2025 capex was approximately $12-13B (KRW 16-17T), with 2026 expected to be higher. The Yongin cluster alone is $16B spread over multiple years.


2.2 Samsung

Current Base:

The Samsung Catch-Up Story:

Samsung was the HBM laggard through most of 2024-2025. Their HBM3E chips failed NVIDIA's thermal and reliability tests repeatedly, leading to an 18-month qualification delay. They finally passed in September 2025 and began shipping to NVIDIA in Q3 2025. Since then, they have been in aggressive ramp mode and sold out their 2026 supply.

Expansion Plans:

| Project | Investment | Timeline | Capacity Impact |
|---|---|---|---|
| Pyeongtaek P4 Fab | Multi-billion (part of Samsung's larger semiconductor capex) | Phase 1 entering production in 2025-2026; Phase 2 (hybrid HBM/advanced DRAM) in 2026-2027 | Significant -- Samsung's flagship new fab. P4 Phase 1 was initially planned for foundry but pivoted to include memory/HBM lines |
| 50% HBM capacity increase | Part of 2026 capex plan | Throughout 2026 | Samsung plans to increase total HBM production capacity by ~50% in 2026 vs. 2025 |
| Pyeongtaek P5 Fab | Planning stage | Construction likely starts 2026-2027; production 2028+ | Next-generation capacity for HBM4/HBM4E |
| HBM4 leapfrog | Heavy R&D investment | HBM4 qualification with NVIDIA in 2026; mass production targeted for H2 2026 or Q1 2027 | Samsung is betting on being first to market with HBM4 to regain share |

Technology Roadmap:

Key Insight: Samsung is the wildcard. They lost 1.5 years of HBM market share due to yield/qualification issues. Their 50% capacity increase in 2026 is the single largest year-over-year capacity jump among the three companies. If their HBM4 qualification succeeds before SK Hynix's, they could meaningfully close the market share gap. However, Samsung has a track record of over-promising on HBM timelines, so this deserves skepticism. The P4 fab pivot (from foundry-focused to hybrid memory) shows how aggressively they are chasing HBM.

Capex Estimate: Samsung's semiconductor capex for 2025 was approximately $35-40B total (memory + foundry), with 2026 expected at a similar or higher level. The memory/HBM portion is estimated at $15-20B.


2.3 Micron

Current Base:

Expansion Plans:

| Project | Investment | Timeline | Capacity Impact |
|---|---|---|---|
| New Hiroshima fab | $9.6B (~JPY 1.5 trillion) | Announced December 2025; construction through 2026-2027; production start targeted for 2027 | Major expansion -- dedicated to advanced DRAM/HBM production. Will house 1-gamma and beyond nodes |
| Singapore packaging expansion | Part of ongoing capex | Throughout 2026 | Expanding advanced packaging capacity for HBM stacking (this was a bottleneck for Micron) |
| Boise / US expansion | Benefiting from CHIPS Act subsidies (~$6.1B) | Multi-year buildout | Primarily for conventional DRAM/NAND, but frees up Asian capacity for HBM |
| Market share target | 24%+ by end of 2026 | Ongoing ramp | From ~18-20% in early 2026 to 24%+ by year-end |

Technology Roadmap:

Key Insight: Micron is the smallest of the three in HBM but growing the fastest in percentage terms. The $9.6B Hiroshima fab is a transformative investment -- it nearly doubles Micron's advanced DRAM capacity in Japan. However, like Yongin for SK Hynix, this does not produce meaningful volume until 2027. Micron's near-term HBM growth comes from converting existing lines and expanding Singapore packaging. Micron's advantage is its strong relationship with multiple customers (4 qualified clients, diversifying beyond just NVIDIA) and its 1-beta node leadership which gives it a die-size advantage.

Capex Estimate: Micron FY2026 capex guided at approximately $14-16B total.


3. Total Industry Supply Forecast

3.1 HBM Supply by Company (Revenue, $B)

| Company | 2024 (Actual) | 2025 (Actual/Est.) | 2026 (Forecast) | 2027 (Forecast) | 2028 (Forecast) |
|---|---|---|---|---|---|
| SK Hynix | ~$12-13B | ~$22-25B | ~$30-35B | ~$38-42B | ~$42-48B |
| Samsung | ~$3-4B | ~$7-9B | ~$14-18B | ~$22-28B | ~$28-35B |
| Micron | ~$2-3B | ~$5-7B | ~$10-13B | ~$16-20B | ~$20-25B |
| Total | ~$17-20B | ~$34-41B | ~$54-66B | ~$76-90B | ~$90-108B |
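The implied growth rate in the revenue forecast above can be made explicit. Using range midpoints is my simplification of the report's estimates:

```python
# Implied 2024->2028 CAGR of total HBM revenue, from range midpoints ($B).
totals_bn = {2024: (17 + 20) / 2, 2028: (90 + 108) / 2}

years = 2028 - 2024
cagr = (totals_bn[2028] / totals_bn[2024]) ** (1 / years) - 1
print(f"2024->2028 total HBM revenue CAGR: ~{cagr:.0%}")  # ~52%
```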

3.2 HBM Supply by Capacity (Estimated Total Industry Bit Production)

| Metric | 2024 | 2025 | 2026 | 2027 | 2028 |
|---|---|---|---|---|---|
| Total HBM bits produced (indexed, 2024 = 100) | 100 | ~200-220 | ~350-400 | ~550-650 | ~800-950 |
| HBM as % of total DRAM wafer starts | ~10-12% | ~17-19% | ~22-25% | ~28-32% | ~32-38% |
| Equivalent stacks produced (millions, est.) | ~60-70M | ~120-140M | ~200-240M | ~300-360M | ~400-480M |

3.3 HBM Demand by Customer Segment

| Customer | HBM per Unit | Units (2026 Est.) | Total HBM Demand (2026) | 2027 Projection |
|---|---|---|---|---|
| NVIDIA B200 | 8 stacks (192GB HBM3E 8-hi) | ~2-3M GPUs | ~16-24M stacks | Declining (replaced by B300) |
| NVIDIA B300 (Blackwell Ultra) | 8 stacks (288GB HBM3E 12-hi) | ~1-2M GPUs (ramping H2 2026) | ~8-16M stacks | ~3-4M GPUs = 24-32M stacks |
| NVIDIA Rubin (R100) | 8 stacks (HBM4) | Sampling 2026, volume 2027 | Minimal | ~1-2M GPUs = 8-16M stacks |
| AMD MI350X | 8 stacks (288GB HBM3E 12-hi) | ~200-400K | ~1.6-3.2M stacks | Continuing |
| AMD MI400 | Up to 12 stacks (432GB HBM4) | Late 2026 sampling | Minimal | Ramping |
| Google TPU v6/v7 | 4-8 stacks per chip | Tens of thousands | ~2-4M stacks | Growing rapidly |
| Amazon Trainium 3 | 4-8 stacks per chip | Ramping 2026 | ~1-3M stacks | Growing rapidly |
| Microsoft Maia | 4-8 stacks per chip | Ramping | ~1-2M stacks | Growing |
| Meta MTIA v3 | Multiple stacks | Ramping | ~1-2M stacks | Growing |
| Other (Intel, Broadcom ASICs, etc.) | Varies | Various | ~2-5M stacks | Growing |
| TOTAL DEMAND | | | ~35-55M stacks (est.) | ~50-75M stacks |

Critical note on ASIC demand growth: TrendForce reported in July 2025 that HBM demand from custom ASICs (Google TPU, Amazon Trainium, etc.) is expected to surge 80% in 2026. This is the demand vector most analysts underestimate. Every hyperscaler is designing their own AI chips, and every one of them needs HBM. This is additive to the GPU demand, not substitutional.
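Summing the per-segment ranges in the demand table is a useful sanity check. Rubin and MI400 are excluded because the table lists their 2026 volume as "minimal"; the figures are the report's estimates:

```python
# 2026 HBM stack demand by segment, millions of stacks (report's ranges).
demand_2026 = {
    "NVIDIA B200": (16, 24),
    "NVIDIA B300": (8, 16),
    "AMD MI350X": (1.6, 3.2),
    "Google TPU v6/v7": (2, 4),
    "Amazon Trainium 3": (1, 3),
    "Microsoft Maia": (1, 2),
    "Meta MTIA v3": (1, 2),
    "Other ASICs": (2, 5),
}

total_low = sum(lo for lo, _ in demand_2026.values())
total_high = sum(hi for _, hi in demand_2026.values())
print(f"2026 HBM demand: ~{total_low:.0f}-{total_high:.0f}M stacks")
```

The raw endpoints sum to roughly 33-59M stacks, slightly wider than the table's ~35-55M total; the narrower headline range presumably reflects that not every segment hits its extreme simultaneously.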

3.4 Supply vs. Demand Balance

| Year | Supply (Stacks, M) | Demand (Stacks, M) | Balance | Price Implication |
|---|---|---|---|---|
| 2024 | ~60-70M | ~65-75M | Moderate shortage | Prices stable/rising |
| 2025 | ~120-140M | ~140-170M | Shortage | Prices up 5-10% |
| 2026 | ~200-240M | ~230-280M | Still short | Prices stable to slight softening in H2 |
| 2027 | ~300-360M | ~320-400M | Approaching balance | Modest ASP decline risk (~5-15%), offset by mix shift to HBM4 |
| 2028 | ~400-480M | ~420-520M | Balanced to slight oversupply possible | Price pressure begins if demand growth slows |
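One way to read the balance table: take the midpoint of each supply and demand range and track the gap. Midpoints are my simplification of the report's estimates:

```python
# year: ((supply_low, supply_high), (demand_low, demand_high)), millions of stacks
balance = {
    2024: ((60, 70), (65, 75)),
    2025: ((120, 140), (140, 170)),
    2026: ((200, 240), (230, 280)),
    2027: ((300, 360), (320, 400)),
    2028: ((400, 480), (420, 520)),
}

gaps = {year: sum(s) / 2 - sum(d) / 2 for year, (s, d) in balance.items()}
for year, gap in gaps.items():
    print(f"{year}: midpoint gap {gap:+.0f}M stacks")
```

At midpoints the deficit never actually closes (about -30M stacks even in 2028); balance arrives only toward the optimistic end of the supply ranges or the low end of demand, which is consistent with the report's "depends on demand not accelerating" caveat.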

When does supply catch demand? On current trajectories, the ranges above do not cross until late 2027 at the earliest, with 2028 the more likely balancing year -- the late-2027-to-mid-2028 window flagged in the Executive Summary -- and even that assumes demand does not accelerate further.

The key insight: Even in the base case, HBM "oversupply" looks very different from conventional DRAM oversupply. HBM is sold on long-term contracts (12-18 month agreements) to a concentrated customer base (NVIDIA alone is probably 50%+ of demand), and its technology upgrade cycle (HBM3E to HBM4) resets pricing with each generation. This is NOT the commodity DRAM market.


4. The Conventional DRAM Supply Squeeze

4.1 The Wafer Diversion Math

This is one of the most under-appreciated dynamics in the memory industry.

As of 2026, HBM consumes approximately 22-25% of total industry DRAM wafer starts. But the headline wafer share understates the strain: as noted in Section 1.2, each wafer allocated to HBM effectively consumes roughly 2.5-3x the bit capacity a conventional DRAM wafer would produce, once die thinning, TSV processing, and stacking losses are included. The resulting hit to conventional DRAM bit supply is quantified in the table below.

4.2 Impact on Conventional DRAM

| Metric | 2024 | 2025 | 2026 | 2027 |
|---|---|---|---|---|
| DRAM wafers allocated to HBM | ~10-12% | ~17-19% | ~22-25% | ~28-32% |
| Effective conventional DRAM bit supply reduction | ~7-8% | ~11-13% | ~15-17% | ~19-22% |
| Conventional DRAM bit demand growth (PC, server, mobile) | ~10-15% | ~12-15% | ~10-15% | ~10-15% |
| Net conventional DRAM supply/demand | Tight | Very tight | Shortage | Potential acute shortage |

This is already playing out in 2026. Reports indicate DDR5 prices are rising due to wafer diversion. The memory chip shortage of 2026 is being driven not by a demand surge for PCs/phones but by the supply-side effect of HBM eating wafer capacity.

4.3 The Self-Hedging Dynamic

This is why the memory cycle is different this time:

The conventional DRAM squeeze is a natural hedge against HBM oversupply risk. The memory companies (especially SK Hynix) are explicitly aware of this and are managing the transition carefully. SK Group Chairman Chey's statement about "trying to stabilize memory prices" is a direct reference to this dynamic -- they are NOT going to flood the market with HBM capacity at the expense of cratering conventional DRAM.


5. Bottlenecks

5.1 TSMC CoWoS Packaging (THE Primary Bottleneck)

HBM stacks are useless without advanced packaging. Every AI GPU needs its HBM stacks integrated onto a substrate alongside the GPU die using TSMC's CoWoS (Chip-on-Wafer-on-Substrate) or similar advanced packaging technology.

Current Status (Q1 2026):

Expansion Timeline:

| Period | CoWoS Monthly Capacity (Est.) | Notes |
|---|---|---|
| Q4 2024 | ~20,000 wafers/month | Severe constraint |
| Q4 2025 | ~30,000-35,000 | Rapid expansion but still tight |
| Q4 2026 | ~45,000-55,000 | Outsourcing helping; new lines online |
| Q4 2027 | ~60,000-75,000 | AP6, AP7, and OSAT expansion |

Key insight: CoWoS is the pacing bottleneck, NOT HBM DRAM production. Even if Samsung, SK Hynix, and Micron can produce more HBM stacks, those stacks sit in inventory if TSMC cannot package them onto GPU substrates. This is why TSMC is outsourcing packaging for the first time -- a historic move that shows just how dire the constraint is.

5.2 HBM Stacking / Advanced Packaging at Memory Companies

The memory companies themselves have their own packaging bottleneck:

SK Hynix: MR-MUF (Mass Reflow Molded Underfill)

Samsung: TCB (Thermal Compression Bonding)

Hybrid Bonding (HBM4 and Beyond)

5.3 Yield Challenges: 12-Hi and Beyond

| Stack Height | Yield Challenge | Estimated Stack Yield (Good / Attempted) |
|---|---|---|
| HBM3E 8-hi | Mature, well-optimized | ~80-90% (SK Hynix), ~70-80% (Samsung), ~75-85% (Micron) |
| HBM3E 12-hi | Significant challenge | ~60-75% (SK Hynix), ~55-70% (Samsung), ~60-70% (Micron) |
| HBM4 8-hi | New architecture + hybrid bonding | ~50-65% initially, improving |
| HBM4 12-hi | Extremely challenging | Expected ~40-55% initially |
| HBM4 16-hi | Frontier | Likely <50% initially |

The yield math matters enormously for supply: If HBM3E 12-hi yields are 65%, that means 35% of all stacking attempts produce defective stacks. This is 35% of capacity that is wasted. A 10-percentage-point improvement in yield (65% to 75%) is equivalent to ~15% more supply from the same installed capacity. Yield improvement is the single fastest way to increase HBM supply without building new fabs.
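The yield arithmetic above is worth making explicit: good stacks shipped scale linearly with stack yield at fixed stacking capacity, so the supply gain from a yield improvement is the ratio of the yields.

```python
# Supply gain from a stack-yield improvement at fixed installed capacity.
def relative_supply_gain(yield_before: float, yield_after: float) -> float:
    """Fractional increase in good-stack output from a yield improvement."""
    return yield_after / yield_before - 1

gain = relative_supply_gain(0.65, 0.75)
print(f"65% -> 75% stack yield: ~{gain:.0%} more supply from the same capacity")  # ~15%
```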

5.4 Which Bottleneck Breaks First?

Ranking of bottlenecks (most severe to least severe, as of Q1 2026):

  1. TSMC CoWoS -- Still the #1 constraint. Being addressed aggressively but demand continues to outgrow supply. Outsourcing is a partial solution. Expected to ease meaningfully by H2 2027.
  2. HBM stacking/packaging at memory companies -- #2 constraint. MR-MUF/TCB capacity is being expanded but requires specialized equipment with long lead times. Expected to ease by mid-2027.
  3. DRAM wafer capacity -- Becoming tighter as HBM takes more share. New fabs (Yongin, Hiroshima new fab) don't produce until 2027-2028. This one actually gets WORSE before it gets better.
  4. HBM4 yield immaturity -- Will become the dominant bottleneck in 2027 as the industry transitions from HBM3E to HBM4. Early HBM4 yields will be low, constraining supply even as raw wafer capacity increases.

Net assessment: The bottleneck sequence is: CoWoS (2024-2026) --> HBM packaging/yield (2026-2027) --> Wafer capacity (2027-2028). At no point in the next 2 years does every bottleneck ease simultaneously. This is why structural shortage persists.


6. Investment Implications

6.1 Timeline for HBM Pricing Pressure

| Period | Pricing Dynamic | Confidence |
|---|---|---|
| H1 2026 | Prices firm. All supply sold out. No meaningful pressure. | Very high |
| H2 2026 | Prices stable to slightly lower on HBM3E as capacity ramps. HBM4 pricing premium offsets. | High |
| H1 2027 | First real window for modest HBM3E price declines (5-10%). HBM4 ramp supports blended ASPs. | Medium-high |
| H2 2027 | Supply approaching demand. HBM3E prices decline 10-15%. HBM4 still premium. Blended ASP flat to down slightly. | Medium |
| 2028 | Risk of meaningful oversupply IF demand growth slows. If AI capex continues growing, supply stays balanced. | Medium-low (high uncertainty) |

When to worry: The earliest investors should begin monitoring for HBM pricing pressure is Q3-Q4 2027. The trigger would be: (a) memory companies reporting HBM inventory build, (b) contract prices declining quarter-over-quarter, (c) AI capex growth decelerating (watch hyperscaler capex guidance).

6.2 Which Company Benefits Most?

Near-term (2026): SK Hynix

Medium-term (2027): Samsung has the most upside optionality

Steady gainer: Micron

6.3 Key Triggers to Watch

Bullish triggers (extend the supercycle):

  1. NVIDIA Rubin (R100) shipments pull forward -- each Rubin GPU uses HBM4, creating new demand
  2. Hyperscaler ASIC demand continues surging 80%+ annually
  3. HBM4 yields come in lower than expected -- constraining supply even as wafer capacity ramps
  4. Sovereign AI programs (Middle East, Asia) create new demand pools
  5. SK Hynix/Samsung signal continued supply discipline (not racing to crash prices)

Bearish triggers (signal the cycle is turning):

  1. Memory companies report HBM inventory build (not sold out)
  2. NVIDIA GPU shipment delays or order cancellations
  3. Hyperscaler capex guidance cuts (watch Google, Microsoft, Amazon, Meta quarterly earnings)
  4. HBM contract prices decline >10% QoQ
  5. Samsung or Micron begin aggressive pricing to gain share (rational oligopoly breaks down)
  6. CoWoS bottleneck resolves faster than expected (TSMC outsourcing succeeds at scale)

The single most important leading indicator: Listen to SK Hynix earnings calls for any change in the language around "sold out." As long as they say HBM capacity is "sold out" for the next 12+ months, the cycle is intact. The moment they say something like "we have good visibility" instead of "sold out," pricing pressure is approaching.

6.4 The Non-Consensus View

Most analysts are focused on "when does HBM supply catch demand" as a linear projection. Here is why that framing may be wrong:

  1. HBM4 resets the clock. Every technology transition (HBM3E --> HBM4 --> HBM4E) requires re-qualification and new packaging, and suffers early yield losses. This is NOT like commodity DRAM, where bit supply scales smoothly. Each new generation creates a temporary supply disruption.
  2. The demand curve is exponential, not linear. AI training compute is growing 4-5x annually. Inference is growing even faster as models deploy. Every new AI model (GPT-5, Gemini Ultra, Claude 5) is larger and requires more memory. Demand forecasts based on current model sizes will underestimate future demand.
  3. SK Hynix's Chairman saying "wafer shortage until 2030" is not just talking his book. He is the person with the best demand visibility in the industry (direct line to NVIDIA's order book). If NVIDIA's internal GPU shipment plans for 2028-2030 require more HBM than the industry can produce, then the shortage truly could persist.
  4. The conventional DRAM squeeze creates a price floor. Even if HBM-specific ASPs decline, the overall memory industry revenue is supported by rising conventional DRAM prices. This prevents the profit collapse that normally characterizes memory downturns.

6.5 Summary: Position Sizing Guidance

| Timeframe | Thesis | Position |
|---|---|---|
| Now - end 2026 | HBM supercycle intact, shortage continues, earnings will beat | Full conviction position in SK Hynix / Micron |
| 2027 | Transition year. HBM3E may see modest price pressure. HBM4 ramp creates new opportunity. Conventional DRAM supports. | Maintain position, watch for triggers |
| 2028+ | High uncertainty. Depends on AI demand trajectory. If AGI development accelerates (which is the operating assumption), memory demand is insatiable. | Monitor; trim if triggers fire |

Given the AGI premise: memory demand only goes up. Recursive self-improvement means ever-larger models, ever-more inference, ever-more GPUs, ever-more HBM. The memory companies are building capacity for a world where AI compute demand grows 10x in 5 years. If anything, they are UNDER-building. The risk is not a demand shortfall -- it is that they cannot build capacity fast enough.


Appendix: Key Data Points by Source

| Data Point | Value | Source |
|---|---|---|
| SK Hynix Q1 2026 operating margin | ~72% | SK Hynix Q1 2026 earnings |
| SK Hynix HBM market share (mid-2025) | ~62% | Industry reports |
| SK Hynix Yongin investment | KRW 21.6T (~$16B) | SK Hynix board approval, Feb 2026 |
| Samsung 2026 HBM capacity increase | ~50% YoY | TrendForce, Dec 2025 |
| Samsung HBM3E 12-hi NVIDIA qualification | September 2025 (after 18-month delay) | TrendForce, Sep 2025 |
| Samsung Q1 2026 profit surge | ~700-800% YoY (semiconductor division) | Samsung Q1 2026 earnings |
| Micron new Hiroshima fab | $9.6B investment | TrendForce, Dec 2025 |
| Micron HBM market share target | 24%+ by end of 2026 | TrendForce, Jun 2025 |
| Micron qualified clients | 4 major GPU/ASIC clients | Micron earnings |
| HBM as % of DRAM wafers (2026) | ~22-25% | TrendForce, industry estimates |
| ASIC HBM demand growth (2026) | +80% YoY | TrendForce, Jul 2025 |
| SK Group Chairman wafer shortage forecast | Until 2030 | GTC 2026, Mar 2026 |
| HBM3E 8-hi price per stack | ~$300-350 | Industry estimates |
| HBM3E 12-hi price per stack | ~$500-600 | Industry estimates |
| NVIDIA B300 HBM requirement | 8x HBM3E 12-hi (288GB) | NVIDIA specs |
| AMD MI400 HBM requirement | Up to 432GB HBM4 | AMD announcements |
| TSMC CoWoS capacity (2026) | ~45,000-55,000 wafers/month target | TSMC disclosures |

This analysis reflects data available as of April 23, 2026. The HBM market is evolving rapidly. Key earnings to watch: SK Hynix (quarterly), Samsung (quarterly), Micron (quarterly on different fiscal calendar), NVIDIA (for demand signals), and hyperscaler capex disclosures (Google, Microsoft, Amazon, Meta).