Micron Sells Out Entire HBM3E Supply for 2024, Most of 2025
by Anton Shilov on March 22, 2024 11:00 AM EST

Being the first company to ship HBM3E memory has its perks for Micron, as the company has revealed that it has managed to sell out its entire supply of the advanced high-bandwidth memory for 2024, while most of its 2025 production has been allocated as well. Micron's HBM3E memory (or, as Micron alternatively calls it, HBM3 Gen2) was among the first to be qualified for NVIDIA's updated H200/GH200 accelerators, so it looks like the DRAM maker will be a key supplier to the green company.
"Our HBM is sold out for calendar 2024, and the overwhelming majority of our 2025 supply has already been allocated," said Sanjay Mehrotra, chief executive of Micron, in prepared remarks for the company's earnings call this week. "We continue to expect HBM bit share equivalent to our overall DRAM bit share sometime in calendar 2025."
Micron's first HBM3E product is an 8-Hi 24 GB stack with a 1024-bit interface, 9.2 GT/s data transfer rate, and a total bandwidth of 1.2 TB/s. NVIDIA's H200 accelerator for artificial intelligence and high-performance computing will use six of these cubes, providing a total of 141 GB of accessible high-bandwidth memory.
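Those figures can be sanity-checked with some quick arithmetic: a 1024-bit interface running at 9.2 GT/s works out to the quoted ~1.2 TB/s per stack, and six 24 GB stacks give 144 GB raw (of which 141 GB is accessible on the H200). A minimal sketch, using only the numbers from the article:

```python
# Back-of-the-envelope check of the HBM3E stack figures quoted above
# ("cube" = one 8-Hi stack; all inputs are from the article).
interface_bits = 1024          # per-stack interface width, bits
data_rate_gtps = 9.2           # per-pin data rate, GT/s
stacks = 6                     # cubes per NVIDIA H200
capacity_per_stack_gb = 24     # GB per 8-Hi stack

# Bandwidth per stack: width (bits) * rate (GT/s) / 8 bits per byte
per_stack_gbps = interface_bits * data_rate_gtps / 8   # GB/s
total_tbps = per_stack_gbps * stacks / 1000            # aggregate TB/s
raw_capacity_gb = capacity_per_stack_gb * stacks       # GB before overhead

print(round(per_stack_gbps, 1))  # 1177.6 GB/s, i.e. the quoted ~1.2 TB/s
print(round(total_tbps, 2))      # ~7.07 TB/s aggregate across six stacks
print(raw_capacity_gb)           # 144 GB raw; 141 GB is user-accessible
```

The small gap between 144 GB raw and 141 GB accessible reflects capacity reserved by the platform rather than a difference in the stacks themselves.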
"We are on track to generate several hundred million dollars of revenue from HBM in fiscal 2024 and expect HBM revenues to be accretive to our DRAM and overall gross margins starting in the fiscal third quarter," said Mehrotra.
The company has also begun sampling its 12-Hi 36 GB stacks, which offer 50% more capacity. These KGSDs (known good stack dies) will ramp in 2025 and will be used for next-generation AI products. Meanwhile, it does not look like NVIDIA's B100 and B200 will use 36 GB HBM3E stacks, at least initially.
Demand for artificial intelligence servers set records last year, and it looks like it is going to remain high this year as well. Some analysts believe that NVIDIA's A100 and H100 processors (as well as their various derivatives) commanded as much as 80% of the entire AI processor market in 2023. And while NVIDIA will face tougher competition this year from AMD, AWS, D-Matrix, Intel, Tenstorrent, and other companies on the inference front, it looks like NVIDIA's H200 will still be the processor of choice for AI training, especially for big players like Meta and Microsoft, who already run fleets consisting of hundreds of thousands of NVIDIA accelerators. With that in mind, being a primary supplier of HBM3E for NVIDIA's H200 is a big deal for Micron, as it enables the company to finally capture a sizable chunk of the HBM market, which is currently dominated by SK Hynix and Samsung, and in which Micron controlled only about 10% as of last year.
Meanwhile, because every DRAM device inside an HBM stack has a wide interface, it is physically larger than regular DDR4 or DDR5 ICs. As a result, the ramp of HBM3E memory will affect the bit supply of commodity DRAM from Micron, the company said.
"The ramp of HBM production will constrain supply growth in non-HBM products," Mehrotra said. "Industrywide, HBM3E consumes approximately three times the wafer supply as DDR5 to produce a given number of bits in the same technology node."
Source: Micron
27 Comments
The Hardcard - Monday, March 25, 2024 - link
They won’t have 80 percent of 2024 production when year-end results are released in early 2025. Reply

Samus - Thursday, March 28, 2024 - link
SK Hynix hasn't had 50% HBM market share in any fiscal year, ever. They have split majority market share with Samsung, with Micron in third - until now. Micron at the moment has 100% market share because they are the only one that has shipped anything. While that will change, at the moment SK Hynix and Samsung have 0% of the HBM3E market because they are still in production. Reply

Terry_Craig - Saturday, March 23, 2024 - link
https://www.reuters.com/technology/samsung-use-chi... Reply

ballsystemlord - Friday, March 22, 2024 - link
It's not the A100/H100 that this HBM is being sold for, it's for a high end GPU for us... Yes, that's what it is... And this soon to be high end GPU is going to be sold at a reasonable price... Yes, that's what's going to happen... I can see it all so clearly now... Reply

Oxford Guy - Sunday, March 24, 2024 - link
And there will be good games instead of the same warmed-over cartoonish stuff. Reply

Bruzzone - Monday, March 25, 2024 - link
Samsung produced 300 M to 500 M Ampere on channel supply data. Seems to have been a cost : margin agreement with a capped Nvidia supply commitment; Samsung was selling direct to AIBs? I think so. Also suggests there were Ampere risk/ramp costs to overcome, reported by many, although I never fully bought into that on the volume of the full production run to achieve Samsung's margin objective.
From Ada mobile dGPU design generation to Blackwell desktop design generation seems reasonable moving back to wide bus + VRAM advantage on a depreciated 4 nm process trades off dGPU cost with board, trace and VRAM cost.
I believe BW RTX consumer will be more reasonably priced than many speculate. The reason is to appease game land and, more important, to encourage Ampere-to-Blackwell upgrades, for which there is a hidden Nvidia agenda.
The agenda is a world market 'bargain' priced RT/Nvidia/basic ML/AI development platform; entry level on CPU instruction set limits and Ampere features, serviced by SI integration relying on 300 M used Ampere attached to 300 M plus used Xeon v3/v4/Skylake. That development slingshot, on my supply data in relation to Nvidia's applications-segment stronghold position, is actually the big BW priced-right win, driving world market development onto Ampere attached to capable-enough Xeon at a world market price. mb Reply
Diogene7 - Monday, March 25, 2024 - link
I am wondering if, given some more years (somewhere between 2026 and 2032), this steep increase in HBM volume manufacturing could shrink the HBM price premium versus GDDR memory, or even LPDDR memory, enough to make it a premium option for the consumer market? It would then be possible to see some premium computing products (ex: Apple Mac Pro, Apple Studio, Apple MacBook Pro, …) have the option for 32GB / 64GB / 128GB HBM memory, and maybe a few years later the MacBook Air / iPad / iPhone, as HBM memory seems more power efficient per bit than some other memory types…
Even better would be to have Non-Volatile-Memory (NVM) like SOT-MRAM memory dies used to create a Non-Volatile HBM stack: it would enable so many new opportunities!!! (Ex: a 32GB stack of low-latency, high-bandwidth Non-Volatile-Memory / storage). Reply