Not Dead Yet: WD Releases New 6TB 2.5-Inch External Hard Drives - First Upgrade in Seven Years Anton Shilov

The vast majority of laptops nowadays use solid-state drives, which is why the development of new, higher-capacity 2.5-inch hard drives has all but come to a halt. Or rather, it almost has. It seems that the 2.5-inch form factor has a bit more life left in it after all, as today Western Digital has released a slate of new external storage products based on a new, high-capacity 6 TB 2.5-inch hard drive.

WD's new 6 TB spinner is being used to offer upgraded versions of the company's My Passport, Black P10, and G-DRIVE ArmorATD portable storage products. Notably, however, WD isn't selling the bare 2.5-inch drive on a standalone basis – at least not yet – so for the time being it's entirely reserved for use in external storage.

Consequently, WD isn't publishing much about the 6 TB hard drive itself. The maximum read speed for these products is listed at 130 MB/sec – the same as WD's existing externals – and write performance goes unmentioned.

Notably, all of these 6 TB devices are thicker than their existing 5 TB counterparts, which strongly suggests that WD has increased their storage capacity not by improving their areal density, but by adding another platter to their existing drive platform. This, in turn, would help to explain why these new drives are being used in external storage products, as WD's 5 TB 2.5-inch drives are already 15mm thick, which is the highest standard thickness for a 2.5-inch form-factor, and already incompatible with a decent number of portable devices. External drives, in turn, are the only place these even thicker 2.5-inch drives would fit.

WD's specifications also gloss over whether these drives are based on shingled magnetic recording (SMR) technology. The company was already using SMR for their 5 TB drives in order to hit the necessary storage density there, so it seems very likely that they're continuing to use SMR for their 6 TB drives. Which is likely why the company isn't publishing write performance specifications for the drives, as we've seen SMR drives bottom out as low as 10 MB/second in our testing when the drive needs to rewrite data.
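
To put that worst-case floor in perspective, here is a back-of-the-envelope sketch of how long a hypothetical 100 GB rewrite could take at the two transfer rates quoted above. The workload size is purely illustrative, and real SMR behavior depends on how much of the data lands in the drive's CMR cache:

```python
# Back-of-the-envelope only: how long rewriting a hypothetical 100 GB of data
# could take at the two transfer rates quoted above.
def transfer_hours(size_gb: float, rate_mb_per_s: float) -> float:
    """Time in hours to move size_gb at a sustained rate of rate_mb_per_s."""
    return (size_gb * 1000) / rate_mb_per_s / 3600

workload_gb = 100  # hypothetical rewrite workload, for illustration
for label, rate in (("Rated sequential, 130 MB/s", 130),
                    ("SMR rewrite worst case, 10 MB/s", 10)):
    print(f"{label}: ~{transfer_hours(workload_gb, rate):.1f} hours")
# -> roughly 0.2 hours vs. 2.8 hours for the same 100 GB
```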

Depending on the specific drive model, all of the external storage drives use either a USB-C connector, or the very quaint USB Micro-B 3.0 connector. Though regardless of the physical connector used, all of the drives feature a USB 3.2 Gen 1 (5Gbps) interface, which is more than ample given the drives' physically-limited transfer speeds.
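
The "more than ample" claim is easy to sanity-check. The sketch below assumes USB 3.2 Gen 1's 5 Gbps signaling rate with 8b/10b line encoding, which is how that generation encodes data on the wire:

```python
# Sanity check: usable USB 3.2 Gen 1 bandwidth vs. the drive's rated speed.
# Gen 1 signals at 5 Gbps with 8b/10b encoding (8 data bits per 10 line bits).
line_rate_bps = 5e9
encoding_efficiency = 8 / 10
usable_mb_per_s = line_rate_bps * encoding_efficiency / 8 / 1e6  # ~500 MB/s

drive_mb_per_s = 130  # WD's quoted maximum read speed
print(f"Usable interface bandwidth: ~{usable_mb_per_s:.0f} MB/s")
print(f"Headroom over the drive:    ~{usable_mb_per_s / drive_mb_per_s:.1f}x")
```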

Wrapping things up, according to WD the new drives are available at retail immediately. The WD My Passport Ultra and WD My Passport Ultra for Mac with USB-C both retail for $199.99; the WD My Passport and WD My Passport for Mac are $179.99; the WD My Passport Works With USB-C is $184.99; the gaming-focused WD_Black P10 Game Drive sells for $184.99, and the SanDisk Professional G-Drive ArmorATD is $229.99. All of Western Digital's external storage drives are backed with a three-year limited warranty.

https://www.anandtech.com/show/21400/not-dead-yet-wd-intros-6tb-25-inch-external-hard-drives Thu, 16 May 2024 19:00:00 EDT tag:www.anandtech.com,21400:news
TSMC to Expand Specialty Capacity by 50%, Introduce 4nm N4e Low-Power Node Anton Shilov

With all the new fabs being built in Germany and Japan, as well as the expansion of production capacity in China, TSMC is planning to extend its production capacity for specialty technologies by 50% by 2027. As disclosed by the company during its European Technology Symposium this week, TSMC expects to need to not only convert existing capacity to meet demands for specialty processes, but even build new (greenfield) fab space just for this purpose. One of the big drivers for this demand, in turn, will be TSMC's next specialty node: N4e, a 4nm-class ultra-low-power production node.

"In the past, we always did the review phase [for upcoming fabs], but for the first time in a long time at TSMC, we started building greenfield fab that will address the future specialty technology requirements," said Dr. Kevin Zhang, Senior Vice President, Business Development and Overseas Operations Office, at the event. "In the next four to five years, we actually going to grow our specialty capacity by up to 1.5x. In doing so we actually expanding the footprint of our manufacturing network to improve the resiliency of the overall fab supply chain."

On top of its well-known major logic nodes like N5 and N3E, TSMC also offers a suite of specialty nodes for applications such as power semiconductors, mixed analog I/O, and ultra-low-power applications (e.g. IoT). These are typically based on the company's trailing manufacturing processes, but regardless of the underlying technology, the capacity demand for these nodes is growing right alongside the demand for TSMC's major logic nodes. All of which has required TSMC to reevaluate how they go about planning for capacity on their specialty nodes.

TSMC's expansion strategy in recent years has pursued several goals. One of them has been to build new fabs outside of Taiwan; another has been to generally expand production capacity to meet future demand for all types of process technologies – which is why the company is building up capacity for specialty nodes.

At present, TSMC's most advanced specialty node is N6e, an N7/N6 variant that supports operating voltages between 0.4V and 0.9V. With N4e, TSMC is looking at voltages below 0.4V. Though for now, TSMC is not disclosing much in the way of technical details for the planned node; given the company's history here, we expect they'll have more to talk about next year once the new process is ready.
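
As a rough illustration of why pushing below 0.4 V matters for these IoT-class nodes, dynamic switching power scales to first order with the square of supply voltage. The figures below are a simple sketch holding clock speed and capacitance constant; they are not TSMC numbers:

```python
# First-order illustration only: dynamic switching power scales roughly with
# the square of supply voltage (P ~ C * V^2 * f), holding clock and
# capacitance constant. These are not TSMC figures.
def relative_dynamic_power(v: float, v_ref: float = 0.9) -> float:
    return (v / v_ref) ** 2

for v in (0.9, 0.6, 0.4, 0.35):
    print(f"{v:.2f} V -> ~{relative_dynamic_power(v) * 100:.0f}% of the 0.9 V dynamic power")
# 0.40 V lands around 20%, and 0.35 V around 15%, of the 0.9 V figure
```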

https://www.anandtech.com/show/21397/tsmc-to-expand-specialty-capacity-by-50-introduce-4nm-lowpower-node Thu, 16 May 2024 17:00:00 EDT tag:www.anandtech.com,21397:news
The Arctic Cooling Freezer 36 ARGB CPU Cooler Review: Budget Cooling Done Well E. Fylladitakis As modern high-performance CPUs generate more heat, there's been a noticeable increase in the demand for powerful air coolers capable of managing these thermal challenges. Traditional stock air coolers, while sufficient for regular use, are typically designed to be cheap and relatively compact, leaving further improvements to noise control and peak cooling efficiency on the table. This gap has long prompted advanced users and system builders to opt for high-quality aftermarket coolers that are designed to better handle the heat output from top-tier processors.

Known for their innovative approach to PC hardware, Arctic Cooling has stepped into this competitive market with a product aimed at delivering effective cooling at a very low retail price. The Freezer 36 A-RGB, a dual fan tower cooler, is designed to support the cooling demands of the latest CPUs while also offering customizable RGB lighting for visual flair. This review will explore the features, performance, and value of the Arctic Cooling Freezer 36 A-RGB, comparing it with other leading products in the market to see how it stacks up in providing efficient and effective cooling for modern CPUs.

https://www.anandtech.com/show/21360/the-arctic-cooling-freezer-36-argb-cpu-cooler-review Thu, 16 May 2024 09:00:00 EDT tag:www.anandtech.com,21360:news
TSMC Readies Next-Gen HBM4 Base Dies, Built on 12nm and 5nm Nodes Anton Shilov

Of the several major changes coming with HBM4 memory, one of the most immediate is the sheer width of the memory interface. With the fourth-generation memory standard moving from an already wide 1024-bit interface to an ultra-wide 2048-bit interface, HBM4 memory stacks won't be business as usual; chip manufacturers are going to need to adopt more advanced packaging methods than are used today to accommodate the wider memory.

As part of its European Technology Symposium 2024 presentation, TSMC offered some fresh details on the base dies it will be manufacturing for HBM4, which will be built using logic processes. With TSMC planning to employ variations of their N12 and N5 processes for this task, the company is expecting to occupy a favorable place in the HBM4 manufacturing process, as memory fabs are not currently equipped to economically produce such advanced logic dies – if they can produce them at all.

For the first wave of HBM4, TSMC is preparing to use two fabrication processes: N12FFC+ and N5. While they serve the same purpose – integrating HBM4 memory with next-generation AI and HPC processors – they are going to be used in two different ways.

"We are working with key HBM memory partners (Micron, Samsung, SK Hynix) over advanced nodes for HBM4 full stack integration," said Senior Director of Design and Technology Platform at TSMC. "N12FFC+ cost effective base die can reach HBM for performance and N5 base die can provide even more logic with much lower power at HBM4 speeds."

TSMC Logic for HBM4 Base Die
                    N12FFC+   N5
Area                1X        0.39X
Logic GHz @ power   1X        1.55X
Power @ GHz         1X        0.35X

TSMC's base die made on the N12FFC+ fabrication process (12nm FinFET Compact Plus, which formally belongs to a 12nm-class technology but traces its roots to TSMC's well-proven 16nm FinFET production node) will be used to install HBM4 memory stacks on a silicon interposer next to system-on-chips (SoCs). TSMC believes that their N12FFC+ process is well-suited to achieve HBM4 performance, enabling memory vendors to build 12-Hi (48 GB) and 16-Hi (64 GB) stacks, with per-stack bandwidth well over 2 TB/second.

"We are also optimizing CoWoS-L and CoWoS-R for HBM4," the Senior Director said. "Both CoWoS-L and CoWoS-R [use] over eight layers to enable HBM4's routing of over 2,000 interconnects with [proper] signal integrity."

HBM4 base dies on N12FFC+ will be instrumental in building system-in-packages (SiPs) using TSMC's CoWoS-L or CoWoS-R advanced packaging technology, which offer interposers up to 8x reticle size – enough space for up to 12 HBM4 memory stacks. At present, HBM4 can achieve data transfer rates of 6 GT/s at currents of 14mA, according to TSMC figures.
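
As a quick sanity check on those figures, per-stack bandwidth is simply the interface width multiplied by the data rate. The sketch below uses the 2048-bit HBM4 interface and the per-stack and per-package figures quoted above; the 8 GT/s case is our own inference of roughly what a ">2 TB/s per stack" figure would require, not a TSMC number:

```python
# Per-stack bandwidth = interface width x data rate. The 6 GT/s case uses
# TSMC's quoted rate; the 8 GT/s case is our own inference of roughly what a
# ">2 TB/s per stack" figure implies.
def stack_bandwidth_tb_per_s(width_bits: int, data_rate_gt_per_s: float) -> float:
    return width_bits * data_rate_gt_per_s / 8 / 1000  # bits -> bytes, GB -> TB

HBM4_WIDTH = 2048  # bits per stack
print(f"6 GT/s: ~{stack_bandwidth_tb_per_s(HBM4_WIDTH, 6.0):.2f} TB/s per stack")  # ~1.54
print(f"8 GT/s: ~{stack_bandwidth_tb_per_s(HBM4_WIDTH, 8.0):.2f} TB/s per stack")  # ~2.05

# Capacity side: a 16-Hi, 64 GB stack works out to 4 GB (32 Gb) per DRAM die,
# and a 12-stack CoWoS package would top out around 12 x 64 GB of HBM4.
print(f"Per-die capacity: {64 / 16:.0f} GB, 12-stack package: {12 * 64} GB")
```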

"We collaborate with EDA partners like Cadence, Synopsys, and Ansys to certify HBM4 channel signal integrity, IR/EM, and thermal accuracy," the TSMC representative explained.

Meanwhile, as an even more advanced alternative, memory manufacturers will also have the option of tapping TSMC's N5 process for their HBM4 base dies. N5-built base dies will pack even more logic, consume less power, and will offer even higher performance. But arguably the most important benefit is that such an advanced process technology will enable very small interconnect pitches, on the order of 6 to 9 microns. This will allow N5 base dies to be used in conjunction with direct bonding, enabling HBM4 to be 3D stacked right on top of logic chips. Direct bonding stands to allow for even greater memory performance, which is expected to be a big boost for AI and HPC chips that are always scrounging for more memory bandwidth.

We already know that TSMC and SK Hynix are collaborating on HBM4 base dies, and it is likely that TSMC will also produce HBM4 base dies for Micron. By contrast, we'd be more surprised to see TSMC working with Samsung, as that conglomerate already has its own advanced logic fabs via its Samsung Foundry unit.

https://www.anandtech.com/show/21395/tsmc-readies-hbm4-base-dies-at-12nm-and-5nm Thu, 16 May 2024 08:00:00 EDT tag:www.anandtech.com,21395:news
TSMC: Performance-Optimized 3nm N3P Process on Track for Mass Production This Year Anton Shilov

As part of the second leg of TSMC's spring technology symposium series, the company offered an update on the state of its 3nm-class processes, both current and future. Building on the back of their current-generation N3E process, the optical shrink of this process technology, N3P, is now on track to enter mass production in the second half of 2024. Thanks to that shrink, N3P is expected to offer improved performance and power efficiency, as well as increased transistor density, over N3E.

N3E in Production, Yielding Well

With N3E already in volume production, TSMC is reporting that they're seeing "great" yields on the second-generation 3nm-class process node. According to the company, the D0 defect density of N3E is at relative parity with N5, matching the defect rate of the older node at the same point in its respective lifecycle. This is no small feat, given the additional complexities that come with developing one last, ever-finer generation of FinFET technology. And it is allowing TSMC's bleeding-edge customers, such as Apple with its just-launched M4 SoC, to reap the benefits of the improved process node relatively quickly.

"N3E started volume production in the fourth quarter of last year, as planned," a TSMC executive said at the event. "We have seen great yield performance on customers' products, so they did go to market as planned."

TSMC's N3E node is a relaxed version of N3B, eliminating some EUV layers and completely avoiding the usage of EUV double patterning. This makes it a bit cheaper to produce, and in some cases it widens the process window and yields, though it comes at the cost of some transistor density.

N3P on Track For Second-Half 2024

Meanwhile, looking towards the immediate future at TSMC, N3P has finished qualification and its yield performance is close to N3E, according to the company. Being an optical shrink, the N3P node is set to enable processor developers to either increase performance by 5% at the same leakage or reduce power consumption by 5% ~ 10% at the same clocks, depending on the design. The new node is also set to boost transistor density by 4% for a 'mixed' chip design, which TSMC defines as a processor consisting of 50% logic, 30% SRAM, and 20% analog circuits.

Advertised PPA Improvements of New Process Technologies
Data announced during conference calls, events, press briefings and press releases

TSMC                   N3 vs N5        N3E vs N5       N3P vs N3E      N3X vs N3P
Power                  -25-30%         -32%            -5% ~ 10%       higher
Performance            +10-15%         +18%            +5%             +5% Fmax @ 1.2V
Chip Density           ?               ?               1.04x           same
SRAM Cell Size         0.0199µm²       0.021µm²        ?               ?
                       (-5% vs N5)     (same as N5)
Volume Manufacturing   Late 2022       H2 2023         H2 2024         2025
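
As a rough guide to reading the table, the relative performance figures can be naively compounded to compare nodes that aren't quoted against each other directly. Treat the result as indicative only, since each vendor figure is quoted under its own test conditions:

```python
# Naive compounding of the relative performance figures from the table above.
# Each vendor number is quoted under its own conditions, so treat the results
# as indicative rather than exact.
n3e_vs_n5  = 1.18  # +18%
n3p_vs_n3e = 1.05  # +5%
n3x_vs_n3p = 1.05  # +5% Fmax @ 1.2V

print(f"N3P vs N5: ~{n3e_vs_n5 * n3p_vs_n3e:.2f}x")                # ~1.24x
print(f"N3X vs N5: ~{n3e_vs_n5 * n3p_vs_n3e * n3x_vs_n3p:.2f}x")   # ~1.30x
```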

While it looks like the original N3 (aka N3B) will have a relatively muted lifecycle since Apple has been its only major customer, N3E will be adopted by a wide range of TSMC's customers, which includes many of the industry's biggest chip designers. 

Since N3P is an optical shrink of N3E, it is compatible with its predecessor in terms of IP blocks, process rules, electronic design automation (EDA) tools, and design methodology. As a result, TSMC expects the majority of new tape outs to use N3P, not N3E or N3. This is logical, as N3P provides better performance and efficiency than N3E at a lower cost than the original N3.

The most important aspect of N3P is that it is on track to be production ready in the second half of this year, so expect chip designers to adopt it straight away. 

"We have also successfully delivered N3P technology," the TSMC executive said. "It has passed qualification and yield performance is close to N3E. [The process technology] has also received product customer tape outs and will start on production in the second half of this year. Because of [PPA advantages] of N3P, we expect the majority of tape outs on N3 to go to N3P."

https://www.anandtech.com/show/21394/tsmc-performanceoptimized-3nm-process-technology-on-track-for-mass-production-this-year Wed, 15 May 2024 18:00:00 EDT tag:www.anandtech.com,21394:news
Supermicro E102-13R-H Review: A Raptor Lake-P 3.5-inch SBC System for Embedded Applications Ganesh T S Single-board computers in the 3.5-inch form-factor have become extremely popular for embedded applications involving a mix of high performance requirements as well as extended peripherals support. Typical use-case scenarios include digital signage, edge inferencing solutions, retail applications, and IoT gateways. The requirements in these segments call for processors and components that can operate in a wide temperature range. The chassis and cooling solution handle other duties such as ruggedness and avoidance of moving parts. The Supermicro X13SRN-H-WOHS is a 3.5-inch SBC with a soldered-down Intel Core i7-1370PE - a Raptor Lake-P embedded processor with vPro support. It has plenty of I/O support, including a SlimSAS PCIe expansion slot. Supermicro also offers a ready-to-deploy solution using the SBC in the actively-cooled SYS-E102-13R-H box PC. This review takes a detailed look at the features and performance profile of the SYS-E102-13R-H, along with an evaluation of the thermal solution.

https://www.anandtech.com/show/21384/supermicro-e10213re-review-a-raptor-lakep-35-sbc-system-for-embedded-applications Tue, 14 May 2024 08:00:00 EDT tag:www.anandtech.com,21384:news
Intel Core i9-14900KS Review: The Swan Song of Raptor Lake With A Super Fast 6.2 GHz Turbo Gavin Bonshor For numerous generations of their desktop processor releases, Intel has made available a selection of high-performance special edition "KS" CPUs that add a little extra compared to their flagship chip. Drawing plenty of interest, primarily from enthusiasts looking for the fastest processors, Intel's latest Core i9-14900KS is a super-fast addition to its 14th Generation Core lineup, with out-of-the-box turbo clock speeds of up to 6.2 GHz. It is also the processor that closes out an era, as Intel is removing the 'i' from its legendary nomenclature for future desktop chip releases.

Reaching speeds of up to 6.2 GHz, this sets up the Core i9-14900KS as the fastest desktop CPU in the world right now, at least in terms of out-of-the-box frequencies. Building on their 'regular' flagship chip, the Core i9-14900K, the Core i9-14900KS uses the same refreshed Raptor Lake (RPL-R) 8P+16E core chip design, with a 200 MHz higher boost clock speed and a 100 MHz bump on P-core base frequency.

This new KS series SKU shows Intel's drive to offer an even faster alternative to their regular desktop K series offerings, and with the Core i9-14900KS, they look to provide the best silicon from their Raptor Lake Refresh series, with more performance available for those who can unlock it. The caveat is that achieving these ridiculously fast clock speeds of 6.2 GHz on the P-cores comes at the cost of power and heat; cooling a processor pulling upwards of 350 W is a challenge in its own right, and users need to factor this in if even contemplating a KS series SKU.

In our previous KS series review, the Core i9-13900KS reached 360 W at its peak, considerably more than the Core i9-13900K. The Core i9-14900KS, built on the same core architecture, is expected to surpass even that, drawing more than the Core i9-14900K. We aim to compare Intel's final Core i series processor to the best of what both Intel and AMD have available, and it will be interesting to see how much extra performance the KS can extract compared to the regular K series SKU.

https://www.anandtech.com/show/21378/intel-core-i9-14900ks-review-the-swan-song-of-raptor-lake-with-a-super-fast-6-2-ghz-turbo Fri, 10 May 2024 10:30:00 EDT tag:www.anandtech.com,21378:news
AMD Hits Record High Share in x86 Desktops and Servers in Q1 2024 Anton Shilov

Coming out of the dark times that preceded the launch of AMD's Zen CPU architecture in 2017, to say that AMD has turned things around on the back of Zen would be an understatement.  Ever since AMD launched its first Zen-based Ryzen and EPYC processors for client and server computers, it has been consistently gaining x86 market share, growing from a bit player to a respectable rival to Intel (and all at Intel's expense).

The first quarter of this year was no exception, according to Mercury Research, as the company achieved record-high unit shares in the x86 desktop and x86 server CPU markets thanks to the success of its Ryzen 8000-series client products and 4th Generation EPYC processors.

"Mercury noted in their first quarter report that AMD gained significant server and client revenue share driven by growing demand for 4th Gen EPYC and Ryzen 8000 series processors," a statement by AMD reads.

Desktop PCs: AMD Achieves Highest Share in More Than a Decade 

Desktops, particularly DIY desktops, have always been AMD's strongest market. After the company launched its Ryzen processors in 2017, it doubled its presence in desktops in just three years. But in recent years the company has had to prioritize production of more expensive CPUs for datacenters, which led to some erosion of its desktop and mobile market shares.

As the company secured more capacity at TSMC, it started to gradually increase production of desktop processors. In Q4 last year it introduced its Zen 4-based Ryzen 8000/Ryzen 8000 Pro processors for mainstream desktops, which appeared to be pretty popular with PC makers.

As a result of this and other factors, AMD increased unit sales of its desktop CPUs by 4.7% year-over-year in Q1 2024, and its market share reached 23.9%, the highest desktop CPU market share the company has commanded in over a decade. Interestingly, AMD does not attribute its success on the desktop front to any particular product or product family, which implies that there are multiple factors at play.

Mobile PCs: A Slight Drop for AMD amid Intel's Meteor Lake Ramp

AMD has been gradually regaining its share of the laptop market for about 1.5 years now, and sales of its Zen 4-based Ryzen 7040-series processors were quite strong in Q3 2023 and Q4 2023, when the company's unit share increased to 19.5% and 20.3%, respectively, as AMD-based notebook platforms ramped up. By contrast, Intel's Core Ultra 'Meteor Lake'-powered machines only began to hit retail shelves in Q4'23, which weighed on sales of Intel's laptop processors during that period.

In the first quarter, AMD's unit share of the notebook CPU market decreased to 19.3%, down 1% sequentially. Meanwhile, the company still demonstrated a significant year-over-year unit share increase of 3.1% and a revenue share increase of 4%, which signals rising average selling prices for AMD's latest Ryzen processors for mobile PCs.

Client PCs: Slight Gain for AMD, Small Loss for Intel

Overall, Intel remained the dominant force in client PC sales in the first quarter of 2024, with a 79.4% market share, leaving 20.6% for AMD. This is not particularly surprising given how strong and diverse Intel's client products lineup is. Even with continued success, it will take AMD years to grow sales by enough to completely flip the market.

But AMD actually gained 0.3% of unit share sequentially and 3.6% year-over-year. Notably, however, AMD's revenue share of the client PC market is significantly lower than its unit share (16.3% vs 20.6%), so the company is still somewhat pigeonholed into selling more budget and fewer premium processors overall. Still, the company has made a strong 3.8% revenue share gain since the first quarter of 2023, when its revenue share was around 12.5% on a unit share of 17%.
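
That unit-versus-revenue gap maps directly onto average selling prices. A simplified sketch using the Q1 client figures above, treating the remainder of the market as Intel, illustrates the point:

```python
# Simplified two-vendor model: what the unit vs. revenue share gap implies
# about average selling prices (treats everything that isn't AMD as Intel).
amd_unit_share, amd_revenue_share = 0.206, 0.163

amd_asp_vs_market   = amd_revenue_share / amd_unit_share              # ~0.79x
intel_asp_vs_market = (1 - amd_revenue_share) / (1 - amd_unit_share)  # ~1.05x

print(f"AMD client ASP:   ~{amd_asp_vs_market:.2f}x the market average")
print(f"Intel client ASP: ~{intel_asp_vs_market:.2f}x the market average")
print(f"AMD vs Intel ASP: ~{amd_asp_vs_market / intel_asp_vs_market:.2f}x")  # ~0.75x
```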

Servers: AMD Grabs Another Piece of the Market

AMD's EPYC datacenter processors are undeniably the crown jewel of the company's CPU product lineup. While AMD's market share in desktops and laptops has fluctuated in recent years, the company has been steadily gaining presence in the highly lucrative (and profitable) server market, both in terms of units and in terms of revenue.

In Q1 2024, AMD's unit share of the server CPU market increased to 23.6%, a 0.5% gain sequentially and a massive 5% gain year-over-year, driven by the ramp of platforms based on AMD's 4th Generation EPYC processors. With a 76.4% unit market share, Intel continues to dominate in servers, but it is evident that AMD is getting stronger.

AMD's revenue share of the x86 server market reached 33%, up 5.2% year-over-year and 1.2% from the previous quarter. This signals that the company is gaining traction in expensive machines with advanced CPUs. Keeping in mind that for now Intel does not have direct rivals for AMD's 96-core and 128-core processors, it is no wonder that AMD has done so well growing their share of the server market.

"As we noted during our first quarter earnings call, server CPU sales increased YoY driven by growth in enterprise adoption and expanded cloud deployments," AMD said in a statement.

https://www.anandtech.com/show/21392/amd-hits-record-high-share-in-x86-cpus-in-q1-2024 Fri, 10 May 2024 07:00:00 EDT tag:www.anandtech.com,21392:news
Sabrent Launches Rocket Nano M.2-2242 SSD: Up to 5 GB/sec Anton Shilov

Sabrent tends to make the news when it launches ultra-high-performance SSDs for enthusiast-grade desktops, but this week the company introduced a completely different type of product: a small form-factor M.2-2242 SSD aimed at Lenovo's Legion Go handheld and ultra-thin laptops that don't accommodate M.2-2280 drives. And even though it's not an enthusiast-grade drive, the Rocket Nano still boasts quite decent performance and capacity.

The Sabrent Rocket Nano 2242 (SB-2142) drive is based on the Phison E27T platform, a PCIe 4.0 x4 controller that is designed for mainstream DRAM-less SSDs, and in the case of the Rocket Nano, is paired with 3D TLC memory. The SSD is available in a single 1TB configuration, and is rated for read speeds up to 5 GB/s. Interestingly, the Phison E27T controller itself is rated for read speeds up to 7 GB/s, so it appears that the petite Rocket Nano isn't making full use of the controller's performance.

Sabrent positions its Rocket Nano 2242 SSD as a drive for upgrading Lenovo's Legion Go portable game console, select Lenovo ThinkPad laptops, and other PCs with M.2-2242 slots that can't accommodate larger 2280 drives. Keeping in mind that most devices shipping with M.2-2242 SSDs come with pretty slow stock drives, Sabrent's solution seems to be a viable product for such upgrades. All the while, Sabrent's Rocket Nano 2242 will also work in systems with PCIe 3.0 x4 M.2 slots, so the market for these drives is pretty wide.

Sabrent's Rocket Nano 2242 1 TB (SB-2142-1TB) SSD has a recommended price of $99.99, which is more or less in line with other 1 TB drives in the same form factor offering comparable performance. The SSD is currently available at Amazon for $101.

Sources: Tom's Hardware, Sabrent

https://www.anandtech.com/show/21391/sabrent-launches-rocket-nano-m2-2242-ssd-up-to-5-gbs Thu, 09 May 2024 08:00:00 EDT tag:www.anandtech.com,21391:news
Micron Ships Crucial-Branded LPCAMM2 Memory Modules: 64GB of LPDDR5X For $330 Anton Shilov

As LPCAMM2 adoption begins, the first retail memory modules are finally starting to hit the market, courtesy of Micron. The memory manufacturer has begun selling their LPDDR5X-based LPCAMM2 memory modules under their in-house Crucial brand, making them available on the latter's storefront. Timed to coincide with the release of Lenovo's ThinkPad P1 Gen 7 laptop – the first retail laptop designed to use the memory modules – this marks the de facto start of the eagerly-awaited modular LPDDR5X memory era.

Micron's Low Power Compression Attached Memory Module 2 (LPCAMM2) modules are available in capacities of 32 GB and 64 GB. These are dual-channel modules that feature a 128-bit wide interface, and are based around LPDDR5X memory running at data rates up to 7500 MT/s. This gives a single LPCAMM2 module a peak bandwidth of 120 GB/s. Micron is not disclosing the latencies of its LPCAMM2 memory modules, but it says that the high data transfer rates of LPDDR5X compensate for the extended timings.
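
The 120 GB/s figure follows directly from the module's bus width and data rate, as the quick check below shows:

```python
# Peak bandwidth = bus width x data rate. A dual-channel LPCAMM2 module
# exposes a 128-bit interface running at LPDDR5X-7500 speeds.
bus_width_bits = 128
data_rate_mt_per_s = 7500

bandwidth_gb_per_s = (bus_width_bits / 8) * data_rate_mt_per_s / 1000
print(f"~{bandwidth_gb_per_s:.0f} GB/s per module")  # 120 GB/s
```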

Micron says that LPDDR5X memory offers significantly lower power consumption, with active power per 64-bit bus being 43-58% lower than DDR5 at the same speed, and standby power up to 80% lower. Meanwhile, similar to DDR5 modules, LPCAMM2 modules include a power management IC and voltage regulating circuitry, which provides module manufacturers additional opportunities to reduce power consumption of their products.


Source: Micron LPDDR5X LPCAMM2 Technical Brief

It's worth noting, however, that at least for the first generation of LPCAMM2 modules, system vendors will need to pick between modularity and performance. While soldered-down LPDDR5X memory is available at speeds up to 8533 MT/sec – and with 9600 MT/sec on the horizon – the fastest LPCAMM2 modules planned for this year by both Micron and rival Samsung will be running at 7500 MT/sec. So vendors will have to choose between the flexibility of offering modular LPDDR5X, or the higher bandwidth (and space savings) offered by soldering down their memory.

Micron, for its part, is projecting that 9600 MT/sec LPCAMM2 modules will be available by 2026. Though it's all but certain that faster memory will also be available in the same timeframe.

Micron's Crucial LPDDR5X 32 GB module costs $174.99, whereas a 64 GB module costs $329.99.

https://www.anandtech.com/show/21390/micron-ships-crucialbranded-lpcamm2-memory-modules Wed, 08 May 2024 18:30:00 EDT tag:www.anandtech.com,21390:news
Intel Issues Official Statement Regarding 14th and 13th Gen Instability, Recommends Intel Default Settings Gavin Bonshor

Further to our last piece, in which we detailed Intel's guidance to motherboard vendors to follow stock power settings for Intel's 14th and 13th Gen Core series processors, Intel has now issued a follow-up statement. Over the last week or so, motherboard vendors quickly released firmware updates with a new profile called 'Intel Baseline', which they assumed would address the instability issues.

As it turns out, Intel doesn't accept this, as these Intel Baseline profiles are technically not the same as Intel's default specifications. The Baseline profiles give the impression that the system is operating at default settings – hence the 'baseline' terminology – but they still leave motherboard vendors free to apply their own interpretations of MCE, or Multi-Core Enhancement.

To clarify things for consumers, Intel has sent us the following statement:

Several motherboard manufacturers have released BIOS profiles labeled ‘Intel Baseline Profile’. However, these BIOS profiles are not the same as the 'Intel Default Settings' recommendations that Intel has recently shared with its partners regarding the instability issues reported on 13th and 14th gen K SKU processors.

These ‘Intel Baseline Profile’ BIOS settings appear to be based on power delivery guidance previously provided by Intel to manufacturers describing the various power delivery options for 13th and 14th Generation K SKU processors based on motherboard capabilities.

Intel is not recommending motherboard manufacturers to use ‘baseline’ power delivery settings on boards capable of higher values.

Intel’s recommended ‘Intel Default Settings’ are a combination of thermal and power delivery features along with a selection of possible power delivery profiles based on motherboard capabilities.

Intel recommends customers to implement the highest power delivery profile compatible with each individual motherboard design as noted in the table below:


Intel's Default Settings

What Intel's statement is effectively saying to consumers is that users shouldn't be using the Baseline power delivery profiles offered by motherboard vendors through a plethora of firmware updates. Instead, Intel is recommending users opt for the Intel Default Settings, which follow what the specific processor is rated for by Intel out of the box to achieve the advertised clock speeds, without users having to worry about the firmware 'over'-optimization that has been widely reported to cause instability.

Not only this, but the Intel Default Settings offer a combination of thermal specifications and power capabilities, including voltage and frequency curve settings, that correspond to the capability of the motherboard used and the power delivery equipped on it. The bottom line is that Intel is recommending that users of 14th and 13th Gen Core series K, KF, and KS SKUs do not opt in to the Baseline profiles offered by motherboard vendors.

Digesting the contrast between the two statements, the key differentiator is that Intel's priority is reducing the current going through the processor, which for both the 14th and 13th Gen Core series processors is a maximum of 400 A, even when using the Extreme profile. We know that motherboard vendors, on their Z790 and Z690 motherboards, opt for an unrestricted power profile – essentially 'unlimited' power and current to maximize performance at the cost of power consumption and heat – which exacerbates problems and can lead to frequent bouts of instability, especially in high-intensity workloads.

Intel is also specifying that the AC Load Line must match the design target of the processor, with a maximum value of 1.1 mOhm, and that the DC Load Line must be equal to the AC Load Line – neither above nor below this recommendation – for maximum stability. Intel also recommends that CEP, eTVB, C-states, TVB, and TVB Voltage Optimizations be active on the Extreme profile to ensure stability and performance are consistent.
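
For readers unfamiliar with load lines, the setting describes how much the delivered voltage is allowed to droop as current rises: Vcc = VID - I x R_loadline. The sketch below is purely illustrative; the requested VID value is hypothetical, and only the 1.1 mOhm and 400 A figures come from Intel's guidance:

```python
# Illustration of how a load line behaves: the regulator lowers delivered
# voltage in proportion to load current, Vcc = VID - I * R_loadline.
# The VID below is hypothetical; the 1.1 mOhm load line and 400 A ceiling are
# the figures from Intel's guidance.
def vcc_under_load(vid_v: float, current_a: float, loadline_mohm: float = 1.1) -> float:
    return vid_v - current_a * loadline_mohm / 1000

VID = 1.40  # hypothetical requested voltage, for illustration only
for amps in (100, 250, 400):
    print(f"{amps:>3} A -> Vcc ~{vcc_under_load(VID, amps):.3f} V")
# With the AC and DC load lines matched, the firmware's voltage prediction and
# the board's actual behavior track one another - the consistency Intel is after.
```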

Given that Intel is essentially recommending users not use what motherboard vendors are offering as a fix, we agree that when motherboards come out of the box, they should operate at 'Default' settings until the user asks otherwise. We understand that motherboard vendors have the desire to showcase what they can do with their wares, features, and firmware, but ultimately there is a real lack of communication between Intel and its partners regarding this issue.

Per Intel's statement, the company recommends customers implement the highest power delivery profile that is compatible with their specific motherboard's specification and design. According to Intel, this isn't open for interpretation, despite what motherboard vendors have offered so far, and we expect there is more to come in this ongoing saga regarding the instability issue.

https://www.anandtech.com/show/21389/intel-issues-official-statement-regarding-14th-and-13th-gen-instability-recommends-intel-default-settings Wed, 08 May 2024 10:05:00 EDT tag:www.anandtech.com,21389:news
ASUS to Unveil First Qualcomm Snapdragon X Elite-Based Laptop On May 20th Anton Shilov

Asus on Tuesday said that it would announce its first 'AI PC' based on Qualcomm's Snapdragon X Elite system-on-chips later this month. The new laptop is set to be introduced at the 'Next Level. AI Incredible' virtual launch event on May 20.

The launch event for Asustek's new Vivobook S 15 will be hosted by Asus and will be joined by representatives of Qualcomm and Microsoft, who will reveal how they collaborated with the PC maker to develop the first notebook based on Qualcomm's Snapdragon X Elite processors. These new SoCs promise to have a significant impact on the PC market in the coming quarters, as they are based on the Arm instruction set architecture and are expected to bring together high performance, on-device AI acceleration, and long battery life.

Qualcomm itself calls systems powered by its Snapdragon processors AI PCs, which is exactly what Asus calls its Vivobook S 15 as well. Meanwhile, the only things we know about the machine for now are that it will be based on Qualcomm's Snapdragon X Elite or Snapdragon X Plus processors with 12 or 10 Oryon CPU cores (originally developed by Nuvia), a high-end Adreno GPU, and a 45 TOPS NPU; that it will come in a metallic chassis; and that it will feature a 15-inch display.

"The launch event, which will feature a collaboration between Microsoft, Qualcomm, and Asus, celebrates the first of the new-era Asus AI PCs, which are set to redefine the very fabric of computing," a statement by Asus reads. "The new laptop will usher in a new era of Asus AI PCs, breaking traditional boundaries and harnessing advanced AI capabilities. With comprehensive support for the latest AI functionality from Asus and Microsoft, it offers personalized AI experiences tailored to individual requirements."

Asus is also scheduled to showcase its Vivobook laptops based on Qualcomm's processors at Computex in June. Actual systems will be available later this year.

https://www.anandtech.com/show/21388/asus-to-unveil-qualcomm-snapdragon-x-elite-based-ai-pc-laptop- Tue, 07 May 2024 15:30:00 EDT tag:www.anandtech.com,21388:news
Upcoming AMD Ryzen AI 9 HX 170 Processor Leaked By ASUS? Gavin Bonshor

In what appears to be a mistake or a jumping of the gun by ASUS, the company has seemingly published a list of specifications for one of its key notebooks that all but alludes to the next generation of AMD's mobile processors. While we saw AMD toy with a new nomenclature for their Phoenix silicon (Ryzen 7040 series), it seems as though AMD is once again changing things around where their processor naming scheme is concerned.

The ASUS listing, which has since been deleted but as of this writing is still available through Google's cache, highlights a model that is already in existence, the VivoBook S 16 OLED (M5606), but lists it with an unknown AMD Ryzen AI 9 HX 170 processor. Which, based on its specifications, is certainly not part of the current Hawk Point (Phoenix/Phoenix 2) platform.


The cache on Google shows the ASUS Vivobook S 16 OLED with a Ryzen AI 9 HX 170 Processor

While it does happen in this industry occasionally, what looks like an accidental leak by ASUS on one of their product pages has unearthed an unknown processor from AMD. This first came to our attention via a post on Twitter by user @harukaze5719. While we don't speculate on rumors, we confirmed this ourselves by digging through Google's cache. Sure enough, as the image above from Google highlights, it lists a newly unannounced model of Ryzen mobile processor. In the product-compare section of the listing for the ASUS Vivobook S 16 OLED (M5606) notebook, the laptop is shown with the AMD Ryzen AI 9 HX 170, which appears to be one of AMD's upcoming Zen 5-based mobile chips, codenamed Strix Point.

So with the seemingly new nomenclature that AMD has gone with, it has a clear focus on AI, or rather Ryzen AI, by including it in the name. The Ryzen AI 9 HX 170 looks set to be a 12C/24T Zen 5 mobile variant, with their Ryzen AI NPU or similar integrated within the chip. Given that Microsoft has defined that only processors with an NPU offering 45 TOPS of performance or more qualify for the 'AI PC' moniker, it's likely the Xilinx-derived (now AMD Xilinx) NPU will meet these requirements, as the listing states the chip has up to 77 TOPS of AI performance available. The HX branding is strikingly similar to AMD's (and Intel's) previous use of HX for their desktop-replacement laptop SKUs, so assuming any of the details of ASUS's error are correct, then this is presumably a very high-end, high-TDP part.


AMD Laptop Roadmap from Zen 2 in 2019 to Zen 5 on track for release in 2024

We've known for some time that AMD plans to release its Zen 5-based Strix Point line-up sometime in 2024. Even with Computex 2024 just over four weeks away, we still don't quite have the full picture of Zen 5's performance and its architectural shift over Zen 4. AMD CEO Dr. Lisa Su has also confirmed that Zen 5 will come with enhanced RDNA graphics within the Strix Point SoC, stating: "Strix combines our next-gen Zen 5 core with enhanced RDNA graphics and an updated Ryzen AI engine to significantly increase the performance, energy efficiency, and AI capabilities of PCs."

While it's entirely possible that AMD is prepared to announce more details about Zen 5 as we lead up to Computex 2024, nothing is confirmed. We do know that AMD CEO Dr. Lisa Su is scheduled to deliver the opening keynote of the show. She unveiled the Zen 4 microarchitecture during AMD's Computex 2022 keynote, and unveiled 3D V-Cache stacking – which we know today as the Ryzen X3D CPUs – back at Computex 2021.

With that in mind, AMD and Dr. Lisa Su love to announce new products and architectures at Computex, so we just have to wait until the beginning of next month. How AMD settles on the nomenclature for its upcoming Zen 5 mobile and desktop processors remains to be seen, but hopefully, all will be revealed soon. Regarding the ASUS Vivobook S 16 OLED (M5606), we don't know any of the other specifications at this time. Still, we expect them to be available once AMD has updated us with information on Zen 5 and Strix Point.

https://www.anandtech.com/show/21386/upcoming-amd-ryzen-ai-9-hx-170-processor-leaked-by-asus- Tue, 07 May 2024 14:25:00 EDT tag:www.anandtech.com,21386:news
Apple Announces M4 SoC: Latest and Greatest Starts on 2024 iPad Pro Ryan Smith Setting things up for what is certainly to be an exciting next few months in the world of CPUs and SoCs, Apple this morning has announced their next-generation M-series chip, the M4. Introduced just over six months after the M3 and the associated 2023 Apple MacBook family, the M4 is going to start its life on a very different track, launching alongside Apple’s newest iPad Pro tablets. With their newest chip, Apple is promising class-leading performance and power efficiency once again, with a particular focus on machine learning/AI performance.

The launch of the M4 comes as Apple’s compute product lines have become a bit bifurcated. On the Mac side of matters, all of the current-generation MacBooks are based on the M3 family of chips. On the other hand, the M3 never came to the iPad family – and seemingly never will. Instead, the most recent iPad Pro, launched in 2022, was an M2-based device, and the newly-launched iPad Air for the mid-range market is also using the M2. As a result, the M3 and M4 exist in their own little worlds, at least for the moment.

Given the rapid turn-around between the M3 and M4, we’ve not come out of Apple’s latest announcement expecting a ton of changes from one generation to the next. And indeed, details on the new M4 chip are somewhat limited out of the gate, especially as Apple publishes fewer details on the hardware in its iPads in general. Coupled with that is a focus on comparing like-for-like hardware – in this case, M4 iPads to M2 iPads – so information is thinner than I’d like. Nonetheless, here’s the AnandTech rundown on what’s new with Apple’s latest M-series SoC.

Apple M-Series (Vanilla) SoCs
SoC                   M4                           M3                            M2
CPU Performance       4-core                       4-core                        4-core (Avalanche)
                                                   16MB Shared L2                16MB Shared L2
CPU Efficiency        6-core                       4-core                        4-core (Blizzard)
                                                   4MB Shared L2                 4MB Shared L2
GPU                   10-Core                      10-Core                       10-Core
                      Same Architecture as M3      New Architecture -            3.6 TFLOPS
                                                   Mesh Shaders & Ray Tracing
Display Controller    2 Displays?                  2 Displays                    2 Displays
Neural Engine         16-Core                      16-Core                       16-Core
                      38 TOPS (INT8)               18 TOPS (INT16)               15.8 TOPS (INT16)
Memory Controller     LPDDR5X-7500                 LPDDR5-6250                   LPDDR5-6250
                      8x 16-bit CH                 8x 16-bit CH                  8x 16-bit CH
                      120GB/sec Total              100GB/sec Total               100GB/sec Total
                      Bandwidth (Unified)          Bandwidth (Unified)           Bandwidth (Unified)
Max Memory Capacity   24GB?                        24GB                          24GB
Encode/Decode         8K                           8K                            8K
                      H.264, H.265, ProRes,        H.264, H.265, ProRes,         H.264, H.265, ProRes,
                      ProRes RAW, AV1 (Decode)     ProRes RAW, AV1 (Decode)      ProRes RAW
USB                   USB4/Thunderbolt 3           USB4/Thunderbolt 3            USB4/Thunderbolt 3
                      ? Ports                      2x Ports                      2x Ports
Transistors           28 Billion                   25 Billion                    20 Billion
Mfc. Process          TSMC N3E                     TSMC N3B                      TSMC N5P

At a high level, the M4 features some kind of new CPU complex (more on that in a second), along with a GPU that seems to be largely lifted from the M3 – which itself was a new GPU architecture. Of particular focus by Apple is the neural engine (NPU), which is still a 16-core design, but now offers 38 TOPS of performance. And memory bandwidth has been increased by 20% as well, helping to keep the more powerful chip fed.

One of the few things we can infer with a high degree of certainty is the manufacturing process being used here. Apple’s description of a “second generation 3nm process” lines up perfectly in timing with TSMC’s second-generation 3nm process, N3E. The enhanced version of their 3nm process node is a bit of a sidegrade to the N3B process used by the M3 series of chips; N3E is not quite as dense as N3B, but according to TSMC it offers slightly better performance and power characteristics. The difference is close enough that architecture plays a much bigger role, but in the race for energy efficiency, Apple will take any edge they can get.

Apple’s position as TSMC’s launch-partner for new process nodes has been well-established over the years, and Apple appears to be the first company out the door launching chips on the N3E process. They will not be the last, however, as virtually all of TSMC’s high-performance customers are expected to adopt N3E over the next year. So Apple’s immediate chip manufacturing advantage, as usual, will only be temporary.

Apple’s early-leader status likely also plays into why we’re seeing the M4 now for iPads – a relatively low volume device at Apple – and not the MacBook lineup. At some point, TSMC’s N3E production capacity will catch up, and then some. I won’t hazard a guess as to what Apple has planned for that lineup at that point, as I can’t really see Apple discontinuing M3 chips so quickly, but it also leaves them in an awkward spot having to sell M3 Macs when the M4 exists.

No die sizes have been quoted for the new chip (or die shots posted), but at 28 billion transistors in total, it’s only a marginally larger transistor count than the M3, indicating that Apple hasn’t thrown an excessive amount of new hardware into the chip. (ed: does anyone remember when 3B transistors was a big deal?)

M4 CPU Architecture: Improved ML Acceleration

Starting on the CPU side of things, we’re facing something of an enigma with Apple’s M4 CPU core design. The combination of Apple’s tight-lipped nature and lack of performance comparisons to the M3 means that we haven’t been provided much information on how the CPU designs compare. So whether the M4 represents a watershed moment for Apple’s CPU designs – a new Monsoon/A11 – or a minor update akin to the Everest CPU cores in the A17 remains to be seen. Certainly we hope for the former, but absent further details, we’ll work with what we do know.

Apple’s brief keynote presentation on the SoC noted that both the performance and efficiency cores implement improved branch prediction, and in the case of the performance cores, a wider decode and execution engine. However, these are the same broad claims that Apple made for the M3, so this is not on its own indicative of a new CPU architecture.

What is unique to Apple’s M4 CPU claims, however, are “next-generation ML accelerators” for both CPU core types. This goes hand-in-hand with Apple’s broader focus on ML/AI performance in the M4, though the company isn’t detailing just what these accelerators entail. With the NPU there to do all of the heavy lifting, the purpose of AI enhancements on the CPU cores is less about total throughput/performance and more about processing light inference workloads mixed in with more general-purpose workloads, without having to spend the time and resources firing up the dedicated NPU.

A grounded guess here would be that Apple has updated their poorly-documented AMX matrix units, which have been a part of the M series of SoCs since the beginning. However recent AMX versions already support common ML number formats like FP16, BF16, and INT8, so if Apple has made changes here, it’s not something simple and straightforward such as adding (more) common formats. At the same time if it is AMX, it’s a bit surprising to see Apple mention it at all, since they are otherwise so secretive about the units.

The other reasonable alternative would be that Apple has made some changes to their SIMD units within their CPUs to add common ML number formats, as these units are more directly accessible by developers. But at the same time, Apple has been pushing developers to use higher-level frameworks to begin with (which is how AMX is accessed), so this could really go either way.

In any case, whatever the CPU cores are that underpin M4, there is one thing that is certain: there are more of them. The full M4 configuration is 4 performance cores paired with 6 efficiency cores, 2 more efficiency cores than found on the M3. Cut-down iPad models get a 3P+6E configuration, while the higher-tier configurations get the full 4P+6E experience – so the performance impact there will likely be tangible.

Everything else held equal, the addition of two more efficiency cores shouldn’t massively improve on CPU performance over the M3’s 4P+4E configuration. But then Apple’s efficiency cores should not be underestimated, as even Apple’s efficiency cores are relatively powerful thanks to their use of out-of-order execution. Especially when fixed workloads can be held on the efficiency cores and not promoted to the performance cores, there’s a lot of room for energy efficiency gains.

Otherwise, Apple hasn’t published any detailed performance graphs for the new SoC/CPU cores, so there’s little in the way of hard numbers to talk about. But the company is claiming that the M4 delivers 50% faster CPU performance than the M2. This presumably is for a multi-threaded workload that can leverage the M4’s CPU core count advantage. Alternatively, in their keynote Apple is also claiming that they can deliver M2 performance at half the power, which, as a combination of process node improvements, architectural improvements, and CPU core count increases, seems like a reasonable claim.

As always, however, we’ll have to see how independent benchmarks pan out.

M4 GPU Architecture: Ray Tracing & Dynamic Caching Return

Compared to the CPU situation on the M4, the GPU situation is much more straightforward. Having just recently introduced a new GPU architecture in the M3 – a core type that Apple doesn’t iterate on as often as the CPU – Apple has all but confirmed that the GPU in the M4 is the same architecture that was found in the M3.

With 10 GPU cores, at a high level the configuration is otherwise identical to what was found on the M3. Whether that means the various blocks and caches are truly identical to the M3 remains to be seen, but Apple isn’t making any claims about the M4’s GPU performance that could be in any way interpreted as it being superior to the M3’s GPU. Indeed, the smaller form factor of the iPad and its more limited cooling capabilities mean that the GPU is going to be thermally constrained under any sustained workload to begin with, especially compared to what the M3 can do in an actively-cooled device like the 14-inch MacBook Pro.

At any rate, this means the M4 comes with all of the major new architectural features introduced with the M3’s GPU: ray tracing, mesh shading, and dynamic caching. Ray tracing needs little introduction at this point, while mesh shading is a significant, next-generation means of geometry processing. Meanwhile, dynamic caching is Apple’s term for their refined memory allocation technique on M-series chips, which avoids over-allocating memory to the GPU from Apple’s otherwise unified memory pool.

GPU rendering aside, the M4 also gets the M3’s updated media engine block, which, coming from the M2, is a relatively big deal for iPad users. Most notably, the M3/M4’s media engine block added support for AV1 video decoding, the next-generation open video codec. And while Apple is more than happy to pay royalties on HEVC/H.265 to ensure it’s available within their ecosystem, the royalty-free AV1 codec is expected to take on a lot of significance and use in the coming years, leaving the iPad Pro in a better position to use the newest codec (or, at least, not have to inefficiently decode AV1 in software).

What is new to the M4 on the display side of matters, however, is a new display engine. The block responsible for compositing images and driving the attached displays on a device, Apple never gives this block a particularly large amount of attention, but when they do make updates to it, it normally comes with some immediate feature improvements.

The key change here seems to be enabling Apple’s new sandwiched “tandem” OLED panel configuration, which is premiering in the iPad Pro. The iPad’s Ultra Retina XDR display places two OLED panels directly on top of each other in order to allow for a display that can cumulatively hit Apple’s brightness target of 1600 nits – something that a single one of their OLED panels is apparently incapable of doing. This in turn requires a display controller that knows how to manipulate the panels, not just driving a mirrored set of displays, but accounting for the performance losses that would stem from having one panel below another.

And while not immediately relevant to the iPad Pro, it will be interesting to see if Apple used this opportunity to increase the total number of displays the M4 can drive, as vanilla M-series SoCs have normally been limited to 2 displays, much to the consternation of MacBook users. The fact that the M4 can drive the tandem OLED panels and an external 6K display on top of that is promising, but we’ll see how this translates to the Mac ecosystem if and when the M4 lands in a Mac.

M4 NPU Architecture: Something New, Something Faster

Arguably Apple’s biggest focus with the M4 SoC is the company’s NPU, otherwise known as their neural engine. The company has been shipping a 16-core design since the M1 (and smaller designs on the A-series chips for years before that), each generation delivering a modest increase in performance. But with the M4 generation, Apple says they are delivering a much bigger jump in performance.

Still a 16-core design, the M4 NPU is rated for 38 TOPS, just over twice that of the 18 TOPS neural engine in the M3. And coincidentally, only a few TOPS more than the neural engine in the A17. So as a baseline claim, Apple is pitching the M4 NPU as being significantly more powerful than what’s come in the M3, never mind the M2 that powers previous iPads – or going even farther back, 60x faster than the A11’s NPU.

Unfortunately, the devil is (once again) in the details here, as Apple isn’t listing the all-important precision information – whether this figure is based on INT16, INT8, or even INT4 precision. As the precision du jour for ML inference right now, INT8 is the most likely option, especially as this is what Apple quoted for the A17 last year. But freely mixing precisions, or even just not disclosing them, is headache-inducing to say the least. And it makes like-for-like specification comparisons difficult.
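
To illustrate why the quoted precision matters, here’s a toy normalization of the three chips’ figures. It assumes throughput roughly doubles each time precision is halved – common NPU behavior, but not something Apple has confirmed for these parts:

```python
# Toy normalization only: assumes throughput roughly doubles when precision is
# halved (typical NPU behavior, but not confirmed by Apple for these chips).
def int8_equivalent_tops(tops: float, quoted_precision_bits: int) -> float:
    """Scale a TOPS figure quoted at some integer precision to an INT8 basis."""
    return tops * (quoted_precision_bits / 8)

print(int8_equivalent_tops(38, 8))     # M4: 38 TOPS, already quoted at INT8
print(int8_equivalent_tops(18, 16))    # M3: ~36 INT8-equivalent TOPS if the scaling holds
print(int8_equivalent_tops(15.8, 16))  # M2: ~31.6 on the same assumption
```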

In any case, even if most of this performance improvement comes from INT8 support versus INT16/FP16 support, the M4 NPU is slated to deliver significant improvements to AI performance, similar to what’s already happened with the A17. And as Apple was one of the first chip vendors to ship a consumer SoC with what we now recognize as an NPU, the company isn’t afraid to beat its chest a bit on the matter, especially comparing it to what is going on in the PC realm. Particularly as Apple’s offering is a complete hardware/software ecosystem, the company has the advantage of being able to mold their software around using their own NPU, rather than waiting for the killer app to be invented for it.

M4 Memory: Adopting Faster LPDDR5X

Last, but certainly not least, the M4 SoC is also getting a notable improvement in its memory capabilities. Given the memory bandwidth figures Apple is quoting for the M4 – 120GB/second – all signs point to them finally adopting LPDDR5X for their new SoC.

The mid-generation update to the LPDDR5 standard, LPDDR5X allows for higher memory clockspeeds than LPDDR5, which topped out at 6400 MT/second. While LPDDR5X is available at speeds up to 8533 MT/second right now (and faster speeds to come), based on Apple’s 120GB/second figure for the M4, this puts the memory clockspeed at roughly LPDDR5X-7500.
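
Working backwards from Apple’s own bandwidth figure gives the same answer, assuming the vanilla M-series’ usual 8x 16-bit memory interface:

```python
# Implied data rate = bandwidth / bus width. Assumes the vanilla M-series'
# usual 8x 16-bit (128-bit) unified memory interface.
bandwidth_bytes_per_s = 120e9
bus_width_bytes = (8 * 16) / 8  # 16 bytes per transfer

data_rate_mt_per_s = bandwidth_bytes_per_s / bus_width_bytes / 1e6
print(f"~{data_rate_mt_per_s:.0f} MT/s")  # ~7500 MT/s, i.e. LPDDR5X-7500
```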

Since the M4 is going into an iPad first, for the moment we don’t have a proper idea of its maximum memory capacity. The M3 could house up to 24GB of memory, and while it’s highly unlikely Apple has regressed here, there’s also no sign of whether they’ve been able to increase it to 32GB, either. In the meantime, the iPad Pros will all come with either 8GB or 16GB of RAM, depending on the specific model.

2024 M4 iPad Pros: Coming Next Week

Wrapping things up, in traditional Apple fashion, prospective tablet buyers will get the chance to see the M4 in action sooner rather than later. The company has already opened up pre-orders for the new iPad Pros, with the first deliveries slated to take place next week, on May 15th.

Apple is offering two sizes of the 2024 iPad Pro: 11 inches and 13 inches. Screen size aside, both sizes get access to the same M4 and memory configurations. The 256GB and 512GB models get a 3P+6E CPU configuration and 8GB of RAM, while the 1TB and 2TB models get a fully-enabled M4 SoC with a 4P+6E CPU configuration and 16GB of RAM. The GPU configuration is identical across all models, with 10 GPU cores.

Pricing starts at $999 for the 256GB 11-inch model and $1299 for the 256GB 13-inch model. Meanwhile a max-configuration 13-inch model with 2TB of storage, Apple’s nano-texture matte display, and cellular capabilities will set buyers back a cool $2599.

]]>
https://www.anandtech.com/show/21387/apple-announces-m4-soc-latest-and-greatest-starts-on-ipad-pro Tue, 07 May 2024 14:00:00 EDT tag:www.anandtech.com,21387:news
VESA Rolls Out DisplayHDR 1.2 Spec: Adding Color Accuracy, Black Crush, & Wide-Color Gamuts For All Ryan Smith VESA this morning is taking the wraps off of the next iteration of its DisplayHDR monitor certification standard, DisplayHDR 1.2. Designed to raise the bar on display quality, the updated DisplayHDR conformance test suite imposes new luminance, color gamut, and color accuracy requirements that extend across the entire spectrum of DisplayHDR tiers – including the entry-level DisplayHDR 400 tier. With vendors able to begin certifying displays for the new standard immediately, the display technology group is aiming to address the advancements in the display technology market over the last several years, while enticing display manufacturers to make use of them to deliver better desktop and laptop displays than before.

Altogether, DisplayHDR 1.2 is easily the biggest update to the standard since it launched in 2017, and in many respects the first significant overhaul to the standard in that time. DisplayHDR 1.2 doesn't add any new tiers to the standard (e.g. 1400); instead, it's all about increasing and/or tightening the specifications at each of its existing tier levels. In short, VESA is raising the bar for displays to reach DisplayHDR compliance, requiring a higher level of performance and testing for more corner cases that trip up lesser displays.

All of these changes are coming, in turn, after over half a decade of technology improvements in the display space. Whereas even the original DisplayHDR 400 requirements represented a modestly premium display in 2017, nowadays even sub-$200 displays can hit those relatively loose requirements as panels and backlighting solutions have improved. And even at the high-end of things, full array local dimming (FALD) displays have gone from hundreds of zones to thousands. All of which has finally pushed VESA’s member companies into allowing higher standards going forward.

]]>
https://www.anandtech.com/show/21385/vesa-rolls-out-displayhdr-12-spec-wide-color-gamuts-for-all Tue, 07 May 2024 12:00:00 EDT tag:www.anandtech.com,21385:news
Samsung Tapes Out Its First 3nm Smartphone SoC, Gets A Boost From Synopsys AI-Enabled Tools Anton Shilov

This week Samsung Electronics and Synopsys announced that Samsung has taped out its first mobile system-on-chip on Samsung Foundry's 3nm gate-all-around (GAA) process technology. The announcement, coming from electronic design automation (EDA) vendor Synopsys, further notes that Samsung used the Synopsys.ai EDA suite to place-and-route the layout and verify the design of the SoC, which in turn enabled higher performance.

Samsung's unnamed high-performance mobile SoC relies on 'flagship' general-purpose CPU and GPU architectures as well as various IP blocks from Synopsys. SoC designers used Synopsys.ai EDA software, including Synopsys DSO.ai to fine-tune the design and maximize yields, as well as the Synopsys Fusion Compiler RTL-to-GDSII solution to achieve higher performance, lower power, and optimized area (PPA).

And while the news that Samsung has developed a high-performance SoC using the Synopsys.ai suite is important, there is another, even more important dimension to this announcement: this means that Samsung has finally taped out an advanced smartphone application processor on its cutting-edge 3nm GAAFET process.

Although Samsung Foundry has been producing chips on its GAA-equipped SF3E (3 nm-class, 'early' node) process for almost two years now, Samsung Electronics has never used this technology for its own system-on-chips for smartphones or other complex devices. To date, SF3E has been used mainly for cryptocurrency mining chips, presumably due to the inevitable early teething and yield issues that come with being the industry's first commercial GAAFET process.

For now, Samsung isn't disclosing which specific process node is being used for the SoC; the official Samsung/Synopsys announcement only notes that it's a GAA process node. Along with their first-generation 3nm-class SF3E, Samsung Foundry has a considerably more sophisticated SF3 manufacturing technology that offers numerous improvements over SF3E, and is due to be used for mass production in the coming quarters. Given the timing of the announcement, the reasonable bet is that they're using SF3.

As for Samsung's tooling partnership with Synopsys, the latter's tools are being credited for delivering some significant performance improvements to the chip's design. In particular, the two firms are crediting those tools for improving the chip's peak clockspeed by 300MHz while cutting down on dynamic power usage by 10%. To accomplish that, Samsung Electronics' SoC developers used design partitioning optimization, multi-source clock tree synthesis (MSCTS), and smart wire optimization to reduce signal interference, along with a simpler hierarchical approach. And by using Synopsys Fusion Compiler, they did all this while being able to skip weeks of 'manual' design work, according to the joint press release.

"Our longstanding collaboration has delivered leading-edge SoC designs," said Kijoon Hong, vice president of SLSI at Samsung Electronics. "This is a remarkable milestone to successfully achieve the highest performance, power and area on the most advanced mobile CPU cores and SoC designs in collaboration with Synopsys. Not only have we demonstrated that AI-driven solutions can help us achieve PPA targets for even the most advanced GAA process technologies, but through our partnership we have established an ultra-high-productivity design system that is consistently delivering impressive results."

]]>
https://www.anandtech.com/show/21383/samsung-tapes-out-its-first-3nm-smartphone-soc-using-synopsys-aienabled-tools Fri, 03 May 2024 16:30:00 EDT tag:www.anandtech.com,21383:news
SK hynix Reports That 2025 HBM Memory Supply Has Nearly Sold Out Anton Shilov

Demand for high-performance processors for AI training is skyrocketing, and consequently so is the demand for the components that go into these processors. So much so that SK hynix this week is very publicly announcing that the company's high-bandwidth memory (HBM) production capacity has already sold out for the rest of 2024, and even most of 2025 has already sold out as well.

SK hynix currently produces various types of HBM memory for customers like Amazon, AMD, Facebook, Google (Broadcom), Intel, Microsoft, and, of course, NVIDIA. The latter is an especially prolific consumer of HBM3 and HBM3E memory for its H100/H200/GH200 accelerators, as NVIDIA is also working to fill what remains an insatiable (and unmet) demand for its accelerators.

As a result, HBM memory orders, which are already placed months in advance, are now backlogging well into 2025 as chip vendors look to secure supplies of the memory stacks critical to their success.

This has made SK hynix the second HBM memory vendor in recent months to announce that they've sold out into 2025, following an earlier announcement from Micron regarding its HBM3E production. But of the two announcements, SK hynix's is arguably the more significant, as the South Korean firm's HBM production capacity is far greater than Micron's. So while things were merely "interesting" with the smallest of the Big Three memory manufacturers being sold out into 2025, things are taking a more concerning (and constrained) outlook now that SK hynix is as well.

SK hynix currently controls roughly 46% - 49% of the HBM market, and its share is not expected to drop significantly in 2025, according to market tracking firm TrendForce. By contrast, Micron's share of the HBM memory market is between 4% and 6%. Since both companies' HBM supply is sold out through most of 2025, we're likely looking at a scenario where over 50% of the industry's total HBM3/HBM3E supply for the coming quarters is already sold out.

This leaves Samsung as the only member of the group not to comment on HBM demand so far. Though with memory being a highly fungible commodity product, it would be surprising if Samsung wasn't facing similar demand. And, ultimately, all of this is pointing towards the industry entering an HBM3 memory shortage.

Separately, SK hynix said that it is sampling 12-Hi 36GB HBM3E stacks with customers and will begin volume shipments in the third quarter.

]]>
https://www.anandtech.com/show/21382/sk-hynixs-hbm-memory-supply-sold-out-through-2025 Thu, 02 May 2024 11:00:00 EDT tag:www.anandtech.com,21382:news
The XPG Core Reactor II VE 850W PSU Review: Our First ATX 3.1 Power Supply E. Fylladitakis Just over 18 months ago, Intel launched their significantly revised ATX v3.0 power supply standard, and with it, the 600 Watt-capable 12VHPWR cable to power video cards and other high-drain add-in cards. The release of the standard came with a lot of fanfare and excitement – the industry was preparing for a future where even flagship video cards could go back to being powered by a single cable – but shortly after, things became exciting again for all the wrong reasons.

The new 12VHPWR connector proved to be less forgiving of poor connections between cables and devices than envisioned. With hundreds of watts flowing through the relatively small pins – and critically, insufficient means to detect a poor connection – a bad connection could result in a thermal runaway scenario, i.e. a melted connector. And while the issue was an edge case overall, affecting a fraction of a fraction of systems, even a fraction is too much when you're starting from millions of PCs, never mind the unhappy customers with broken video cards.
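Some quick, hedged arithmetic helps frame why the connector has so little margin for a bad mate. Assuming the commonly cited contact layout – six +12V pins carrying the load, alongside six grounds and four sideband pins – the per-pin current at the full 600 W rating is already high:

```python
# Rough per-pin current estimate for a 12VHPWR connector at its 600 W rating.
# The six-+12V-pin layout is the commonly cited configuration; treat these
# numbers as illustrative rather than a verified vendor specification.

power_w = 600
voltage_v = 12
num_12v_pins = 6

total_current_a = power_w / voltage_v        # 50 A total
per_pin_a = total_current_a / num_12v_pins   # ~8.3 A per pin when evenly shared
print(f"{per_pin_a:.1f} A per pin")

# A poorly seated pin stops carrying its share, so the remaining pins run
# hotter than this nominal figure -- which is how a bad mate turns thermal.
```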

So the PC industry is taking a mulligan on the matter, quickly revising the ATX specification and the 12VHPWR connector to fix their design flaws. In their place we have the new ATX v3.1 power supply specification, as well as the associated 12V-2×6 connector, the combination of which is intended to serve the same goals, but with far less of a chance of errant electricity causing damage.

Ultimately, the combination of the two new standards has required backwards-compatible changes on both the device (video card) side and the power supply side. And as a result, power supply manufacturers are now in the process of releasing ATX v3.1-compliant PSUs that implement these revisions. For PSU vendors the changes are relatively trivial overall, but they are nonetheless important changes that, for multiple reasons, vendors are making sure to promote.

Getting down to business, the first ATX v3.1 power supply to enter our testing labs comes from ADATA sub-brand XPG, a prolific player in the PSU market. XPG recently expanded its product lineup with the introduction of the Core Reactor II VE series, the company's first foray into ATX 3.1-compliant PSUs. As a direct successor of the Core Reactor II series, the Core Reactor II VE is a relatively simple 80Plus Gold unit that distinguishes itself with its straightforward design, aimed at providing steady performance without the high expense.

In today’s review, we are taking a look at the 850W version of the Core Reactor II VE series, which is, for the time being, the most powerful ATX 3.1 unit XPG offers.

]]>
https://www.anandtech.com/show/21367/the-xpg-core-reactor-ii-ve-850w-atx-31-psu-review Thu, 02 May 2024 10:00:00 EDT tag:www.anandtech.com,21367:news
AMD Zen 5 Status Report: EPYC "Turin" Is Sampling, Silicon Looking Great Anton Shilov

As part of AMD's Q1'2024 earnings announcement this week, the company is offering a brief status update on some of their future products set to launch later this year. Most important among these is an update on their Zen 5 CPU architecture, which is expected to launch for both client and server products later this year.

Highlighting their progress so far, AMD is confirming that EPYC "Turin" processors have begun sampling, and that these early runs of AMD's next-gen datacenter chips are meeting the company's expectations.

"Looking ahead, we are very excited about our next-gen Turin family of EPYC processors featuring our Zen 5 core," said Lisa Su, chief executive officer of AMD, at the conference call with analysts and investors (via SeekingAlpha). "We are widely sampling Turin, and the silicon is looking great. In the cloud, the significant performance and efficiency increases of Turin position us well to capture an even larger share of both first and third-party workloads."

Overall, it looks like AMD is on-track to solidify its position, and perhaps even increase its datacenter market share with its EPYC Turin processors. According to AMD, the company's server partners are developing a 30% larger number of designs for Turin than they did Genoa. This underscores how AMD's partners are preparing for even more market share growth on the back of AMD's ongoing success, not to mention the improved performance and power efficiency that the Zen 5 architecture should offer.

"In addition, there are 30% more Turin platforms in development from our server partners, compared to 4th Generation EPYC platforms, increasing our enterprise and with new solutions optimized for additional workloads," Su said. "Turin remains on track to launch later this year."

AMD's EPYC 'Turin' processors will be drop-in compatible with existing SP5 platforms (i.e., will come in an LGA 6096 package), which will facilitate its faster ramp and adoption of the platform both by cloud giants and server makers. In addition, AMD's next-generation EPYC CPUs are expected to feature more than 96 cores and a more versatile memory subsystem.

]]>
https://www.anandtech.com/show/21380/amd-zen-5based-epyc-turin-is-sampling-silicon-looking-great Wed, 01 May 2024 18:00:00 EDT tag:www.anandtech.com,21380:news
PCI-SIG Completes CopprLink Cabling Standard: PCIe 5.0 & 6.0 Get Wired Ryan Smith

The PCI-SIG sends word over this morning that the special interest group has completed their development efforts on the group’s new PCI-Express cabling standard, CopprLink. Designed to go hand-in-hand with PCIe 5.0 and PCIe 6.0, CopprLink defines both internal and external copper cabling for the latest PCIe standards, giving system vendors and assemblers the ability to use wires to connect devices within a system, or even whole systems.

The CopprLink standard is, in practice, a pair of standards sharing the same brand-name under the PCI-SIG umbrella. The internal standard, “CopprLink Internal Cable”, is designed to allow for a new generation of PCIe cables up to 1 meter in length that are capable of sustaining PCIe 5.0 and PCIe 6.0 signaling. Internal CopprLink effectively supplants a host of older internal PCIe cabling standards (including the abandoned OCuLink), which were originally designed for earlier generations of PCIe signaling.

At a high level, internal CopprLink is intended to provide not only host-to-device connectivity, but even more transparent backhaul applications such as motherboard-to-backplane connectivity, and unique applications such as chip-to-chip PCIe connections. In other words, CopprLink allows for cabled PCIe to be used in almost any situation where a PCIe connection needs to be established within a system. Strictly speaking, CopprLink doesn't replace the PCIe CEM connector in any way – but the relatively thick copper cables have less signal loss than PCB traces, making a cabled standard extremely useful even for internal connections. PCI-SIG sees CopprLink cables taking hold in the storage and data center markets, product categories where we already see PCIe cabling in use today.

The companion connector standard for internal CopprLink is the SNIA-developed SFF-TA-1016 connector, which bears more than a passing resemblance to the widely-used SFF-8654 (SlimSAS) connector. SFF-TA-1016 is available in x4, x8, and x16 configurations, and while the PCI-SIG doesn't go so far as to define widths within their own standard, the connectors available paint a clear picture of the options at hand. Internal CopprLink x4 should be especially popular with storage, as we already see today.


Top: SFF-TA-1016 Family of Connectors (Figure 4-1, Image Courtesy SNIA)
Bottom: Sample SFF-TA-1016 x4 Contact Plug and Receptacle (Figure 4-2, Image Courtesy SNIA)

Meanwhile, the group has also developed an external cabling standard to cover those same PCIe 5.0/6.0 data rates. External CopprLink cables can go up to 2 meters, allowing for board-to-board connections within a rack, and even short rack-to-rack PCIe connections.

The external version of CopprLink also uses a more robust connector, relying on SNIA's SFF-TA-1032 standard. Like the internal SFF-TA-1016, this is available in x4, x8, and x16 configurations, using 44, 68, and 120 positions/pins respectively. The PCI-SIG is expecting this version of the standard to be primarily adopted by the AI/machine learning markets, which need to move heaps of data between systems. Notably, however, they don't really expect the storage market to make use of this spec – instead, that market will be served by an updated version of the classic PCI Express External Cabling standard.


SFF-TA-1032 x16 Plug and Connector (Figure 4-1, Image Courtesy SNIA)

Finally, a bit farther out on the group's roadmap, the PCI-SIG is also reiterating that it's working on a new optical cabling standard as well. The workgroup for this project was established in 2023, so the project is still in its early days. Notably, the forthcoming optical standard is intended to be optical technology-agnostic, allowing for PCIe to be paired with a variety of optical technologies.

In the meantime, with the internal and external CopprLink standards completed, the PCI-SIG is hoping to quickly move this cabling into production. Since these are solely cabling standards – and thus don’t require intensive development efforts such as new controllers or the like – the group is hoping that their members will have something to show off in time for the group’s developer conference this summer, or the Flash Memory Summit in August. After which, hardware vendors should be able to deploy the new cables relatively quickly.

]]>
https://www.anandtech.com/show/21379/pcisig-completes-copprlink-cabling-standard-pcie-50-60-get-wired Wed, 01 May 2024 10:00:00 EDT tag:www.anandtech.com,21379:news
Samsung Foundry Update: 2nm Unveil in June, Second-Gen SF3 3nm Hits Production This Year Anton Shilov

As part of Samsung's Q1 earnings announcement, the company has outlined some of its foundry unit's key plans for the rest of the year. The company has confirmed that it remains on track to meet its goal of starting mass production of chips on its SF3 (3 nm-class, 2nd generation) technology in the second half of the year. Meanwhile, in June, Samsung Foundry will formally unveil its SF2 (2 nm-class) process technology, which will offer a mix of performance and efficiency enhancements. Finally, the company is preparing a variation of its 4 nm-class technology for integration into stacked 3D designs.

SF2 To Be Unveiled In June

Samsung plans to disclose key details about its SF2 fabrication technology at the VLSI Symposium 2024 on June 19. This will be the company's second major process node based upon gate-all-around (GAA) multi-bridge channel field-effect transistors (MBCFET). Improving over its predecessor, SF2 will feature a 'unique epitaxial and integration process,' which will give the process node higher performance and lower leakage than traditional FinFET-based nodes (though Samsung isn't disclosing the specific node they're comparing it to).

Samsung says that SF2 increases performance of narrow transistors by 29% for N-type and 46% for P-type, and wide transistors by 11% and 23% respectively. Moreover, it reduces transistor global variation by 26% compared to FinFET technology, and cuts product leakage by approximately 50%. This process also sets the stage for future advancements in technology through enhanced design technology co-optimization (DTCO) collaboration with its customers.

One thing that Samsung has not mentioned in context of SF2 is backside power delivery, so at least for the moment, there is no indication that Samsung will be adopting this next-gen power routing feature for SF2.

Samsung says that the design infrastructure for SF2 – the PDK, EDA tools, and licensed IP – will be finalized in the second quarter of 2024. Once this happens, Samsung's chip development partners will be able to begin designing products for this production node. Meanwhile, Samsung is already working with Arm to co-optimize Arm's Cortex cores for the SF2 process.

SF3: On Track for 2H 2024

As the first fab to introduce a GAAFET-based node, Samsung has been on the cutting edge of chip construction. At the same time, however, that has also meant that they're the first fab to encounter and solve the inevitable teething issues that come with such a major transistor design change. Consequently, while Samsung's first-generation SF3E process technology has been in production for a little less than two years now, the only publicly-disclosed chips made on the process so far have been relatively small cryptocurrency mining chips – exactly the kind of pipecleaner parts that do well on a new process node.

But with that experience in hand, Samsung is preparing to move on to making bigger and better chips with GAAFETs. As part of their earnings announcements, the company has confirmed that their updated SF3 node, which was introduced last year, remains on schedule to enter production in the second half of 2024.

A more mature product from the get-go, SF3 is being prepared for use in building larger processors, including datacenter products. Compared to its direct predecessor, SF4, SF3 promises a 22% performance boost at the same power and transistor count, or 34% lower power at the same frequency and complexity, as well as a 21% logic area reduction. In general, Samsung is pinning a lot of hopes on this technology, as it's this generation of their 3nm-class technology that is poised to compete against TSMC's N3B and N3E nodes.

SF4: Ready for 3D Stacking

Finally, Samsung is also preparing a variant of their final FinFET technology node, SF4, for use in 3D chiplet stacking. As transistor density improvements have continued to slow, 3D chip stacking has emerged as a way to keep boosting overall chip performance, especially with modern, multi-tile processor designs.

Details on this node are limited, but it would seem that Samsung is making some changes to account/optimize for using SF4-fabbed chiplets in a 3D-stacked design, where chips need to be able to communicate both up and down. According to the company's Q1 financial report, Samsung expects to complete their preparatory work on the chip-stacking SF4 variant during the current quarter (Q2).

Sources: Samsung, Samsung

]]>
https://www.anandtech.com/show/21377/samsung-foundry-update-2nm-unveil-in-june-2nd-gen-3nm-hits-production-this-year Wed, 01 May 2024 08:00:00 EDT tag:www.anandtech.com,21377:news
TSMC Readies 8x Reticle Super Carrier Interposer For Next-Gen Chips Twice as Large As Today's Anton Shilov

TSMC is no stranger to building big chips. Besides the ~800mm² reticle limit of their normal logic processes, the company already produces even larger chips by fitting multiple dies onto a single silicon interposer, using their chip-on-wafer-on-substrate (CoWoS) technology. But even with current-gen CoWoS allowing for interposers up to 3.3x TSMC's reticle limit, TSMC plans to build bigger still in response to projected demand from the HPC and AI industries. To that end, as part of the company's North American Technology Symposium last week, TSMC announced that they are developing the means of building super-sized interposers that can reach over 8x the reticle limit.

TSMC's current-generation CoWoS technology allows for building interposers up to 2831 mm², and the company is already seeing customers come in with designs that run up to those limits. Both AMD's Instinct MI300X accelerator and NVIDIA's forthcoming B200 accelerator are prime examples of this, as they pack huge logic chiplets (3D-stacked in the case of AMD's product) and eight HBM3/HBM3E memory stacks in total. The total space afforded by the interposer gives these processors formidable performance, but chip developers want to go more powerful still. And to get there as quickly as possible, they'll need to go bigger as well, in order to incorporate more logic chiplets and more memory stacks.

For their next-generation CoWoS product that's set to launch in 2026, TSMC plans to release CoWoS_L, which will offer a maximum interposer size of approximately 5.5 times that of a photomask, totaling 4719 mm². This next-generation package will support up to 12 HBM memory stacks and will necessitate a larger substrate measuring 100×100 mm. Coupled with process node improvements over the next few years, TSMC expects chips based on this generation of CoWoS to offer better than 3.5x the compute performance of current-generation CoWoS chips.

Farther down the line, in 2027 TSMC intends to introduce a version of CoWoS that allows for interposers up to 8 times larger than the reticle limit. This will offer an ample 6,864 mm² of space for chiplets on a substrate that measures 120×120 mm. TSMC envisions leveraging this technology for designs that integrate four stacked systems-on-integrated chips (SoICs), 12 HBM4 memory stacks, and extra I/O dies. TSMC roughly projects that this will enable chip designers to once again double performance, producing chips that surpass 7x the performance of current-generation chips.
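For a sense of scale, these interposer areas line up neatly with the standard 26 mm × 33 mm (858 mm²) lithography field – an assumed baseline here, since TSMC itself only quotes a ~800 mm² reticle limit. A quick check:

```python
# Sanity check of TSMC's "times the reticle" claims against the standard
# 26 mm x 33 mm exposure field. The 858 mm^2 baseline is an assumption;
# TSMC only quotes its reticle limit as roughly 800 mm^2.

reticle_mm2 = 26 * 33  # 858 mm^2

generations = [
    ("Current CoWoS", 2831),
    ("CoWoS_L (2026)", 4719),
    ("CoWoS (2027)", 6864),
]

for name, area_mm2 in generations:
    print(f"{name}: {area_mm2} mm^2 ~= {area_mm2 / reticle_mm2:.1f}x reticle")
# Prints ~3.3x, ~5.5x, and ~8.0x respectively.
```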

Of course, building such large chips will come with its own set of consequences, above and beyond what TSMC will have to deal with. Enabling chip designers to build such grand processors is going to impact system design, as well as how datacenters accommodate these systems. TSMC's 100×100mm substrate will be riding right up to the limit of the OAM 2.0 form factor, whose modules measure 102×165mm to begin with. And if that generation of CoWoS doesn't break the current OAM form factor, then 120×120mm chips certainly will. And, of course, all of that extra silicon requires additional power and cooling, which is why we're already seeing hardware vendors prepare for how to cool multi-kilowatt chips by investigating liquid and immersion cooling.

Ultimately, even if Moore's Law has slowed to a crawl in terms of delivering transistor density improvements, CoWoS offers an out for producing chips with an ever-larger number of transistors. So with TSMC set to offer interposers and substrates with over twice the area of today's solutions, big chips intended for HPC systems are only going to continue to grow in both performance and size.

Related Reading

]]>
https://www.anandtech.com/show/21375/tsmc-readies-8x-reticle-size-super-carrier-interposer Tue, 30 Apr 2024 09:00:00 EDT tag:www.anandtech.com,21375:news
In Light of Stability Concerns, Intel Issues Request to Motherboard Vendors to Actually Follow Stock Power Settings Gavin Bonshor & Ryan Smith Across the internet, from online forums such as Reddit to various other tech media outlets, there's a lot of furor around reports of Intel's top-end 14th and 13th Gen K series of processors running into stability issues. As Intel's flagship chips, these parts come aggressively clocked in order to maximize performance through various implementations of boost and turbo, leaving them running close to their limits out of the box. But with high-end motherboards further goosing these chips to wring even more performance out of them, it would seem that the Intel desktop ecosystem has finally reached a tipping point where all of these efforts to boost performance have pushed these flagship chips into unstable conditions. To that end, Intel has released new guidance to its consumer motherboard partners, strongly encouraging them to actually implement Intel's stock power settings, and to use those baseline settings as their out-of-the-box default.

While the underlying conditions are nothing new – we've published stories time and time again about motherboard features such as multi-core enhancement (MCE) and raised power consumption limits that seek to maximize how hard and how long systems are able to turbo boost – the issue has finally come to a head in the last couple of months thanks to accumulating reports of system instability with Intel's 13900K and 14900K processors. These instability problems are eventually solved by either tamping down on these motherboard performance-boosting features – bringing the chips back down to something closer to Intel's official operating parameters – or downclocking the chips entirely.

Intel first began publicly investigating the matter on the 27th of February, when Intel's Communications Manager, Thomas Hannaford, posted a thread on Intel's Community Product Support Forums titled "Regarding Reports of 13th/14th Gen Unlocked Desktop Users Experiencing Stability Issues". In this thread, Hannaford said, "Intel is aware of reports regarding Intel Core 13th and 14th Gen unlocked desktop processors experiencing issues with certain workloads. We're engaged with our partners and are conducting analysis of the reported issues. If you are experiencing these issues, please reach out to Intel Customer Support for further assistance in the interim."

Since that post went up, additional reports have been circulating about instability issues across various online forums and message boards. The underlying culprit has been theorized to be motherboards implementing an array of strategies to improve chip performance, including aggressive multi-core enhancement settings, "unlimited" PL2 turbo, and reduced load line calibration settings. At no point do any of these settings overclock a CPU and push it to a higher clockspeed than it's validated for, but these settings do everything possible to keep a chip at the highest clockspeed possible at all times – and in the process seem to have gone a step too far.


From "Why Intel Processors Draw More Power Than Expected: TDP and Turbo Explained"

We wrote a piece initially covering multi-core enhancement in 2012, detailing how motherboard manufacturers try to stay competitive with each other and leverage any headroom within the silicon to output the highest performance levels. And more recently, we've talked about how desktop systems with Intel chips are now regularly exceeding their rated TDPs – sometimes by extreme amounts – as motherboard vendors continue to push them to run as hard as possible for the best performance.

But things have changed since 2012. At the time, this wasn't so much of an issue, as overclocking was actually very favorable to increasing the performance of processors. But in 2024 with chips such as the Intel Core i9-14900K, we have CPUs shipping with a maximum turbo clock speed of 6.0 GHz and a peak power consumption of over 400 Watts, figures that were only a pipe dream a decade ago.

Jumping to the present time, over the weekend Intel released a statement about the matter to its partners, outlining their investigation so far and their suggestions/requests to their partners. That statement was quickly leaked to the press, with Igorslab.de and others breaking the news. Since then, we've been able to confirm through official sources that this is a real and accurate statement from Intel.

This statement reads as follows:

Intel® has observed that this issue may be related to out of specification operating conditions resulting in sustained high voltage and frequency during periods of elevated heat.

Analysis of affected processors shows some parts experience shifts in minimum operating voltages which may be related to operation outside of Intel® specified operating conditions.

While the root cause has not yet been identified, Intel® has observed the majority of reports of this issue are from users with unlocked/overclock capable motherboards.

Intel® has observed 600/700 Series chipset boards often set BIOS defaults to disable thermal and power delivery safeguards designed to limit processor exposure to sustained periods of high voltage and frequency, for example:

– Disabling Current Excursion Protection (CEP)
– Enabling the IccMax Unlimited bit
– Disabling Thermal Velocity Boost (TVB) and/or Enhanced Thermal Velocity Boost (eTVB)
– Additional settings which may increase the risk of system instability:
– Disabling C-states
– Using Windows Ultimate Performance mode
– Increasing PL1 and PL2 beyond Intel® recommended limits

Intel® requests system and motherboard manufacturers to provide end users with a default BIOS profile that matches Intel® recommended settings.

Intel® strongly recommends customer's default BIOS settings should ensure operation within Intel's recommended settings.

In addition, Intel® strongly recommends motherboard manufacturers to implement warnings for end users alerting them to any unlocked or overclocking feature usage.

Intel® is continuing to actively investigate this issue to determine the root cause and will provide additional updates as relevant information becomes available.

Intel® will be publishing a public statement regarding issue status and Intel® recommended BIOS setting recommendations targeted for May 2024.

One subtle undertone in this statement is that everything seems to revolve around motherboards, specifically their default settings. Looking to clarify matters, Intel has told me today that they aren't blaming motherboard vendors in the above statement to partners and OEMs. However, having had experience with multiple Z790 motherboards with Intel's Core i9-14900K, we know each vendor has a different idea of what the word 'default' means – and that none of them involve strictly sticking to Intel's own suggested values. These profiles within the firmware unlock power constraints to a very high level and go above and beyond what Intel recommends. One example is ICCMAX, which Intel recommends at 400A or below, whereas multiple Z790 motherboards will greatly exceed this value out of the box.

Impressing buyers and outperforming the competitors has become integral to every motherboard manufacturer's strategy, thanks to the highly competitive and commoditized nature of the motherboard market. As a result, the user experience is sometimes relegated to a low-priority goal. And while this focus on performance and overclocking features plays well in reviews and to overclockers and tinkerers looking to push their CPU to its very limit, as we are now seeing, it seems to have come at the cost of out-of-the-box stability, with overly-aggressive settings leading to systems being unstable even at default settings.

Especially concerning here is what all of this means for a CPU's VCore voltage, which is another aspect of system performance that motherboard vendors have complete control over. With the need to quickly modulate the VCore voltage to keep up with the load on the processor – to keep it high enough for stability, but not allow it to spike so high as to risk damage – it's a careful balancing act for motherboard vendors even when they're not trying to squeeze out every last bit of performance from a CPU. And when they are trying to squeeze out every last bit, then VCore is something to minimize in order to improve how long and hard a CPU can turbo, pushing a chip further towards potential instability.

Pivoting to some real-world data highlighting these potential issues: when we reviewed the Intel Core i9-14900K, Intel's flagship Raptor Lake Refresh (RPL-R) processor, we tested with the default settings on both of our Z790 motherboards. From that data, we can see the MSI MEG Z790 Ace Max was drawing up to 415 W when using Linx to place a very heavy workload on the chip. We also ran the same chip and workload on ASRock's Z790 Taichi Carrara to provide additional data points, where we found that its power consumption maxed out at 375 W, around 10% lower than the MSI board.

In both cases, this is much higher than Intel's official PL2 limit for the Intel Core i9-14900K, which says that the chip should top out at 253 W for moderate periods of load. But, as we've seen time and time again, the official TDP ratings from Intel do not mean much to high-end motherboards, which almost universally default to higher settings. Motherboard vendors want to be competitive, and as such, higher default power settings allow vendors to claim that they deliver better performance than their rivals.
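For readers unfamiliar with how those limits are supposed to work, here's a simplified sketch of Intel's PL1/PL2/Tau behavior, using the published 125 W (PL1), 253 W (PL2), and 56 second (Tau) values for the i9-14900K. The real mechanism is a running average tracked in firmware; the workload below is made up purely for illustration, and motherboards that raise the limits to "unlimited" effectively bypass the fall-back entirely.

```python
# Simplified sketch of Intel's PL1/PL2/Tau power limiting, using the published
# limits for the Core i9-14900K. The real implementation is an exponentially
# weighted moving average tracked in firmware; the 2-minute "workload" here is
# purely illustrative.
import math

PL1, PL2, TAU = 125.0, 253.0, 56.0  # watts, watts, seconds

def allowed_power(avg_power: float) -> float:
    """Boost to PL2 while the running average stays under PL1, else drop to PL1."""
    return PL2 if avg_power < PL1 else PL1

avg, dt = 0.0, 1.0
for t in range(120):                 # two minutes of sustained heavy load
    draw = allowed_power(avg)        # the chip pulls as much as the limits allow
    alpha = 1 - math.exp(-dt / TAU)  # EWMA weight for a 1-second step
    avg += alpha * (draw - avg)
    if t in (0, 30, 60, 119):
        print(f"t={t:3d}s  draw={draw:.0f} W  running avg={avg:.0f} W")
# After roughly 40 seconds the running average crosses PL1 and the chip falls
# back from 253 W to 125 W -- unless the motherboard raises the limits.
```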

As further evidence of this, check out some of our recent motherboard reviews. I have assembled a small list of links to reviews where we've seen excessive CPU voltage or power consumption (or, more often, both) when using each motherboard's default settings. In each of the reviews below we see much higher power levels than Intel's official TDP values – something we've come to expect over the last several years. Still, some of these settings can be too aggressive, especially with an already close-to-the-limit Core i9-14900K.

We have been communicating with Intel for most of the day to get official answers to what's happening. To that end, we have received an official statement from Intel, which reads as follows:

The recently publicized communications between Intel and its motherboard partners regarding motherboard settings and Intel Core 13th & 14th Gen K-SKU processors is intended to provide guidance on Intel recommended default settings. We are continuing to investigate with our partners the recent user reports of instability in certain workloads on these processors.

This BIOS default settings guidance is meant to improve stability for currently installed processors while Intel continues investigating root cause, not ascribe blame to Intel's partners:

Intel Raptor Lake (13th Gen) / Raptor Lake Refresh (14th Gen) K-Series SKUs – Official Recommendations (Parameter/Feature in BIOS/Software Settings: Value/Setting)

– Current Excursion Protection (CEP): Enable
– Enhanced Thermal Velocity Boost (eTVB): Enable
– Thermal Velocity Boost (TVB): Enable
– TVB Voltage Optimizations: Enable
– ICCMAX Unlimited Bit: Disable
– TjMAX Offset: 0
– C-states: Enable
– ICCMAX: Varies, Never >400A*
– ICCMAX_App: Varies*
– Power Limits (PL's): Varies*

* Please see the 13th Generation Intel® Core™ and Intel® Core™ 14th Generation Processors datasheet for more information

Intel continues to work with its partners to develop appropriate mitigations going forward.

Intel's official statement to us, which is likely their standpoint for the general public, highlights a list of recommended BIOS and software settings, such as those found in Intel's Extreme Tuning Utility (XTU). There's no mention of specific motherboard vendors or models, but the above settings should alleviate crashing and instability issues by preventing motherboards from pushing CPUs too hard.

It remains to be seen just how motherboard vendors will opt to address the issue, as none of the motherboard vendors we contacted today had anything official to say about the matter. With that said, however, a few motherboard vendors have recently released a wave of new BIOSes, adding a new profile called "Intel Baseline" or similar. In all cases, these new BIOSes seem to do exactly what it says on the label, configuring the system to run at Intel's actual, suggested stock settings, and thus ensuring the stability of the system in exchange for reduced performance.

With that said, these new Intel baseline settings are still not being used as the default settings for high-end motherboards. So the out-of-the-box user experience is still for MCE and other features to be enabled, pushing these processors to their performance limit. Users who actually want baseline performance – and the guaranteed stability it comes with – will still need to go into the BIOS and explicitly select this profile.

Ultimately, given the spec-defying state of high-end motherboards over the last decade, this is a badly-needed improvement. But still, as Intel has yet to wrap up their root cause investigation and issue formal guidance to consumers, we're not quite to the end of this saga just yet. There are still some developments to come, as we expect to hear more in May.

]]>
https://www.anandtech.com/show/21374/intel-issues-request-to-mobo-vendors-to-use-stock-power-settings-for-stability Mon, 29 Apr 2024 21:30:00 EDT tag:www.anandtech.com,21374:news
The AlphaCool Core Ocean T38 360mm AIO CPU Cooler Review: Loud and Proud E. Fylladitakis While the all-in-one CPU cooler industry is dominated, at least in mindshare, by flagship coolers from the industry's biggest brands, the market segment overall has grown over the years to cover a much larger gamut of users. From flagship coolers to sub-$100 specials, effective AIO coolers have become available and affordable for most mid-range and higher builds. Thanks in part to some intensive competition in this space, we've seen several vendors bring even 360mm coolers down to the sub-$100 market in an effort to get an edge over their competitors, and a sale in the process.

Seeing an opportunity to grow their own customer base, even the normally premium-focused AlphaCool has opted to get in on this action with their Core Ocean lineup of coolers. And today, we're taking a closer look at the Core Ocean T38 360mm, AlphaCool's latest entry-level AIO cooler.

At a high level, the Core Ocean T38 has been designed to balance performance with manufacturing costs, allowing the company to put together an effective cooler that can still be priced low enough to reach budget-conscious consumers. Sticking with an aluminum radiator and keeping the frills such as RGB lighting to a minimum, the T38 is primarily aimed at system builders who require straightforward, effective cooling solutions – and without the complexity of AlphaCool's renowned open-loop custom liquid cooling kits. As we'll see, nothing comes for free, but AlphaCool has been able to put together a rather effective CPU cooler for $100 that's hard to ignore.

]]>
https://www.anandtech.com/show/21351/the-alphacool-core-ocean-t38-360mm-aio-cpu-cooler-review Mon, 29 Apr 2024 09:30:00 EDT tag:www.anandtech.com,21351:news
TSMC Jumps Into Silicon Photonics, Lays Out Roadmap For 12.8 Tbps COUPE On-Package Interconnect Anton Shilov

Optical connectivity – and especially silicon photonics – is expected to become a crucial technology to enable connectivity for next-generation datacenters, particularly those designed for HPC applications. With ever-increasing bandwidth requirements needed to keep up with (and keep scaling out) system performance, copper signaling alone won't be enough going forward. To that end, several companies are developing silicon photonics solutions, including fab providers like TSMC, who this week outlined its 3D Optical Engine roadmap as part of its 2024 North American Technology Symposium, laying out its plan to bring up to 12.8 Tbps optical connectivity to TSMC-fabbed processors.

TSMC's Compact Universal Photonic Engine (COUPE) stacks an electronics integrated circuit on photonic integrated circuit (EIC-on-PIC) using the company's SoIC-X packaging technology. The foundry says that usage of its SoIC-X enables the lowest impedance at the die-to-die interface and therefore the highest energy efficiency. The EIC itself is produced at a 65nm-class process technology.

TSMC's 1st Generation 3D Optical Engine (or COUPE) will be integrated into an OSFP pluggable device running at 1.6 Tbps. That's a transfer rate well ahead of current copper Ethernet standards – which top out at 800 Gbps – underscoring the immediate bandwidth advantage of optical interconnects for heavily-networked compute clusters, never mind the expected power savings.

Looking further ahead, the 2nd generation of COUPE is designed to integrate into CoWoS packaging as co-packaged optics with a switch, allowing optical interconnections to be brought to the motherboard level. This version of COUPE will support data transfer rates of up to 6.40 Tbps with reduced latency compared to the first version.

TSMC's third iteration of COUPE – COUPE running on a CoWoS interposer – is projected to take things one step further, increasing transfer rates to 12.8 Tbps while bringing optical connectivity even closer to the processor itself. At present, COUPE-on-CoWoS is in the pathfinding stage of development, and TSMC does not have a target date set.

Ultimately, unlike many of its industry peers, TSMC has not participated in the silicon photonics market up until now, leaving this to players like GlobalFoundries. But with its 3D Optical Engine Strategy, the company will enter this important market as it looks to make up for lost time.

Related Reading

]]>
https://www.anandtech.com/show/21373/tsmc-adds-silicon-photonics-coupe-roadmap-128tbps-on-package Fri, 26 Apr 2024 16:00:00 EDT tag:www.anandtech.com,21373:news
TSMC's System-on-Wafer Platform Goes 3D: CoW-SoW Stacks Up the Chips Anton Shilov

TSMC has been offering its System-on-Wafer integration technology, InFO-SoW, since 2020. For now, only Cerebras and Tesla have developed wafer scale processor designs using it, as while they have fantastic performance and power efficiency, wafer-scale processors are extremely complex to develop and produce. But TSMC believes that not only will wafer-scale designs ramp up in usage, but that megatrends like AI and HPC will call for even more complex solutions: vertically stacked system-on-wafer designs.

Tesla Dojo's wafer-scale processors — the first solutions based on TSMC's InFO-SoW technology that are in mass production — have a number of benefits over typical system-in-packages (SiPs), including low-latency, high-bandwidth core-to-core communications, very high performance and bandwidth density, relatively low power delivery network impedance, high performance efficiency, and redundancy.

But with InFO-SoW and other wafer-scale integration methods, processor designers have to rely solely on on-chip memory. This is perfectly adequate for many applications, but it may not be enough for next-generation AI workloads. Furthermore, with InFO-SoW, the whole wafer has to be processed using one fabrication technology, which may not be optimal – or may be too expensive – for certain designs.

So, with its next-generation system-on-wafer platform, TSMC plans to bring together two of its packaging technologies: InFO-SoW and System on Integrated Chips (SoIC), which will allow it to stack memory or logic on top of a system-on-wafer using its Chip-on-Wafer (CoW) method. The CoW-SoW technology, which the company announced at its North American Technology Symposium, will be ready for mass production in 2027.

For now, TSMC is mostly talking about wedding wafer scale processors with HBM4 memory. And given that HBM4 stacks will feature a 2048-bit interface, its tighter integration with logic is something that the industry is considering.

"So, in the future, using wafer level integrations [will allow] our customers to integrate even more logic and memory together," said Kevin Zhang, Vice President of Business Development at TSMC. "SoW is no longer a fiction, this is something we already work with our customers [on] to produce some of the products already in place. This we think by leveraging our advanced wafer level integration technology, we can provide our customer a very important the path allow them to continue to grow their capability to bring in more computation, more energy efficient computation, to their AI cluster or [supercomputer]."

Related Reading

]]>
https://www.anandtech.com/show/21372/tsmcs-system-on-wafer-platform-goes-3d-cow-sow Fri, 26 Apr 2024 08:00:00 EDT tag:www.anandtech.com,21372:news
TSMC Preps Cheaper 4nm N4C Process For 2025, Aiming For 8.5% Cost Reduction Anton Shilov

While the bulk of attention on TSMC is aimed at its leading-edge nodes, such as N3E and N2, loads of chips will continue to be made using more mature and proven process technologies for years to come. Which is why TSMC has continued to refine its existing nodes, including its current-generation 5nm-class offerings. To that end, at its North American Technology Symposium 2024, the company introduced a new, optimized 5nm-class node: N4C.

TSMC's N4C process belongs to the company's 5nm-class family of fab nodes and is a superset of N4P, the most advanced technology in that family. In a bid to further bring down 5nm manufacturing costs, TSMC is implementing several changes for N4C, including rearchitecting their standard cell and SRAM cell, changing some design rules, and reducing the number of masking layers. As a result of these improvements, the company expects N4C to achieve both smaller die sizes as well as a reduction in production complexity, which in turn will bring die costs down by up to 8.5%. Furthermore, with the same wafer-level defect density rate as N4P, N4C stands to offer even higher functional yields thanks to its die area reduction.
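That last point – smaller dies yielding better at the same defect density – is easy to see with a standard die-cost model. Here's a minimal sketch using a Poisson yield model and entirely hypothetical numbers (the die areas, wafer price, and defect density below are placeholders, not TSMC figures):

```python
# Illustrative die-cost model: at a fixed defect density, a smaller die gives
# both more die candidates per wafer and a higher yield. All numbers here are
# hypothetical placeholders, not TSMC data.
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300) -> float:
    """Common gross-die estimate, with a correction term for edge loss."""
    r = wafer_diameter_mm / 2
    return math.pi * r**2 / die_area_mm2 - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)

def poisson_yield(die_area_mm2: float, defects_per_cm2: float) -> float:
    """Poisson yield model: yield falls off exponentially with die area."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100)

wafer_cost_usd = 17_000   # hypothetical 4nm-class wafer price
d0 = 0.07                 # hypothetical defects per cm^2

for label, area in [("N4P-style die", 100.0), ("N4C-style die (smaller)", 94.0)]:
    good_dies = dies_per_wafer(area) * poisson_yield(area, d0)
    print(f"{label}: ~{good_dies:.0f} good dies, ~${wafer_cost_usd / good_dies:.2f} per good die")
```

Even this toy example shows a per-die cost drop of several percent from the area shrink alone; mask-count reductions and simplified processing would stack on top of that, which is presumably where TSMC's "up to 8.5%" figure comes from.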

"So, we are not done with our 5nm and 4nm [technologies]," said Kevin Zhang, Vice President of Business Development at TSMC. "From N5 to N4, we have achieved 4% density improvement optical shrink, and we continue to enhance the transistor performance. Now we bring in N4C to our 4 nm technology portfolio. N4C allows our customers to reduce their costs by remove some of the masks and to also improve on the original IP design like a standard cell and SRAM to further reduce the overall product level cost of ownership."

TSMC says that N4C can use the same design infrastructure as N4P, though it is unclear whether N5 and N4P IP can be re-used for N4C-based chips. Meanwhile, TSMC indicates that it offers various options for chipmakers to find the right balance between cost benefits and design effort, so companies interested in adopting a 4nm-class process technology could well adopt N4C.

The development of N4C comes as many of TSMC's chip design customers are preparing to launch chips based on the company's final generation of FinFET process technology, the 3nm N3 series. While N3 is expected to be a successful family, the high costs of N3B have been an issue, and the generation is marked by diminishing performance and transistor density returns altogether. Consequently, N4C could well become a major, long-lived node at TSMC, serving as a good fit for customers who want to stick to a more cost-effective FinFET node.

"This is a very significant enhancement, we are working with our customer, basically to extract more value from their 4 nm investment," Zhang said.

TSMC expects to start volume production of N4C chips some time next year. And with TSMC having produced 5nm-class chips for nearly half a decade at that point, N4C should be able to hit the ground running in terms of volume and yields.

Related Reading

]]>
https://www.anandtech.com/show/21371/tsmc-preps-lower-cost-4nm-n4c-process-for-2025 Thu, 25 Apr 2024 10:00:00 EDT tag:www.anandtech.com,21371:news
TSMC 2nm Update: N2 In 2025, N2P Loses Backside Power, and NanoFlex Brings Optimal Cells Anton Shilov

Taiwan Semiconductor Manufacturing Co. provided several important updates about its upcoming process technologies at its North American Technology Symposium 2024. At a high level, TSMC's 2 nm plans remain largely unchanged: the company is on track to start volume production of chips on its first-generation GAAFET N2 node in the second half of 2025, and N2P will succeed N2 in late 2026 – albeit without the previously-announced backside power delivery capabilities. Meanwhile, the whole N2 family will be adding TSMC's new NanoFlex capability, which allows chip designers to mix and match cells from different libraries to optimize performance, power, and area (PPA).

One of the key announcements of the event is TSMC's NanoFlex technology, which will be a part of the company's complete N2 family of production nodes (2 nm-class, N2, N2P, N2X). NanoFlex will enable chip designers to mix and match cells from different libraries (high performance, low power, area efficient) within the same block design, allowing designers to fine tune their chip designs to improve performance or lower power consumption.

TSMC's contemporary N3 fabrication process already supports a similar capability called FinFlex, which also allows designers to use cells from different libraries. But since N2 relies on gate-all-around (GAAFET) nanosheet transistors, NanoFlex gives TSMC some additional controls: firstly, TSMC can optimize channel width for performance and power and then build short cells (for area and power efficiency) or tall cells (for up to 15% higher performance).  

When it comes to timing, TSMC's N2 is set to enter risk production in 2025 and high-volume manufacturing (HVM) in the second half of 2025, so it looks like we are going to see N2 chips in retail devices in 2026. Compared to N3E, TSMC expects N2 to increase performance by 10% to 15% at the same power, or reduce power consumption by 25% to 30% at the same frequency and complexity. As for chip density, the foundry is looking at a 15% density increase, which is a good degree of scaling by contemporary standards.

N2 will be followed by the performance-enhanced N2P, as well as the voltage-enhanced N2X, in 2026. Although TSMC once said that N2P would add a backside power delivery network (BSPDN) in 2026, it looks like this will not be the case, and N2P will use regular power delivery circuitry. The reason for this is unclear, but it looks like the company has decided not to add a costly feature to N2P, and to instead reserve it for its next-generation node, which will also be available to customers in late 2026.

N2 is still expected to feature a major innovation related to power: super-high-performance metal-insulator-metal (SHPMIM) capacitors, which are being added to improve power supply stability. The SHPMIM capacitor offers more than twice the capacitance density of TSMC's existing super-high-density metal-insulator-metal (SHDMIM) capacitor. Additionally, the new SHPMIM capacitor cuts sheet resistance (Rs in Ohm/square) and via resistance (Rc) by 50% compared to its predecessor.

Related Reading

]]>
https://www.anandtech.com/show/21370/tsmc-2nm-update-n2-in-2025-n2p-loses-bspdn-nanoflex-optimizations Thu, 25 Apr 2024 08:30:00 EDT tag:www.anandtech.com,21370:news
TSMC's 1.6nm Technology Announced for Late 2026: A16 with "Super Power Rail" Backside Power Anton Shilov With the arrival of spring comes showers, flowers, and in the technology industry, TSMC's annual technology symposium series. With customers spread all around the world, the Taiwanese pure-play foundry has adopted an interesting strategy for updating its customers on its fab plans, holding a series of symposiums from Silicon Valley to Shanghai. Kicking off the series every year – and giving us our first real look at TSMC's updated foundry plans for the coming years – is the Santa Clara stop, where yesterday the company detailed several new technologies, ranging from more advanced lithography processes to massive, wafer-scale chip packaging options.

Today we're publishing several stories based on TSMC's different offerings, starting with TSMC's marquee announcement: their A16 process node. Meanwhile, for the rest of our symposium stories, please be sure to check out the related reading below, and check back for additional stories.

Headlining its Silicon Valley stop, TSMC announced its first 'angstrom-class' process technology: A16. Following a production schedule shift that has seen backside power delivery network technology (BSPDN) removed from TSMC's N2P node, the new 1.6nm-class production node will now be the first process to introduce BSPDN to TSMC's chipmaking repertoire. With the addition of backside power capabilities and other improvements, TSMC expects A16 to offer significantly improved performance and energy efficiency compared to TSMC's N2P fabrication process. It will be available to TSMC's clients starting H2 2026.

TSMC A16: Combining GAAFET With Backside Power Delivery

At a high level, TSMC's A16 process technology will rely on gate-all-around (GAAFET) nanosheet transistors and will feature a backside power rail, which will both improve power delivery and moderately increase transistor density. Compared to TSMC's N2P fabrication process, A16 is expected to offer a performance improvement of 8% to 10% at the same voltage and complexity, or a 15% to 20% reduction in power consumption at the same frequency and transistor count. TSMC is not listing detailed density parameters this far out, but the company says that chip density will increase by 1.07x to 1.10x – keeping in mind that transistor density heavily depends on the type and libraries of transistors used.
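
As a rough illustration of how these per-node factors compound (our own arithmetic, not a TSMC figure, and it assumes N2P's density is essentially in line with N2's), chaining the quoted density gains gives a sense of the cumulative scaling from N3E:

# Our own back-of-the-envelope compounding of TSMC's quoted density factors.
# Assumption: N2P density is roughly the same as N2, so the two factors can be chained.
n3e_to_n2 = 1.15              # quoted N3E -> N2 density increase (~15%)
n2p_to_a16 = (1.07, 1.10)     # quoted N2P -> A16 density increase (1.07x to 1.10x)

low = n3e_to_n2 * n2p_to_a16[0]
high = n3e_to_n2 * n2p_to_a16[1]
print("Cumulative N3E -> A16 density: ~%.2fx to ~%.2fx" % (low, high))   # ~1.23x to ~1.27x

Modest-sounding per-node gains thus still add up to roughly a quarter more logic in the same area over two generations, which is why density scaling remains worth tracking even as the headline numbers shrink.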

The key innovation of TSMC's A16 node is its Super Power Rail (SPR) backside power delivery network, a first for TSMC. The contract chipmaker claims that A16's SPR is specifically tailored for high-performance computing products that feature both complex signal routes and dense power circuitry.

As noted earlier, with this week's announcement, A16 has now become the launch vehicle for backside power delivery at TSMC. The company was initially slated to offer BSPDN technology with N2P in 2026, but for reasons that aren't entirely clear, the tech has been punted from N2P and moved to A16. The timing TSMC originally gave for N2P back in 2023 was always a bit loose, so it's hard to say whether this represents much of a practical delay for BSPDN at TSMC. At the same time, it's important to underscore that A16 isn't just N2P renamed; rather, it will be a distinct technology from N2P.

TSMC is not the only fab pursuing backside power delivery, and accordingly, we're seeing multiple variations on the technique crop up at different fabs. The overall industry has three approaches for BSPDN: Imec's Buried Power Rail, Intel's PowerVia, and now TSMC's Super Power Rail.

The oldest technique, Imec's Buried Power Rail, essentially places the power delivery network on the backside of the wafer and then connects the power rails of logic cells to the power contacts using nano TSVs. This enables some area scaling and does not add too much complexity to production. The second implementation, Intel's PowerVia, connects power to the cell or transistor contact, which provides a better result, but at the cost of added complexity.

Finally, we have TSMC's new Super Power Rail BSPDN technology, which connects a backside power network directly to each transistor's source and drain. According to TSMC, this is the most efficient technology in terms of area scaling, but the trade-off is that it's the most complex (and expensive) when it comes to production.

That TSMC has opted to go with the most complex version of BSPDN may be part of the reason that we've seen it removed from N2P, as implementing it will ultimately add to both time and costs. This leaves A16 as TSMC's premier performance node for the 2026/2027 time frame, while N2P can be positioned to offer a more balanced combination of performance and cost efficiency.

Angstrom Era Kicks Off In Late 2026 With New Node Naming Convention

Finally, as with Intel, we're also seeing TSMC adopt a new process node naming convention starting with this generation of technology. The name itself is largely arbitrary – and this has already been the case in the fab industry for several years now – but with current node names already in the single digits (e.g. N2), the industry has needed to re-calibrate node names to something smaller than the nanometer. And thus we've arrived at the 'angstrom era.' But regardless of what exactly it's called or why it's called that, the important point is that A16 will be the next generation node beyond TSMC's 2nm-class products.

TSMC expects to start volume production on A16 in H2 2026, so it is likely that the first products based on this technology will hit the market in 2027. Given the timing, the production node will presumably compete against Intel's 14A; though at 2+ years out and with no one producing BSPDN in volume today, there's still a lot of time for plans and roadmaps to change.

]]>
https://www.anandtech.com/show/21369/tsmcs-16nm-technology-announced-for-late-2026-a16-with-super-power-rail-bspdn Thu, 25 Apr 2024 07:30:00 EDT tag:www.anandtech.com,21369:news
Report: Seagate, Western Digital Hike HDD Prices Amid Surge In Demand Anton Shilov

Seagate Technology has reportedly notified its customers about its plans to raise prices on new hard drive orders and on demand that exceeds prior agreements, echoing a similar move by Western Digital, which increased its prices earlier this month. These changes come in response to a surge in demand for high-capacity HDDs and constraints in supply stemming from reduced production capacity at both Seagate and Western Digital, reports TrendForce.

According to industry insights reported by TechNews, the sector anticipates that the scarcity of high-capacity HDD products will persist throughout the current quarter and possibly extend over the entire year. HDD prices are forecast to rise by 5% to 10% in Q2 2024 alone and could increase further as a result of the ongoing challenges faced by the storage industry.

The primary driver behind Seagate's decision is increased demand for high-capacity HDDs, which are used to train AI models. This demand spike, coupled with a reduction in production output from hard drive makers, has created a significant supply-demand imbalance. As a result, Seagate has decided to adjust its pricing strategy to manage the situation. Further exacerbating the issue are global inflationary pressures, which continue to push up costs across the board and also contributed to the company's decision to raise prices, Seagate said in a message to clients published by TrendForce.

Seagate emphasized that its reduced production capacity has been a major challenge, hindering the company's ability to fulfill customer demands fully and promptly.

"As a result, we will be implementing price increases effective immediately on new orders and for demand that is over and above previously committed volumes," the alleged memo from Seagate reads. "Supply constraints are expected to continue and as such we anticipate that prices will continue to increase in the coming quarters."

Earlier this month, Western Digital also informed its customers about price hikes for its HDD and SSD products. That notification cited similar issues: higher-than-anticipated demand across the whole product range and additional supply chain challenges affecting the electronics sector. Western Digital's announcement made it clear that these disruptions are likely to continue, prompting further price adjustments.

Sources: TrendForce, TrendForce, TechNews

]]>
https://www.anandtech.com/show/21368/report-seagate-western-digital-hike-hdd-prices Wed, 24 Apr 2024 12:00:00 EDT tag:www.anandtech.com,21368:news