Doubt hangs over orders for NVIDIA's China-specific H20 chip
Rumors that NVIDIA's (NVDA.US) China-specific H20 chip may be discontinued have circulated in the industry for months, and the chip's fate has again become a focus of attention. Recently, several distributors told Caixin reporters that they can no longer place orders for H20 chips, and some end-device manufacturers said that certain domestic distributors have stopped accepting H20 orders.

Distributor Wang Yu told Caixin that NVIDIA stopped accepting H20 orders last month, though there was no written notice: "To be precise, I'm waiting for the latest news. The production schedule will probably be delayed. There's no explicit instruction, but that's basically how it is. Perhaps some customized customer requirements can still be met."

End-device manufacturer Li Xiaolin also told Caixin, "We have to place orders with NVIDIA through distributors. Recently, several distributors have said they will no longer take H20 orders, while others say they still have stock available to fill orders." When asked about the H20, however, channel merchant Li Ming said, "I can still take orders here."

Earlier, a Caixin reporter sent an interview outline to NVIDIA China's email address to verify whether H20 orders had been halted, but received no response. Notably, on the evening of September 20th, NVIDIA responded to media inquiries about the Caixin report with "no comment." Affected by the news, NVIDIA's stock price fluctuated last Friday, opening down about 2% and closing down 1.59% at $116.00.

NVIDIA seized the opportunity of the large-model wave and consolidated its dominant position. In 2023, to comply with U.S. policy restrictions, NVIDIA launched three China-specific chips derived from the H100, of which the H20 has the strongest performance.
Li Xiaolin revealed that NVIDIA began accepting H20 chip orders in February this year, officially started shipping in April, and only began shipping in volume in May. However, Caixin reporters noticed that several practitioners, including Wang Yu, described the H20 chip as "chicken ribs," a Chinese idiom for something of marginal value: tasteless to eat, yet a pity to throw away.

According to public information, the NVIDIA H20 chip was launched to comply with U.S. export restrictions and is an AI accelerator designed specifically for the Chinese market. The H20 is based on the NVIDIA Hopper architecture and carries 96 GB of HBM3 memory, providing 4.0 TB/s of memory bandwidth. In terms of compute, the H20 delivers 296 TFLOPS of FP8 performance and 148 TFLOPS of FP16 performance. Compared with the H100, its GPU core count is 41% lower and its performance is 28% lower.

Several industry professionals also told Caixin that interest in the H20 chip is actually fairly low. Zhang Na, who is close to several large-model makers, said, "I feel that market attention to compute-power leasing is low right now, and the H20 isn't that sought-after." Li Xiaolin added, "The H20 isn't cheap; it still sells for more than one million yuan. The big players stocked up earlier, though smaller players may still have demand."
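To put the quoted specifications in perspective, a quick back-of-envelope calculation relates the H20's peak compute to its memory bandwidth. This is only a rough sketch using the figures quoted above (296 TFLOPS FP8, 148 TFLOPS FP16, 4.0 TB/s); real workload behavior varies.

```python
# Arithmetic intensity (FLOPs available per byte of memory traffic)
# for the H20, computed from the figures quoted in the article.
fp8_tflops = 296.0      # peak FP8 throughput, TFLOPS
fp16_tflops = 148.0     # peak FP16 throughput, TFLOPS
bandwidth_tbps = 4.0    # HBM3 memory bandwidth, TB/s

# Peak FLOPs per byte fetched: a high ratio favors compute-bound work,
# while a low ratio means memory-bound workloads stay well fed.
fp8_ratio = fp8_tflops * 1e12 / (bandwidth_tbps * 1e12)
fp16_ratio = fp16_tflops * 1e12 / (bandwidth_tbps * 1e12)

print(f"FP8:  {fp8_ratio:.0f} FLOP/byte")   # 74 FLOP/byte
print(f"FP16: {fp16_ratio:.0f} FLOP/byte")  # 37 FLOP/byte
```

The comparatively generous bandwidth relative to compute is one reason the H20 is often discussed in the context of memory-bound inference rather than large-scale training, which may partly explain the "chicken ribs" characterization.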
NEO Semiconductor announces 3D X-AI chip technology
On August 18th, semiconductor company NEO Semiconductor announced the launch of its 3D X-AI chip technology, designed to replace the DRAM dies in existing high-bandwidth memory (HBM) in order to boost AI processing performance and sharply reduce energy consumption. The 3D X-AI chip integrates 8,000 neuron circuits that perform AI processing tasks directly in 3D DRAM, accelerating AI performance by up to 100 times. At the same time, because far less data needs to be transmitted, the chip's power consumption is reduced by 99%, alleviating the power and heat problems of the data bus.

NEO Semiconductor's design also delivers 8 times the memory density: the 3D X-AI chip contains 300 DRAM layers, supporting the operation of larger-scale AI models. The company previously announced the world's first 3D DRAM technology, and the 3D X-AI chip builds on it, achieving an AI processing throughput of up to 10 TB/s per chip through HBM-like stacked packaging.

NEO Semiconductor founder and CEO Andy Hsu noted that the separation of data storage and processing in current AI chip architectures causes performance bottlenecks and high power consumption. By performing AI processing inside each HBM die, the 3D X-AI chip greatly reduces the amount of data moved between HBM and the GPU, improving performance and cutting power. Industry analyst Jay Kramer believes the application of 3D X-AI technology will accelerate the development of emerging AI use cases and drive the creation of new ones, opening a new era for AI application innovation.
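Hsu's argument, that power falls because data movement between HBM and the GPU falls, can be illustrated with a toy energy model. The wattages below are illustrative assumptions, not NEO figures; only the near-total reduction in bus traffic comes from the announcement.

```python
# Toy model of the in-memory-processing argument: if data movement over
# the HBM<->GPU bus dominates chip power, cutting that traffic cuts
# total power almost proportionally. The 90 W / 10 W split is an
# assumed, illustrative baseline, not a figure from NEO Semiconductor.
def total_power(transfer_w, compute_w, traffic_fraction):
    """Chip power after scaling data-bus traffic by traffic_fraction."""
    return compute_w + transfer_w * traffic_fraction

# Baseline: all traffic goes over the bus (fraction = 1.0).
baseline = total_power(transfer_w=90.0, compute_w=10.0, traffic_fraction=1.0)
# In-memory processing: assume 99% of bus traffic is eliminated.
in_memory = total_power(transfer_w=90.0, compute_w=10.0, traffic_fraction=0.01)

print(f"baseline: {baseline:.1f} W, in-memory: {in_memory:.1f} W")
print(f"reduction: {100 * (1 - in_memory / baseline):.1f}%")
```

Note that under this assumed 90/10 split, total power falls by about 89%, not 99%; the announced 99% chip-level reduction would require data movement to account for essentially all of the baseline power budget.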