On March 22, chip design giant Nvidia announced the Hopper H100 chip, built on a new architecture, at its GTC 2022 conference. The chip is mainly aimed at powering AI, self-driving cars, metaverse tools, and digital products, and at further accelerating work in graphics, scientific computing, and AI.
Nvidia releases Hopper H100 AI chip
As the successor to the Nvidia A100, the new Hopper H100 AI chip is named after computer science pioneer Grace Hopper (who built one of the first compilers, helped develop the COBOL programming language, and popularized the term "bug"). Beyond the generational gains Nvidia's architectures typically deliver, it also brings several new "super" capabilities.
Using the company's NVLink high-speed interconnect, customers can link up to 256 H100 chips into what is "essentially one mind-blowing GPU," Nvidia founder and CEO Jen-Hsun Huang said at the conference.
According to reports, the new Hopper H100 chip is made on a customized version of TSMC's 5nm process (4N), with data-processing circuitry built from up to 80 billion transistors and an IO bandwidth of 40 terabytes per second. It is the world's first GPU to support PCIe 5.0 and HBM3.
In terms of computing power, the new Hopper H100 reaches 4 PetaFLOPS at FP8, 2 PetaFLOPS at FP16, 1 PetaFLOPS at TF32, and 60 TeraFLOPS at FP64 and FP32.
Compared with the previous-generation A100's 400W, the new Hopper H100 draws as much as 700W, while its FP8 AI computing power is 6 times the A100's FP16 throughput, which Nvidia calls its "biggest performance improvement in history."
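The headline ratios above can be checked with simple arithmetic. A minimal sketch, assuming the A100's FP16 Tensor Core throughput of 624 TFLOPS (with sparsity) from Nvidia's public A100 datasheet, a figure not stated in the article itself:

```python
# H100 peak throughput figures quoted in the article, in TFLOPS
# (Nvidia's announced numbers, which include structured sparsity).
h100_tflops = {"FP8": 4000, "FP16": 2000, "TF32": 1000, "FP64": 60, "FP32": 60}

# Assumption: A100 FP16 Tensor Core throughput with sparsity, per the
# public A100 datasheet; the article only gives the resulting "6x" claim.
a100_fp16_tflops = 624

a100_power_w = 400  # previous-generation A100 TDP, per the article
h100_power_w = 700  # new H100 TDP, per the article

# Raw speedup of H100 FP8 over A100 FP16 -- roughly the article's "6 times"
speedup = h100_tflops["FP8"] / a100_fp16_tflops

# How much of that speedup survives the higher power draw
perf_per_watt_gain = speedup / (h100_power_w / a100_power_w)

print(f"FP8 vs A100 FP16 speedup: {speedup:.1f}x")
print(f"Perf-per-watt gain: {perf_per_watt_gain:.1f}x")
```

The raw ratio comes out near 6.4x, consistent with the article's rounded "6 times"; dividing out the 1.75x power increase still leaves a sizable efficiency gain per watt.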
It is reported that Nvidia plans to package the Hopper H100 into its DGX compute modules, which can be connected into larger systems called SuperPods. An early DGX customer was Meta (formerly Facebook), whose new giant AI supercomputer is used to build the metaverse; Nvidia is looking to outdo it with its own DGX SuperPod system, Eos.
According to Jen-Hsun Huang, the Hopper H100 GPU will ship in the third quarter, while the Grace CPU is "expected to ship next year."
In addition to the Hopper H100, Nvidia also introduced a new member of its Ampere family of graphics chips at the conference, the RTX A5500, aimed at 3D tasks such as animation, product design, and visual data processing, mainly for professionals who need strong graphics capabilities. The chip's launch also coincides with Nvidia's exploration of the Omniverse, the tools and cloud computing services needed to build the 3D realm of the metaverse.
Chip "Battle"
On the arrival of the new Hopper H100, some media commented that it may put pressure on many competitors, such as Intel's upcoming Ponte Vecchio processor (which has more than 100 billion transistors), Apple's M1 Ultra, and a range of purpose-built AI accelerators from startups such as Graphcore, SambaNova Systems, and Cerebras.
Since the Hopper H100 will also target the self-driving car space, one notable competitor to Nvidia is Tesla, whose D1 chip powers its Dojo system for training self-driving models. The automaker has previously said that Dojo will replace Nvidia chips once it is put into use.
In terms of chips, Nvidia may not be as well-known as Intel and Apple, but in the practical application of new-generation technologies it should not be underestimated. For example, its Omniverse work spans multiple fields, including collaborative 3D design via cloud and digital twin technologies that mirror real-world systems in computing.
Despite the crowd of adversaries, Nvidia has continued to forge ahead and make progress. It plans to launch its next-generation Hyperion automotive platform in 2026 and expects its automotive chips to generate $11 billion in revenue over the next six years. Its further development and exploration of the metaverse are also well worth watching.