GDDR7 video memory standard announced, first graphics cards revealed
As the AI server market heats up, related memory standards and products are also quickly catching up.
Today, JEDEC officially announced GDDR7 as the next-generation memory standard, with both AMD and NVIDIA on board.
Currently, Samsung and Micron have both laid out development plans for next-generation GDDR7 memory. Samsung aims to reach a speed of 32 Gbps, while Micron is targeting 24 Gb chips running at 32 Gbps. Micron's latest roadmap also sets a goal of 36 Gbps speeds and 24 Gb+ densities by 2026.
According to the published standard, JESD239 GDDR7 offers twice the bandwidth of GDDR6, with the number of independent channels doubled to four. Each device can reach 192 GB/s and supports densities from 16 Gbit to 32 Gbit, including a two-channel mode that doubles system capacity.
JEDEC stated: "JESD239 GDDR7 marks a significant advancement in high-speed memory design. With the transition to pulse amplitude modulation (PAM3) signaling, the memory industry has a new path to expand the performance of GDDR devices and drive the continuous development of graphics and various high-performance applications, focusing not only on bandwidth but also integrating the latest data integrity features to meet the market's RAS (reliability, availability, serviceability) demands for GDDR."
As things stand, the flagship card of the RTX 50 series (most likely to be called the RTX 5090) will probably debut with next-generation GDDR7 memory. According to leaks, the new card's performance target is very aggressive, nearly double that of the previous generation.
Based on previously leaked information about Samsung's GDDR7, even with a 256-bit bus, GDDR7 at a 37 Gbps data rate can provide 1.18 TB/s of bandwidth, surpassing a 384-bit bus of 24 Gbps GDDR6X. Paired with a 384-bit bus, 37 Gbps GDDR7 would deliver almost 1.8 TB/s, leading the RTX 4090 by a full 80% and nearly double the RX 7900 XTX. From today's vantage point, AMD will have almost no direct answer to Nvidia's flagship cards such as the RTX 5090.

Nvidia's next-generation Blackwell architecture is expected to use GDDR7 at launch. We might get the data center version of Blackwell by the end of 2024, but it will use HBM3E memory rather than GDDR7. Consumer products are likely to arrive in early 2025, followed as usual by professional and data center variants of those parts. AMD is also developing RDNA 4, which we expect to use GDDR7 as well, although don't be surprised if both companies' low-end parts stick with GDDR6 for cost reasons.
In either case, AMD or Nvidia using GDDR7 at its top speed could offer up to 2,304 GB/s of bandwidth over today's widest 384-bit interface. Will we really see such bandwidth? Perhaps not; Nvidia's RTX 40-series GPUs with GDDR6X, for instance, all run clocks slightly below the maximum.
In July 2023, Samsung revealed the development of the industry's first GDDR7 memory chip. The new device is said to have a data transfer rate of 32GT/s, use PAM3 signaling, and promise a 20% improvement in energy efficiency over GDDR6. To achieve this goal, Samsung had to implement several new technologies.
Samsung's first 16 Gb GDDR7 device runs at 32 GT/s, thus offering 128 GB/s of bandwidth per chip, far exceeding the 89.6 GB/s per chip of GDDR6X at 22.4 GT/s. From this perspective, a 384-bit memory subsystem built from 32 GT/s GDDR7 chips would offer up to 1.536 TB/s, significantly surpassing the 1.008 TB/s of the GeForce RTX 4090.
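All of these per-chip and per-subsystem figures follow from the same arithmetic: peak bandwidth equals data rate times bus width divided by eight. A minimal sketch of that calculation (the function name and layout are illustrative, not from any vendor tool):

```python
def bandwidth_gb_s(data_rate_gt_s: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s = data rate (GT/s) x bus width (bits) / 8."""
    return data_rate_gt_s * bus_width_bits / 8

# Each GDDR chip exposes a 32-bit interface:
print(bandwidth_gb_s(32.0, 32))    # 128.0 GB/s per GDDR7 chip at 32 GT/s
print(bandwidth_gb_s(22.4, 32))    # ~89.6 GB/s per GDDR6X chip at 22.4 GT/s
print(bandwidth_gb_s(48.0, 32))    # 192.0 GB/s, the per-device peak cited for JESD239

# A 384-bit subsystem is twelve 32-bit chips:
print(bandwidth_gb_s(32.0, 384))   # 1536.0 GB/s, vs. 1008 GB/s on the RTX 4090
```

The same function reproduces the article's other numbers, e.g. a 384-bit bus at the 48 GT/s ceiling gives the 2,304 GB/s figure mentioned above.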
To achieve the unprecedented high data transfer rate, GDDR7 uses PAM3 signaling, a pulse amplitude modulation technique with three different signaling levels (-1, 0, and +1). This mechanism can transmit three bits of data within two cycles, which is more efficient than the two-level NRZ method used by GDDR6. However, it should be noted that the generation and decoding of PAM3 signals are more complex than NRZ signals (which means additional power consumption), and they are more susceptible to noise and interference. At the same time, it appears that the advantages of PAM3 outweigh its challenges, hence its adoption by GDDR7.
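The efficiency gain is easy to see with a toy encoder: two three-level symbols give 3^2 = 9 states, enough to carry all 2^3 = 8 three-bit patterns, so PAM3 moves 1.5 bits per symbol versus NRZ's 1. The mapping below is an arbitrary illustration of that counting argument, not the actual GDDR7 line code:

```python
# Toy PAM3 mapping: 3 bits -> 2 three-level symbols.
# Arbitrary assignment for illustration; NOT the real GDDR7 encoding.

LEVELS = (-1, 0, 1)

# Enumerate 8 of the 9 possible two-symbol pairs and pair each with a 3-bit value.
PAIRS = [(a, b) for a in LEVELS for b in LEVELS][:8]
ENCODE = {value: pair for value, pair in enumerate(PAIRS)}  # 3-bit value -> symbol pair
DECODE = {pair: value for value, pair in ENCODE.items()}

def pam3_encode(bits: int) -> tuple:
    """Map a 3-bit value (0..7) to a pair of PAM3 symbols."""
    return ENCODE[bits]

def pam3_decode(pair: tuple) -> int:
    """Recover the 3-bit value from a pair of PAM3 symbols."""
    return DECODE[pair]

# Every 3-bit value round-trips through two symbols;
# NRZ would need three two-level symbols for the same 3 bits.
assert all(pam3_decode(pam3_encode(v)) == v for v in range(8))
```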
In addition to higher performance, Samsung's 32 GT/s GDDR7 chips are said to offer a 20% improvement in energy efficiency over 24 GT/s GDDR6, though Samsung did not specify how it measures energy efficiency. Typically, memory manufacturers measure the energy per transferred bit (picojoules per bit).
This does not mean that GDDR7 memory chips and memory controllers will consume less power than their GDDR6 counterparts. PAM3 encoding and decoding are more complex and require more power. In fact, Samsung even stated that it used an epoxy molding compound (EMC) with high thermal conductivity and 70% lower thermal resistance for GDDR7 packaging to keep the active components (the IC itself) from overheating, an indication that GDDR7 memory devices run hotter than GDDR6 devices, especially at high clocks.
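The distinction between better per-bit efficiency and higher absolute power is easy to quantify: power is energy per bit times bits per second. In the sketch below the pJ/bit figures are hypothetical placeholders (Samsung published only the 20% relative improvement):

```python
def power_watts(pj_per_bit: float, gt_s_per_pin: float, pins: int) -> float:
    """Interface power = energy per bit (pJ) * bits transferred per second."""
    bits_per_sec = gt_s_per_pin * 1e9 * pins
    return pj_per_bit * 1e-12 * bits_per_sec

# Hypothetical 7.0 pJ/bit for GDDR6 at 24 GT/s on a 32-bit chip,
# vs. GDDR7 at 32 GT/s with the claimed 20% lower energy per bit.
gddr6 = power_watts(7.0, 24, 32)
gddr7 = power_watts(7.0 * 0.8, 32, 32)

# GDDR7 draws more absolute power despite the better efficiency,
# because it moves a third more bits every second.
print(gddr6, gddr7)
```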
It is also worth noting that Samsung's GDDR7 components will offer low-voltage options for applications such as laptops, though the company did not disclose what performance to expect from such devices.

Samsung's announcement did not say when it plans to begin mass production of its GDDR7 components, nor which process technology will be used. Given that AMD and NVIDIA release new GPU architectures roughly every two years, it is logical to expect next-generation graphics processors to hit the market in 2024, and they are likely to adopt GDDR7.
In the meantime, Samsung anticipates that artificial intelligence, high-performance computing, and automotive applications will also leverage GDDR7, so some kind of AI or HPC ASIC might adopt GDDR7 before the GPU.
Yongcheol Bae, Executive Vice President of Samsung Electronics' Memory Product Planning Team, stated: "Our GDDR7 DRAM will help enhance user experiences in workstations, PCs, and gaming consoles, and is expected to expand into future applications such as artificial intelligence, high-performance computing (HPC), and automotive. The next-generation graphics DRAM will be brought to market based on industry demand, and we plan to continue to lead in this field."