GDDR7 officially released

According to a report from BusinessWire, JEDEC has published the GDDR7 memory standard specification. The next-generation memory will be used in graphics cards, with AMD, Micron, Nvidia, Samsung, and SK Hynix all participating in its development. GDDR7 is expected to become the memory of choice for high-end RDNA 4 and Blackwell GPUs, which are rumored to launch next year and will compete for a spot among the best graphics cards.

It has been nearly six years since the first graphics cards began to support GDDR6 memory, starting with Nvidia's launch of the Turing-based RTX 20 series in September 2018. The first RTX 2080 and RTX 2080 Ti GPUs with GDDR6 ran at a data rate of 14 Gbps (14 GT/s), providing 56 GB/s of bandwidth per device. Later solutions, such as AMD's RX 7900 XTX, run at up to 20 Gbps, or 80 GB/s per device.


Nvidia helped create a faster alternative, GDDR6X, with speeds of 19 Gbps in the RTX 3080, and ultimately up to 23 Gbps in the latest RTX 4080 Super. Officially, Micron's GDDR6X chips have a speed of up to 24 Gbps, with a rate of up to 96 GB/s per device.

GDDR7 will significantly increase bandwidth. The JEDEC specification ultimately targets 192 GB/s per device, which works out to a data rate of 48 Gbps, twice as fast as the fastest GDDR6X. However, GDDR7 reaches this speed differently from previous memory solutions.
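For the arithmetic behind these per-device figures, here is a minimal sketch, assuming the usual 32-bit interface of a GDDR memory device:

```python
# Minimal sketch: per-device bandwidth from per-pin data rate.
# Assumes the standard 32-bit (x32) GDDR device interface; the results
# match the per-device numbers quoted in the article.

def device_bandwidth_gbps(per_pin_gbps: float, bus_width_bits: int = 32) -> float:
    """Return per-device bandwidth in GB/s."""
    return per_pin_gbps * bus_width_bits / 8

print(device_bandwidth_gbps(14))   # GDDR6 at launch   -> 56.0 GB/s
print(device_bandwidth_gbps(24))   # fastest GDDR6X    -> 96.0 GB/s
print(device_bandwidth_gbps(48))   # GDDR7 spec limit  -> 192.0 GB/s
```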

GDDR7 will use three-level signaling (-1, 0, +1) to transfer three bits of data every two cycles. This replaces the NRZ (non-return-to-zero) signaling used in GDDR6, which transfers two bits in the same two cycles. That change alone increases data transfer efficiency by 50%, meaning the base clock does not have to double relative to GDDR6 in order to double the data rate.
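To see why the signaling change eases the clock requirement, here is a rough back-of-the-envelope sketch of the required symbol rates (ignoring DDR clocking details; the 20 Gbps GDDR6 figure is the top speed quoted earlier):

```python
# Sketch: per-pin symbol (signaling) rate needed to reach a target data rate.
# NRZ carries 1 bit per symbol; PAM3 averages 1.5 bits per symbol.

def symbol_rate_gbaud(data_rate_gbps: float, bits_per_symbol: float) -> float:
    return data_rate_gbps / bits_per_symbol

print(symbol_rate_gbaud(20, 1.0))  # GDDR6 at 20 Gbps with NRZ    -> 20.0 Gbaud
print(symbol_rate_gbaud(48, 1.0))  # 48 Gbps if it stayed on NRZ  -> 48.0 Gbaud
print(symbol_rate_gbaud(48, 1.5))  # 48 Gbps with PAM3            -> 32.0 Gbaud
```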

Other changes include the use of a core-independent linear feedback shift register (LFSR) training mode to improve accuracy and reduce training time, and a doubling of the number of independent channels, from two in GDDR6 to four in GDDR7, alongside the PAM3 signaling already described.

None of this is new information, as Samsung revealed many key GDDR7 details in July last year. However, the release of the JEDEC standard marks an important milestone and indicates that the public availability and use of GDDR7 solutions are imminent (relatively speaking).

Nvidia's next-generation Blackwell architecture is expected to use GDDR7 when it is launched. We may see the data center version of Blackwell at the end of 2024, but it will use HBM3E memory instead of GDDR7. Consumer products are likely to be available at the beginning of 2025, and as usual, these parts will have professional and data center versions. AMD is also developing RDNA 4, and we expect it to use GDDR7 as well - however, do not be surprised if both companies' low-end parts still choose to stick with GDDR6 for cost reasons.

Either way, AMD or Nvidia running GDDR7 at its top speed over today's widest 384-bit interface could provide up to 2,304 GB/s of bandwidth. Will we actually see such figures? Maybe not: Nvidia's RTX 40-series GPUs with GDDR6X, for example, all run their memory slightly below its maximum rated speed. Even so, we could easily see the upcoming architectures double their memory bandwidth.
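For a card-level comparison, the same arithmetic applied to a 384-bit bus (the RTX 4090's 21 Gbps, 384-bit GDDR6X configuration is added as a reference point; it is not from the article):

```python
# Sketch: total card bandwidth from per-pin data rate and bus width.

def card_bandwidth_gbps(per_pin_gbps: float, bus_width_bits: int) -> float:
    return per_pin_gbps * bus_width_bits / 8

print(card_bandwidth_gbps(21, 384))  # RTX 4090 (GDDR6X today) -> 1008.0 GB/s
print(card_bandwidth_gbps(48, 384))  # GDDR7 at the spec limit -> 2304.0 GB/s
```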

When will these actually arrive? We do not rule out a launch at the end of 2024. Nvidia's RTX 30 series launched in the fall of 2020 and the RTX 40 series in the fall of 2022; AMD's RX 6000 series also launched at the end of 2020, followed by the RX 7000 series at the end of 2022. If the same two-year rhythm holds, we may see GDDR7 graphics cards by the end of the year. But don't get your hopes up too much, as we still think early 2025 is more likely.

JEDEC Releases GDDR7 Graphics Memory Standard

The global leader in setting microelectronics industry standards, JEDEC Solid State Technology Association, is pleased to announce the release of JESD239 Graphics Double Data Rate (GDDR7) SGRAM. JESD239 GDDR7 provides twice the bandwidth of GDDR6, with bandwidth up to 192 GB/s per device, meeting the growing demand for more memory bandwidth in graphics, gaming, computing, networking, and AI applications.

JESD239 GDDR7 is the first JEDEC-standard DRAM to use a Pulse Amplitude Modulation (PAM) interface to operate at high frequencies. Its PAM3 interface improves the signal-to-noise ratio (SNR) at high frequencies while enhancing energy efficiency. By using three levels (+1, 0, -1) to transmit 3 bits over 2 cycles, instead of the traditional NRZ (non-return-to-zero) interface's 2 bits over 2 cycles, PAM3 offers a higher data transfer rate, thereby improving performance.

Other advanced features include:

A core-independent LFSR (Linear Feedback Shift Register) training mode with eye mask and error counter to improve training accuracy while reducing training time;

Doubling the number of independent channels, from 2 in GDDR6 to 4 in GDDR7;

Support for 16 Gbit to 32 Gbit densities, including support for 2-channel mode to double system capacity;

Meeting the market demand for RAS (Reliability, Availability, Serviceability) by integrating the latest data integrity features, including on-die ECC (ODECC) with real-time reporting, data poisoning, error checking and scrubbing, and command address parity with command blocking (CAPARBLK).

"JESD239 GDDR7 marks a significant advancement in high-speed memory design," said JEDEC Board Chairman Mian Quddus. "With the transition to PAM3 signaling, the memory industry has a new path to extend the performance of GDDR devices and drive the continued evolution of graphics and various high-performance applications."

"GDDR7 is the first GDDR not only focusing on bandwidth but also meeting the RAS market demand by integrating the latest data integrity features, enabling GDDR devices to better serve existing markets such as cloud gaming and computing, and to expand into AI," said JEDEC GDDR Subcommittee Chairman Michael Litt.AMD's Chief Technology Officer for Computing and Graphics and Corporate Fellow, Joe Macri, said: "The groundbreaking GDDR7 memory standard introduced today represents a key step in unleashing the potential of next-generation consumer, gaming, commercial, and enterprise devices." "By harnessing the transformative power of GDDR7, we can together usher in a new era of transformative computing and graphics possibilities, paving the way for a future shaped by innovation and discovery."

"Micron has a long history of defining graphics DRAM standards through JEDEC, and has played a key role in driving the standardization activities of GDDR7 with our partners and customers," said Frank Ross, Chief Architect and Distinguished Technologist of the Computing and Networking Business Unit at Micron. "The development of GDDR products using multi-level signaling helps to determine the path to meet the growing system bandwidth demands of the future. By adding leading RAS features, the GDDR7 standard can meet workload requirements far beyond the traditional graphics market."

NVIDIA's Vice President of GPU Product Management, Kaustubh Sanghani, said: "NVIDIA is pleased that our collaboration with JEDEC has helped make PAM signaling the foundational technology for GDDR7, helping customers to fully leverage the performance of GPUs."

Samsung's Executive Vice President and Head of Memory Product Planning, YongCheol Bae, said: "Artificial intelligence, high-performance computing, and premium gaming require high-performance memory to process data at unprecedented speeds. GDDR7 at 32 Gbps will deliver a 1.6x performance improvement while offering the highest reliability and cost-effectiveness."

"With each generation of graphics memory, the industry has always been committed to achieving the grand goal of ensuring the highest speed while improving energy efficiency. SK Hynix is honored to participate in the GDDR7 standard work as a JEDEC member and is pleased to be able to provide customers with memory with the highest speed and excellent efficiency. The standard work once again will become a new opportunity for the industry to expand the memory ecosystem," said Sang Kwon Lee, Vice President of Product Planning at SK Hynix.

More technical details of GDDR7 are revealed:

36 Gbps and PAM3 encoding

When Samsung teased the ongoing development of GDDR7 memory two years ago in October, it did not disclose any further technical details of the upcoming specification. Cadence, however, has since revealed some additional details about the technology. It turns out that GDDR7 memory will use PAM3 and NRZ signaling and will support a number of other features, targeting a data rate of up to 36 Gbps per pin.

A brief history lesson on GDDR

At a high level, the development of GDDR memory in recent years has been fairly straightforward: each new iteration has increased signal rates, increased burst sizes to keep up with those signal rates, and improved channel utilization. What none of them have done is significantly increase the internal clock of the memory cells. For example, GDDR5X and later GDDR6 increased the burst size to 16 bytes and then moved to dual-channel, 32-byte access granularity. Although every generation has faced challenges, industry players have so far managed to keep raising the memory bus frequency with each version of GDDR to sustain performance gains. But even the "simple" approach of raising frequency is becoming less simple, which has prompted the industry to look for solutions beyond merely speeding up the clock.

With GDDR6X, Micron and NVIDIA replaced traditional non-return-to-zero (NRZ/PAM2) encoding with four-level pulse amplitude modulation (PAM4) encoding. PAM4 uses four signal levels to carry two data bits per cycle, doubling the effective data rate at a given signal rate. In fact, because GDDR6X operates in PAM4 mode with a burst length of 8 (BL8), it is not faster than GDDR6 at the same data rate (or, more precisely, the same signal rate); rather, it is designed to reach higher data rates than GDDR6 can easily achieve.

Where four-level pulse amplitude modulation beats NRZ is signal loss: for a given data rate, PAM4 requires only half the baud rate of an NRZ signal, so the resulting signal loss is significantly reduced. Since higher-frequency signals attenuate faster as they pass through wires and traces - and by digital-logic standards, memory traces are relatively long - being able to run what is essentially a lower-frequency bus makes some of the engineering and routing needed to reach higher data rates easier.

The trade-off is that PAM4 signals are generally more sensitive to random and induced noise; in exchange for the lower-frequency signal, the receiver must correctly distinguish twice as many states. In practice, this leads to a higher bit error rate (BER) at a given frequency. Reducing the BER requires equalization at the receiver (Rx) and pre-compensation at the transmitter (Tx), both of which increase power consumption. And although it is not used in GDDR6X memory, forward error correction (FEC) becomes a practical requirement at higher frequencies (as with PCIe 6.0).

Of course, a GDDR6X memory subsystem requires a brand-new memory controller, as well as a brand-new physical interface (PHY) on both the processor and the memory chips. That implementation complexity is largely why, until recently, four-level encoding had been confined almost entirely to high-end data center networking, where the margins can support such cutting-edge technology.

GDDR7: PAM3 encoding up to 36 Gbps/pin

Given the trade-offs between PAM4 and NRZ signaling outlined above, the JEDEC members behind the GDDR7 memory standard have taken something of a compromise position: GDDR7 memory is set to use PAM3 encoding for high-speed transmission rather than PAM4.

As the name suggests, PAM3 sits between NRZ/PAM2 and PAM4, using three-level pulse amplitude modulation (-1, 0, +1) signaling that allows it to transmit 1.5 bits per cycle (or, more precisely, 3 bits over two cycles). PAM3 provides a higher per-cycle data transfer rate than NRZ - reducing the need to push the memory bus to higher frequencies and the signal-loss challenges that come with it - while having looser signal-to-noise requirements than PAM4. Overall, GDDR7 promises higher performance than GDDR6 with lower power consumption and implementation costs than GDDR6X.

For those keeping score, this is the second major consumer technology we have seen adopt PAM3; for similar technical reasons, USB4 v2 (also known as 80 Gbps USB) uses PAM3 as well. So what exactly is PAM3? It is a data-line signaling scheme in which each symbol carries one of three values: -1, 0, or +1. What the system actually does is combine two PAM3 symbols into a 3-bit data value, such as 000 being transmitted as a -1 followed by another -1. This gets fairly involved, so a sketch of one possible mapping is shown below:
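The mapping below is hypothetical and chosen only for illustration; apart from the 000 -> (-1, -1) pair mentioned above, the actual assignment is defined by JESD239:

```python
# Illustrative sketch only: how 3 bits can ride on two PAM3 symbols.
# Two three-level symbols give 3 x 3 = 9 combinations, enough to carry
# 2^3 = 8 binary values, with one combination left unused.
# This mapping is hypothetical (except 000 -> (-1, -1), which the text
# mentions); the real GDDR7 table is defined by the standard.

PAM3_ENCODE = {
    0b000: (-1, -1),
    0b001: (-1,  0),
    0b010: (-1, +1),
    0b011: ( 0, -1),
    0b100: ( 0, +1),
    0b101: (+1, -1),
    0b110: (+1,  0),
    0b111: (+1, +1),
}
# (0, 0) is the unused ninth combination in this sketch.

PAM3_DECODE = {symbols: bits for bits, symbols in PAM3_ENCODE.items()}

def encode_3bits(value: int) -> tuple:
    """Map a 3-bit value onto a pair of PAM3 symbols."""
    return PAM3_ENCODE[value & 0b111]

print(encode_3bits(0b000))     # (-1, -1)
print(PAM3_DECODE[(+1, +1)])   # 7
```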

Comparing NRZ with PAM3 and PAM4, PAM3's data transfer rate sits between the other two. The reason for using PAM3 here is to achieve higher bandwidth without taking on the additional constraints that enabling PAM4 would require.

That said, it remains to be seen how much power a 256-bit memory subsystem running at the 36 Gbps data rate promised by Samsung will consume. At the time of Cadence's disclosure, the GDDR7 specification itself had not yet been approved and the hardware was still being built (which is where verification tools from the likes of Cadence come into play). But remember that the bandwidth demands of AI, HPC, and graphics are enormous, and more bandwidth is always welcome.

Optimizing Efficiency and Power Consumption

In addition to increasing throughput, GDDR7 is expected to optimize memory efficiency and power consumption in several ways. Specifically, GDDR7 will support four different read clock (RCK) modes so that the read clock runs only when it is needed (a rough sketch of these modes follows the list):

Always On: RCK runs continuously and stops only in sleep mode;

Disabled: RCK does not run;

Start with RCK Start command: The host can start RCK by issuing the RCK Start command before reading data out, and can stop it when needed with the RCK Stop command;

Start with Read: When the DRAM receives any command involving reading out data, RCK automatically begins to operate. It can be stopped using the RCK Stop command.
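As a rough mental model of the four modes (the enum names and helper function are assumptions for illustration, not definitions from the standard):

```python
# Illustrative sketch of the four RCK (read clock) modes described above.

from enum import Enum, auto

class RCKMode(Enum):
    ALWAYS_ON = auto()           # runs continuously, stops only in sleep
    DISABLED = auto()            # never runs
    START_WITH_COMMAND = auto()  # host issues RCK Start / RCK Stop commands
    START_WITH_READ = auto()     # starts automatically on any read command

def rck_running(mode: RCKMode, host_started: bool, read_in_flight: bool,
                asleep: bool) -> bool:
    """Very rough model of whether the read clock toggles in each mode."""
    if mode is RCKMode.ALWAYS_ON:
        return not asleep
    if mode is RCKMode.DISABLED:
        return False
    if mode is RCKMode.START_WITH_COMMAND:
        return host_started        # until an RCK Stop command arrives
    return read_in_flight          # START_WITH_READ
```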

Additionally, the GDDR7 memory subsystem will be able to issue two independent commands in parallel. For instance, Bank X can be refreshed by issuing a per-bank Refresh command on CA[2:0] while a read command for Bank Y is issued simultaneously on CA[4:3]. Moreover, GDDR7 will support a Linear Feedback Shift Register (LFSR) data training mode to determine the appropriate voltage levels and timing for consistent data transfer. In this mode, the host tracks each individual eye (connection), allowing it to apply the proper voltage for better power optimization.
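Since the training pattern comes from an LFSR, a tiny generator sketch shows the idea; the 16-bit width and tap positions below are a common textbook choice, not the polynomial the standard actually specifies:

```python
# Minimal sketch of a linear feedback shift register (LFSR), the kind of
# pseudo-random pattern generator behind GDDR7's data training mode.
# Width, taps, and seed are arbitrary textbook examples, not from JESD239.

def lfsr_stream(seed: int, taps=(16, 14, 13, 11), width: int = 16):
    """Yield a pseudo-random bit sequence from a Fibonacci-style LFSR."""
    mask = (1 << width) - 1
    state = seed & mask
    assert state != 0, "an all-zero state would lock the LFSR"
    while True:
        # XOR the tapped bits to form the feedback bit.
        feedback = 0
        for t in taps:
            feedback ^= (state >> (t - 1)) & 1
        state = ((state << 1) | feedback) & mask
        yield state & 1

gen = lfsr_stream(seed=0xACE1)
print([next(gen) for _ in range(16)])  # first 16 training bits
```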

Finally, GDDR7 will be capable of switching between PAM3 encoding and NRZ encoding based on bandwidth requirements. PAM3 will be used in high-bandwidth scenarios, while in low-bandwidth scenarios, the memory and memory controller can switch to the more energy-efficient NRZ.

Although GDDR7 promises significant performance improvements without a substantial increase in power consumption, the biggest question for the tech audience may be when the new memory will actually be available. At the time of Cadence's disclosure, JEDEC had made no firm commitment and there was no specific timeline for the release of GDDR7. However, considering the work involved and the release of Cadence's verification system, it is not unreasonable to expect GDDR7 to arrive alongside the next-generation GPUs from AMD and NVIDIA. Keep in mind that these two companies tend to release new GPU architectures on a roughly two-year cadence, which means we could start to see GDDR7 appearing in devices in late 2024.

Of course, given the number of AI and HPC companies currently working on products with high bandwidth demands, one or two may release solutions relying on GDDR7 memory sooner. But the widespread adoption of GDDR7 will almost certainly coincide with the mass production of the next-generation graphics cards from AMD and NVIDIA.
