Recent advances in artificial intelligence may appear revolutionary, but JEDEC is keeping an evolutionary approach for Graphics Double Data Rate (GDDR) standards, even as the memory is increasingly used for AI applications.
The JEDEC Solid State Technology Association’s GDDR7 standard continues the generation-to-generation tradition of doubling bandwidth and capacity while keeping a lid on power consumption.
The latest iteration of GDDR offers twice the bandwidth of its predecessor, reaching up to 192 GB/s per device. GDDR7 doubles the number of independent channels, from two in GDDR6 to four.
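As a rough check on that headline number, here is a minimal back-of-the-envelope sketch. It assumes GDDR7 keeps a 32-bit-wide (x32) device interface and runs at a 48 Gb/s-per-pin top rate; neither figure appears above, so treat both as illustrative assumptions rather than spec citations.

```python
# Back-of-the-envelope check of GDDR7's peak per-device bandwidth.
# Both inputs are assumptions for illustration, not figures from the article:
# a 32-bit (x32) device interface and a top per-pin rate of 48 Gb/s.

PINS_PER_DEVICE = 32        # assumed x32 interface width
GBPS_PER_PIN = 48           # assumed peak per-pin data rate in Gb/s

total_gbps = PINS_PER_DEVICE * GBPS_PER_PIN   # 1,536 Gb/s across the device
total_gbytes_per_s = total_gbps / 8           # convert bits to bytes

print(f"Peak device bandwidth: {total_gbytes_per_s:.0f} GB/s")  # 192 GB/s
```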
It’s also the first JEDEC-standard DRAM to use a pulse-amplitude modulation (PAM) interface for high-frequency operation. The PAM3 interface improves the signal-to-noise ratio at high frequencies while enhancing energy efficiency. Because PAM3 encodes three bits across two cycles using three signal levels, rather than one bit per cycle with the two-level non-return-to-zero (NRZ) interface, it also delivers a higher data transmission rate per cycle, resulting in improved performance over traditional NRZ.
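To make the per-cycle gain concrete, the short sketch below compares NRZ’s one bit per two-level symbol with the widely described PAM3 mapping of three bits onto two three-level symbols; it is an illustration of the encoding arithmetic, not code from the standard.

```python
import math

# Compare per-cycle data rates of NRZ and PAM3 signaling.
# NRZ: two voltage levels, 1 bit per symbol.
# PAM3 as described for GDDR7: three levels, 3 bits mapped onto 2 symbols.

nrz_bits_per_cycle = 1.0
pam3_bits_per_cycle = 3 / 2                 # 1.5 bits per cycle
pam3_theoretical_max = math.log2(3)         # ~1.58 bits if fully packed

gain = pam3_bits_per_cycle / nrz_bits_per_cycle - 1
print(f"PAM3 moves {gain:.0%} more data per cycle than NRZ "
      f"({pam3_bits_per_cycle} vs. {nrz_bits_per_cycle} bits per cycle; "
      f"theoretical PAM3 ceiling {pam3_theoretical_max:.2f})")
```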
GDDR7 addresses the industry’s need for reliability, availability, and serviceability (RAS) by incorporating the latest data integrity features, including on-die error-correction coding with real-time reporting, data poison, error check and scrub, and command address parity with command blocking (CAPARBLK).
With companies like Micron Technology selling out of HBM3, GDDR can be a viable alternative for some AI workloads, and AI demands are shaping the evolution of GDDR. One of the reasons GDDR has found uses beyond its initial target market is the bandwidth it supplies for matrix algebra, the math that lets GPUs handle AI workloads and computer-generated special effects.
GPU maker Nvidia wanted faster, more reliable memory, which helped drive the adoption of PAM: transmitting data at such high rates makes channel integrity a bigger concern.
Read my full story for EE Times.
Gary Hilson is a freelance writer with a focus on B2B technology, including information technology, cybersecurity, and semiconductors.