
ISSCC 2026: GDDR7, LPDDR6 and HBM4 Push Memory to New Extremes
When the IEEE International Solid-State Circuits Conference (ISSCC) returns to San Francisco in February 2026, it will once again be the place where chipmakers quietly reveal the technologies that will define gaming GPUs, AI accelerators and mobile devices for years to come. Across five dense days of technical talks and poster sessions, researchers from industry and academia will walk through the circuits at the heart of future products. This year, memory is set to steal the show, with headline papers detailing blisteringly fast GDDR7, energy-efficient LPDDR6 and colossal HBM4 stacks aimed at next-generation AI systems.
ISSCC is sometimes called the chip industry's Olympics for a reason. The conference program collects everything from digital processors and wireless transceivers to image sensors, radar front ends and ultra-low-power analog circuits. Designers present real silicon, explain what worked, what did not, and how they squeezed yet another performance gain out of physics. Alongside well-known tracks such as processors, data converters, frequency synthesizers and circuits for AI, the 2026 edition highlights areas like die-to-die links, optical transceivers, extreme-environment circuitry and compute-in-memory accelerators. Within that crowded agenda sits one of the most closely watched themes of all: high-performance DRAM and memory interfaces.
The memory sessions span traditional DRAM, SRAM and non-volatile memories, but this year the attention naturally gravitates to three acronyms: GDDR7, LPDDR6 and HBM4. These are not abstract labels; they are the standards that will underpin the next generation of graphics cards, gaming consoles, thin-and-light laptops, smartphones and AI servers. The work being presented at ISSCC 2026 is our first detailed glimpse at how fast they can run, how much bandwidth they can deliver and what kind of densities will be commercially realistic.
SK hynix aims for 48 Gbps GDDR7
On the graphics memory front, SK hynix is preparing to showcase a particularly aggressive GDDR7 implementation. Earlier GDDR7 roadmaps typically talked about speeds up to around 40 Gbps per pin and densities up to 24 Gb per chip. The new paper jumps well beyond those early speed targets, describing a GDDR7 device running at 48 Gbps per pin while delivering the full 24 Gb per die. That density translates into 3 GB of capacity from a single memory chip, a meaningful step up for high-end GPUs that rely on a dozen or more packages to populate their memory buses.
To reach those speeds, SK hynix combines a symmetric dual-channel architecture with carefully tuned clock distribution and reliability, availability and serviceability (RAS) features. GDDR7 extends the multi-channel concept of GDDR6, splitting each package into independent channels so that GPUs can fetch data more flexibly and keep their memory controllers busy. The company's paper emphasizes the clock-path optimization and signal-integrity work that become critical when pushing pins all the way to 48 Gbps.
The raw numbers highlight just how big a jump this represents. The first GDDR7 devices shipping today run at around 28 Gbps per pin; at that rate, a single 32-bit chip delivers roughly 112 GB per second of bandwidth. SK hynix's 48 Gbps part, by contrast, would push that to about 192 GB per second from a single device, an uplift of more than seventy percent. Multiply that across a 384-bit memory interface with twelve packages and you are looking at system bandwidth figures that move from already huge into truly extreme territory.
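The arithmetic behind these per-device figures is simple enough to sanity check in a few lines. Here is a minimal sketch (the helper name and Python framing are ours; the pin rates and 32-bit package width come from the discussion above):

```python
def device_bandwidth(pin_rate_gbps: float, width_bits: int = 32) -> float:
    """Peak bandwidth of a single memory package in GB/s.

    GDDR packages expose 32 data pins, each transferring
    pin_rate_gbps gigabits per second.
    """
    return pin_rate_gbps * width_bits / 8  # divide by 8: bits -> bytes

print(device_bandwidth(28))  # shipping GDDR7 at 28 Gbps -> 112.0 GB/s
print(device_bandwidth(48))  # SK hynix ISSCC demo       -> 192.0 GB/s
```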
Of course, that does not mean gamers will be buying GPUs with 48 Gbps memory the moment ISSCC ends; real products typically trail research demos like this by a generation or more. Even so, the paper shows the headroom available to graphics vendors over the coming generations. A successor to the current GeForce RTX 5090, which shipped with first-generation 28 Gbps GDDR7, could combine higher-density dies with these speed targets to deliver both massive frame buffers and enormous bandwidth, leaving last-generation GDDR6X-based boards even further behind.
Putting GDDR7 in context: from GDDR5X to GDDR7
The GDDR roadmap over the last decade helps frame the importance of this leap. GDDR5X pushed graphics memory into the low double-digit Gbps range on boards such as the GeForce GTX 1080 Ti. GDDR6 and GDDR6X then raised typical pin speeds into the mid-teens and low twenties, powering GPUs like the GeForce RTX 2080 Ti and GeForce RTX 4090. There were also ambitious announcements around variants such as GDDR6W, which on paper promised ultra-high speeds and wider packages, though the most extreme numbers never quite reached mainstream shipping boards.
GDDR7 extends that progression again, not only by offering higher raw speed but also by scaling densities, which directly affects frame-buffer size. An example configuration with a 384-bit bus and twelve packages maps roughly as follows:
- GDDR5X-era boards often paired 11 to 12 GB of capacity with system bandwidth in the mid-hundreds of GB per second.
- GDDR6 and GDDR6X GPUs widely adopted 12 to 24 GB buffers, pushing system bandwidth into the 600 to 1000 GB per second bracket.
- GDDR7 on the same twelve-package layout supports 24 GB with 16 Gb dies or 36 GB with 24 Gb dies, while bandwidth climbs into roughly the 1.3 to 2.3 TB per second range at pin speeds between 28 and 48 Gbps (the sketch below works through the arithmetic).
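For readers who want to check those brackets, the capacity and bandwidth figures are easy to reproduce. A minimal sketch, assuming the 384-bit, twelve-package board described above (the function name is illustrative):

```python
def board_specs(die_gbit: int, pin_rate_gbps: float,
                packages: int = 12, bus_bits: int = 384):
    """Return (capacity in GB, bandwidth in TB/s) for a GPU memory subsystem."""
    capacity_gb = packages * die_gbit / 8                 # 8 Gbit per GB
    bandwidth_tbps = bus_bits * pin_rate_gbps / 8 / 1000  # GB/s -> TB/s
    return capacity_gb, bandwidth_tbps

print(board_specs(16, 28))  # 16 Gb dies at 28 Gbps -> (24.0 GB, ~1.34 TB/s)
print(board_specs(24, 48))  # 24 Gb dies at 48 Gbps -> (36.0 GB, ~2.30 TB/s)
```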
These numbers illustrate why memory work shown at ISSCC matters far beyond the lab. Higher bandwidth translates directly into better performance at extreme resolutions, more headroom for ray tracing and AI-driven effects, and more freedom for developers to stream high-resolution assets without stutter. Larger capacities mean bigger frame buffers, which are increasingly necessary for gaming at 4K and beyond and for professional workloads such as 3D rendering and AI inference on consumer GPUs.
LPDDR6: next-generation mobile and laptop memory
Graphics memory is only one side of the story. SK hynix is also bringing a paper on LPDDR6, the low-power DRAM standard behind smartphones, tablets, ultrabooks and now many AI-capable laptops. The company describes an LPDDR6 SDRAM built on its 1c process node that reaches pin speeds of up to 14.4 Gbps at a density of 16 Gb per die. The 1c designation refers to SK hynix's most advanced 10 nm-class DRAM node, enabling tighter geometries, lower operating voltages and better energy efficiency.
LPDDR standards are designed to balance high bandwidth against low power consumption. Higher pin speeds let a system-on-chip feed its CPU, GPU and NPU blocks more effectively without resorting to wider buses, which would increase package complexity and power. At 14.4 Gbps, LPDDR6 can deliver substantial bandwidth even over the relatively narrow interfaces favored in mobile and ultraportable designs, paving the way for faster AI camera features, smoother high-refresh-rate gaming and better multitasking on phones and laptops.
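As a rough illustration of what that rate means at the system level, consider a phone-class SoC. The 64-bit total interface width below is our assumption for a typical flagship design, not a figure from the paper; the 14.4 Gbps pin rate is SK hynix's number:

```python
# Peak-bandwidth estimate for a hypothetical 64-bit LPDDR6 subsystem.
pin_rate_gbps = 14.4   # per pin, from SK hynix's ISSCC paper
bus_bits = 64          # ASSUMPTION: typical flagship-phone interface width

peak_gb_per_s = pin_rate_gbps * bus_bits / 8
print(f"LPDDR6:  {peak_gb_per_s:.1f} GB/s")          # 115.2 GB/s

# For comparison, LPDDR5X commonly runs at 8.533 Gbps per pin.
print(f"LPDDR5X: {8.533 * bus_bits / 8:.1f} GB/s")   # ~68.3 GB/s
```

On the same bus width, that is roughly a 1.7x jump in peak bandwidth without widening the interface, which is exactly the trade each LPDDR generation aims for.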
Samsung is not standing still either. The company will present its own LPDDR6 research at ISSCC 2026, showcasing devices rated at 12.8 Gbps with the same 16 Gb density. While slightly behind SK hynix's peak data rate in these specific papers, that still represents a clear evolution beyond current LPDDR5X speeds in the 8.5 to 9.6 Gbps range. The presence of the two largest DRAM makers in the LPDDR6 sessions underscores how central this memory type has become, not just for smartphones but for thin AI PCs and always-connected devices that combine long battery life with heavy on-device processing.
Samsung HBM4: 36 GB stacks for AI accelerators
The third memory pillar at ISSCC 2026 is high-bandwidth memory (HBM), the stacked DRAM technology that sits beside leading-edge AI and HPC processors. Samsung will outline its HBM4 work in a paper describing stacks that reach 36 GB of capacity using a 12-high configuration. Each stack can supply up to around 3.3 TB per second of bandwidth, a remarkable figure when you consider that an entire high-end graphics card of the GDDR6 era delivered less than a third of that.
HBM achieves this by vertically stacking DRAM dies connected with through-silicon vias (TSVs) and placing the stack on a silicon interposer next to the logic die. Instead of pushing each pin to extreme speeds, HBM uses an enormous number of relatively low-speed connections in parallel, which both raises bandwidth and improves energy efficiency per bit transferred. With HBM4, Samsung is targeting the needs of next-generation AI accelerators such as NVIDIA's Vera Rubin family, where growing model and batch sizes demand unprecedented memory performance.
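That wide-and-slow trade-off is easy to quantify. Assuming the 2048-bit per-stack interface JEDEC defined for HBM4 (the paper's exact I/O configuration is not spelled out here), a 3.3 TB/s stack implies a per-pin rate of only about 13 Gbps, as this back-of-the-envelope sketch shows:

```python
# Back-of-the-envelope: wide-and-slow HBM4 vs narrow-and-fast GDDR7.
stack_tb_per_s = 3.3    # per-stack bandwidth from Samsung's paper
bus_bits = 2048         # ASSUMPTION: JEDEC HBM4 per-stack interface width

pin_rate_gbps = stack_tb_per_s * 1000 * 8 / bus_bits
print(f"Implied HBM4 pin rate: ~{pin_rate_gbps:.1f} Gbps")  # ~12.9 Gbps

# SK hynix's GDDR7 demo drives each pin ~3.7x faster, but across a bus
# that is 64x narrower (32 bits vs 2048 bits per device).
print(f"GDDR7 pin-rate ratio: {48 / pin_rate_gbps:.1f}x")
```

It also follows that each die in the 12-high, 36 GB stack holds 24 Gb, the same density class as the GDDR7 die described earlier.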
Moving to 36 GB per stack means a multi-stack accelerator can realistically carry well over 100 GB of extremely fast memory while achieving aggregate bandwidth measured in tens of terabytes per second; eight such stacks, for example, would total 288 GB and roughly 26 TB per second. For large language models, recommendation engines and high-resolution generative workloads, that bandwidth is as crucial as raw compute throughput. The ISSCC paper therefore offers a preview of how AI infrastructure will evolve beyond current HBM3- and HBM3E-based designs.
Beyond memory: how ISSCC 2026 ties it all together
Although the headline-grabbing numbers come from the DRAM papers, the ISSCC 2026 program is a reminder that memory does not exist in isolation. Sessions on die-to-die and high-speed electrical transceivers complement the HBM work by describing how chiplets talk to each other. Next-generation optical transceivers promise to move that data off package and between racks at equally impressive speeds. The "circuits for AI" and "AI for circuits" sessions show how machine learning is being used both as a workload and as a tool to optimize chip design.
There are also tracks devoted to neural and biomedical interfaces, interconnects based on light, circuits for extreme environments, and energy-harvesting and charging solutions. These research areas may seem distant from GDDR7 or HBM4 at first glance, but they share a common thread: pushing the limits of what is possible in silicon while keeping power and reliability under control. Together, they form the ecosystem that future GPUs, CPUs, AI accelerators, wearables and edge devices will rely on.
What this means for gamers, creators and AI workloads
For gamers and creators, the upshot of the memory breakthroughs outlined at ISSCC 2026 is clear. Faster and denser GDDR7 means smoother frame rates at ultra-high resolutions, richer textures, heavier ray-tracing workloads and more room for video-editing timelines and 3D scenes. LPDDR6 will quietly enable more responsive phones and laptops that can juggle local AI processing, high-refresh-rate displays and demanding apps without draining the battery. In the data center, HBM4 will help AI accelerators train and serve increasingly complex models without being starved of memory bandwidth.
ISSCC rarely makes front-page news outside technical circles, yet the research it showcases often becomes reality in shipping products within a few short years. The GDDR7, LPDDR6 and HBM4 work being presented in February 2026 is a strong signal that the next wave of GPUs, AI accelerators and mobile systems-on-chip will have far more capable memory subsystems than today's flagship hardware. For anyone following the evolution of gaming, professional graphics or AI computing, ISSCC 2026 is shaping up to be a landmark edition.