AIBD · Wednesday, 25 March 2026
Priya Kapoor
Technical Architecture Correspondent

3D Memory Stacks Now Shrink AI Data Travel from Centimeters to Nanometers

Belgian startup's vertical architecture just validated with TSMC, attacking AI's biggest bottleneck

3 min read
3d-memory · ai-architecture · memory-wall · semiconductor · vertical-compute · tsmc · imec

Something quietly extraordinary happened three weeks ago in Belgium. While everyone obsesses over parameter counts and training FLOPs, a team of imec veterans proved they could stack memory vertically on compute logic within a single 300mm wafer process.

The announcement from Vertical Compute feels almost mundane: another European deeptech funding round, another €57 million raised. But the technical validation underneath represents a fundamental shift in how we think about AI's primary constraint. Not compute. Memory.

The Skyscraper Architecture

Sébastien Couet spent over a decade at imec developing next-generation memory roadmaps. His invention—now Vertical Compute's core technology—takes the 2D memory layout that's dominated semiconductors since the 1970s and literally turns it on its side.

"Being able to store data vertically is like using a skyscraper instead of a single-story home," Couet explains. The analogy isn't just cute; it's architecturally precise. Traditional memory sits adjacent to compute cores, requiring data to travel centimeters through interconnects. Vertical Integrated Memory stacks storage directly above logic on the same die. Data movement: nanometers instead of centimeters.

The physics matters here, and so does every millimeter. When you're running trillion-parameter models that need to access weights thousands of times per second, distance becomes latency becomes energy becomes cost.

TSMC's Silicon Validation

Here's what makes this more than research theater: TSMC manufactured their first test chip. The world's most advanced foundry doesn't partner lightly with unknowns. When Taiwan Semi agrees to tape out your design, it means your process actually works at industrial scale.

TSMC's 3D integration roadmap has been public for years—their CoWoS and SoIC platforms already enable heterogeneous chiplet integration. But Vertical Compute's approach goes further, integrating memory not just adjacent to compute but stacked directly on top within the same manufacturing flow.

This isn't theoretical anymore. It's validated silicon.

The Memory Wall Crisis

The timing isn't coincidental. AI systems have hit what researchers call the "memory wall"—a fundamental bottleneck where memory bandwidth can't keep pace with compute demand. Over the past 20 years, server processing power scaled roughly 3× every two years while DRAM bandwidth grew only about 1.6× per two-year period.
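Compounded over two decades, those per-period factors diverge dramatically. A quick back-of-the-envelope calculation, using only the 3× and 1.6× figures above as inputs:

```python
# Compound the per-two-year scaling factors over 20 years (ten two-year periods).
periods = 20 // 2

compute_growth = 3.0 ** periods    # server processing power: 3x per period
bandwidth_growth = 1.6 ** periods  # DRAM bandwidth: 1.6x per period

gap = compute_growth / bandwidth_growth
print(f"compute grew {compute_growth:,.0f}x, bandwidth {bandwidth_growth:,.0f}x")
print(f"compute outpaced memory bandwidth by roughly {gap:,.0f}x")
```

Compute grows by a factor of about 59,000 while bandwidth manages only about 110×, a divergence of more than 500-fold. That gap is the memory wall.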

Consider the math on today's frontier models. GPT-4 scale models require roughly 2TB of memory for inference. Current HBM3e stacks provide 24GB each. That means 80+ memory stacks per system, connected through complex packaging and high-speed interconnects. The result: enormous power consumption for data movement, thermal challenges, and system complexity.
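The stack count follows directly from the two figures in the paragraph, 2 TB of inference memory and 24 GB per HBM3e stack; a minimal sketch:

```python
import math

model_memory_gb = 2 * 1024  # ~2 TB for inference on a GPT-4-scale model (article's figure)
hbm3e_stack_gb = 24         # capacity of one HBM3e stack (article's figure)

stacks = math.ceil(model_memory_gb / hbm3e_stack_gb)
print(f"{stacks} HBM3e stacks needed")
```

The result, 86 stacks, is where the "80+ memory stacks per system" comes from, and every one of them adds packaging, interconnect power, and thermal load.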

Vertical Compute's architecture collapses this complexity. Memory sits directly above compute cores; data travels nanometers through vertical interconnects rather than millimeters through package routing. The company claims 80% energy savings compared to traditional architectures.
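The 80% figure is the company's claim, but the direction is easy to sanity-check with a toy model: on-chip interconnect energy is roughly proportional to wire length, since wire capacitance per unit length is approximately constant. The capacitance and voltage values below are textbook ballparks chosen purely for illustration, not Vertical Compute's numbers:

```python
# Toy model: switching energy E = 0.5 * C * V^2, with wire capacitance
# proportional to length. 0.2 pF/mm and 1.0 V are illustrative assumptions.
C_PER_MM_PF = 0.2  # assumed on-chip wire capacitance, picofarads per mm
VDD = 1.0          # assumed supply voltage, volts

def energy_per_bit_pj(length_mm: float) -> float:
    """Energy to drive one bit across a wire of the given length, in picojoules."""
    c_farads = C_PER_MM_PF * length_mm * 1e-12
    return 0.5 * c_farads * VDD ** 2 * 1e12  # joules -> picojoules

horizontal = energy_per_bit_pj(10.0)  # ~1 cm trip to adjacent memory
vertical = energy_per_bit_pj(1e-4)    # ~100 nm vertical hop to stacked memory
print(f"wire-energy ratio: {horizontal / vertical:,.0f}x")
```

The wire-dominated component shrinks by orders of magnitude; the system-level saving is far smaller than this ratio because data movement is only one part of the total energy budget.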

But energy isn't the only win. Density increases by orders of magnitude—more memory in less space. Latency drops to near-SRAM levels while maintaining DRAM-like capacity. And because it's delivered through chiplets, existing processor architectures can integrate the technology without wholesale redesign.

Europe's Semiconductor Ambitions

The funding round tells its own story about European semiconductor strategy. Quantonation led, joined by government-backed funds including Flanders Future Techfund and multiple Belgian regional investors. This isn't venture capital chasing quick returns; it's strategic investment in semiconductor sovereignty.

"We want to recruit the very best from all over Europe, and finally put Europe at the forefront in terms of tech," says CEO Sylvain Dubois, formerly of Google's silicon division.

The team reflects this ambition: Couet from imec's magnetic memory program, Dubois from hyperscaler silicon development, backed by Europe's most sophisticated deeptech investors. They're building not just a company but a European answer to Asia's memory dominance.

The Six-Month Horizon

Vertical Compute expects first revenue between 2027 and 2028. That timeline puts them squarely in the path of the next AI scaling wave. Current models already strain existing memory architectures; the five-trillion-parameter systems expected by 2027 will require fundamentally different approaches.

But the broader implications extend beyond AI. Every data-intensive application—from autonomous vehicles to real-time analytics—faces similar memory constraints. Vertical integration doesn't just solve AI's bottleneck; it opens new architectural possibilities across computing.

The von Neumann architecture has dominated computing for 80 years, separating memory from processing. Vertical Compute's approach suggests we're entering a post-von Neumann era where compute and storage merge into unified systems.

That's not just an engineering optimization. It's architectural evolution.
