Summary

Computational storage (also known as in-situ processing or in-storage compute) brings high-performance compute to storage. It involves putting a computing system inside or close to a solid-state drive (SSD): a central processing unit (CPU), memory (DRAM) and input/output (I/O) that perform tasks on behalf of the system’s main processor. It is part of a broader research path in computing aimed at reducing the latency and energy cost of shuffling data between memory and the processor, most notably alongside Neuromorphic Computing.
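
To make the architectural idea concrete, below is a minimal, hypothetical sketch in Python. The ComputationalDrive class and its methods are illustrative stand-ins, not a real device API: instead of pulling every block across the storage-to-host link and filtering on the main CPU, the host pushes the filter down to the drive, and only the matching results travel back.

    # Illustrative only: ComputationalDrive is a toy stand-in, not a real API.
    class ComputationalDrive:
        """Toy model of an SSD with an embedded CPU and DRAM that can run filters."""

        def __init__(self, blocks):
            self.blocks = blocks  # data at rest on the drive

        def read_all(self):
            # Conventional path: every block crosses the storage-to-host link.
            return list(self.blocks)

        def scan(self, predicate):
            # Computational-storage path: the filter runs next to the data,
            # so only matching blocks cross the link back to the host.
            return [b for b in self.blocks if predicate(b)]

    drive = ComputationalDrive(blocks=range(1_000_000))

    # Host-side filtering: move 1,000,000 blocks, then filter on the main CPU.
    host_result = [b for b in drive.read_all() if b % 1000 == 0]

    # In-storage filtering: move only the ~1,000 matching blocks.
    device_result = drive.scan(lambda b: b % 1000 == 0)

    assert host_result == device_result  # same answer, far less data movement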

Viability (5)

Computational storage has been around since 2010 and has been known by names like scale-in, in-situ processing, compute to data, and in-data processing. In 2018, the Storage Networking Industry Association (SNIA) brought the ecosystem together around a Computational Storage Special Interest Group (SIG), now with 46 participants and 232 members, to agree on the term computational storage and to develop standards. Few real R&D challenges remain; the work is mostly market education and standardisation. The catalyst for adoption will come later in 2022 with a full standard from the SNIA consortium, which has broad industry participation. There is significant interest on both the supply and demand sides, and we should expect growth rates of 15%+ per year. Growth could slip to the early part of 2025-2030 if IT spend materially declines due to a drawn-out global recession.

Drivers (5)

Total global data storage is projected to exceed 250 zettabytes by 2025. With many applications moving from the cloud to the edge, NVMe and PCIe Gen 3/4/5 are the transport technologies helping shift the data, but at a cost. As datasets get larger and are used by more applications, the storage-to-compute bottleneck and network bandwidth become some of the most important considerations. By some estimates, 62 percent of the energy consumed in computing is spent moving data: from storage to DRAM, from DRAM to the CPU, or from the CPU to I/O devices. This is already a major concern for datacentres and content delivery networks, and increasingly for small edge devices like security cameras. For example, at 30 frames per second, a single 1080p camera can generate 2GB per hour, or around 17.5TB per year. Across 1 billion cameras, that’s over 13 zettabytes per year, or around 8,000x more data than gets posted to Facebook each year.
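
Those camera figures are easy to verify with back-of-the-envelope arithmetic; a quick sketch is below, assuming continuous 24/7 recording and decimal units (1 ZB = 10^9 TB). The straight multiplication comes out at roughly 17.5 zettabytes, comfortably above the 13-zettabyte figure quoted above.

    # Back-of-the-envelope check of the camera numbers above.
    # Assumptions: 2 GB/hour per 1080p camera, continuous recording,
    # decimal units (1 TB = 1,000 GB; 1 ZB = 1e9 TB).
    GB_PER_HOUR = 2
    HOURS_PER_YEAR = 24 * 365  # 8,760

    tb_per_camera_year = GB_PER_HOUR * HOURS_PER_YEAR / 1_000
    print(f"Per camera: {tb_per_camera_year:.1f} TB/year")  # ~17.5 TB

    CAMERAS = 1_000_000_000
    zb_per_year = tb_per_camera_year * CAMERAS / 1e9
    print(f"Fleet total: {zb_per_year:.1f} ZB/year")  # ~17.5 ZB, i.e. over 13 ZB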

Novelty (3)

Computational storage competes with a range of technologies designed to increase performance and reduce power consumption. It can be seen as an offloading technology: a method to reduce the demands on the CPU, network and storage devices. It’s not quite as disruptive as Optical Computing in terms of performance or Nanomechanical Computing in terms of efficiency, but it’s also more incremental than Carbon Nanotube Field-Effect Transistor or Persistent Memory. Computational storage is a worthwhile improvement to computing architecture that can be complementary to new materials and transistor designs, but it doesn’t offer a new computing paradigm that is orders of magnitude better along some dimension.

Diffusion (4)

Because computational storage is an architectural change, the semiconductor supply chain doesn’t have to change a great deal; the same materials, components and tooling can mostly be repurposed. The main restraint in the 2010s, when this was known as scale-in, was space and power consumption: first-generation FPGAs were too big and used too much power. Modern FPGA-based products from Xilinx, or even CPU-based products from ARM, can slot into server racks in an almost plug-and-play way today. Standardisation is the last hurdle, with SNIA recently releasing a 0.8 specification and a full specification likely coming in 2022. Computational storage also plays into the data-centric computing and data mesh narratives that are becoming du jour in the enterprise.

Impact (2)

Computational storage is an incremental innovation delivering lower latencies and lower power consumption, and it will certainly have an important role to play in the datacentre and at the low-power edge, where space and network capacity are a challenge. Long-term, memory and processing will be co-located both on the chip (Neuromorphic Computing) and at the system level, because it is more efficient. Computational storage is part of the next 15 years of CMOS-based Moore’s Law and architecturally sets the foundation for longer-term performance improvements.

Sources

  1. Computational Storage: A New Way to Boost Performance, https://www.arm.com/blogs/blueprint/computational-storage