Photonic computing, exemplified by Lightmatter, Luminous, and Lightelligence, leverages the wave properties of light for information processing. The approach exploits wavelength division multiplexing (WDM) to achieve parallelism and energy efficiency in matrix operations, the workload that dominates transformer-based frontier models, and uses silicon photonics to integrate optical components on-chip for high-speed, low-latency computation. Unlike digital systems, which scale linearly with transistor count and perform matrix-vector multiplication in O(n^2) time, photonic processors can in principle complete the same operation in O(1) time, with the result emerging in a single pass of light through the circuit. This type of computation can deliver extreme bandwidth, exceeding 100 Tbps, with energy efficiencies potentially reaching sub-femtojoule per operation. For context, Nvidia Blackwell delivers 1.8 Tbps of memory bandwidth and 50 TOPS/W (INT8), so photonic chips could in theory offer roughly 55x the bandwidth and 20x the energy efficiency of Blackwell.
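To make the O(1) claim concrete, here is a toy numerical model of a WDM-style optical matrix-vector multiply: the input vector is encoded as power on wavelength channels, the weights as transmission coefficients, and each output element is summed on a photodetector in a single pass. The function names and the simple noise term are illustrative assumptions, not any vendor's actual architecture.

```python
import numpy as np

# Toy numerical model of an analog optical matrix-vector multiply (MVM).
# The input vector is encoded as optical power on n wavelength channels (WDM),
# the weight matrix as per-channel transmission coefficients, and each output
# element is the total power summed on one photodetector.

def optical_mvm(weights, x, rel_noise=0.01, rng=None):
    """Approximate y = W @ x as a single optical pass.

    weights   : (m, n) transmission coefficients in [0, 1]
    x         : (n,) non-negative input powers
    rel_noise : relative detector-noise level (illustrative assumption)
    """
    rng = rng or np.random.default_rng()
    y_ideal = weights @ x                              # computed as the light propagates
    noise = rel_noise * np.abs(y_ideal) * rng.standard_normal(y_ideal.shape)
    return y_ideal + noise                             # analog readout is approximate

rng = np.random.default_rng(0)
W = rng.uniform(0, 1, size=(4, 8))   # 4 photodetectors x 8 wavelength channels
x = rng.uniform(0, 1, size=8)

print("optical:", optical_mvm(W, x, rng=rng))
print("exact:  ", W @ x)
```

The point of the sketch is that the entire product is produced in one propagation delay regardless of n; the trade-off is analog precision, which is why the noise term is included.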
However, the technology faces significant challenges. Electro-optical and opto-electronic conversions incur substantial energy overhead, potentially negating efficiency gains in certain scenarios: current estimates put these conversions at 10-100 femtojoules per bit, which can swamp the sub-femtojoule-per-operation efficiency of the photonic computation itself. Photonic integrated circuits also struggle with thermal stability, requiring temperature control within ±0.1°C to maintain consistent operation, and signal integrity in complex photonic circuits remains problematic due to crosstalk and non-linear effects, with signal-to-noise ratios degrading by up to 20 dB in large-scale designs. Manufacturing scalability presents another hurdle. Photonic chips require specialized fabrication processes incompatible with standard CMOS facilities, necessitating significant investment in new manufacturing infrastructure. While cost reductions are expected with scale, price parity with electronic counterparts remains a distant goal: current photonic chip production costs are estimated at 10-100 times higher per unit area than advanced CMOS processes.
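To see why the conversion overhead matters, a back-of-envelope model helps: for an n x n matrix-vector multiply, only 2n values cross the electro-optic boundary, so the 10-100 fJ/bit conversion cost amortizes over n^2 multiply-accumulates. The specific numbers below (8-bit conversions, 50 fJ/bit, 1 fJ per MAC in the core) are illustrative assumptions drawn from the ranges above.

```python
# Back-of-envelope model of how E/O and O/E conversion energy dilutes the
# core photonic efficiency. All figures are rough assumptions: 8 bits
# converted per vector element, one conversion at input and one at output.

def effective_energy_per_op(n,
                            e_core_fj=1.0,           # photonic core, fJ per MAC
                            e_conv_fj_per_bit=50.0,  # mid-range of 10-100 fJ/bit
                            bits=8):
    """Effective fJ per MAC for an n x n matrix-vector multiply.

    The core performs n*n MACs, but only 2*n vector elements cross the
    electro-optic boundary, so conversion cost amortizes as n grows.
    """
    macs = n * n
    conversion_fj = 2 * n * bits * e_conv_fj_per_bit   # input E/O + output O/E
    return e_core_fj + conversion_fj / macs

for n in (64, 512, 4096):
    print(f"n={n:5d}: ~{effective_energy_per_op(n):.2f} fJ per MAC")
```

Under these assumptions, small matrices are dominated by conversion cost (roughly 13 fJ per MAC at n=64), while very large ones approach the core's 1 fJ figure, which is why the overhead matters most for workloads that cannot batch into large matrix operations.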
Finally, integration with existing computing ecosystems poses additional complications. Interfacing with electronic memory systems and CPUs introduces latency and energy costs, potentially offsetting the benefits of photonic computation in practical applications. The latency introduced by these interfaces can range from tens to hundreds of nanoseconds, which can be significant for certain time-critical AI workloads.
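A rough latency budget illustrates when that interface overhead dominates. The sketch below assumes a hypothetical photonic accelerator sustaining 1,000 TOPS and a 100 ns electronic interface hop; both are placeholder figures for illustration, not measured numbers.

```python
# Rough latency-budget sketch: when does the electronic/photonic interface
# latency dominate the optical compute time? Throughput and per-hop interface
# latency are placeholder assumptions, not measurements.

def layer_time_ns(macs, throughput_tops=1000.0):
    """Time (ns) to execute `macs` multiply-accumulates at a given TOPS."""
    return macs / (throughput_tops * 1e12) * 1e9

def interface_fraction(macs, interface_ns=100.0):
    """Fraction of total layer time spent crossing the E/O interface."""
    compute = layer_time_ns(macs)
    return interface_ns / (interface_ns + compute)

for d in (1024, 4096, 16384):        # hidden dimension of a square weight matrix
    macs = d * d
    print(f"d={d:6d}: compute {layer_time_ns(macs):8.2f} ns, "
          f"interface share {interface_fraction(macs):.0%}")
```

Under these assumptions, a 1024-wide layer spends nearly all of its time in the interface, while a 16k-wide layer amortizes it to a minority share, which is why the overhead bites hardest on latency-sensitive, small-batch inference.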
Worth watching: