Wireless inter-rack communication is emerging as a potentially transformative technology for data centers, particularly in the context of energy-intensive AI workloads. Traditional copper-based connections, Silicon Photonics-based Active Optical Cables (AOCs), and Co-packaged Optics (CPO) have all made significant strides for these workloads, but wireless inter-rack links offer a distinct approach that could address some of the specific challenges posed by AI computations.
This technology uses millimeter-wave (mm-Wave) frequencies, typically in the 60 GHz or 80 GHz bands, and some research explores the terahertz (THz) spectrum for short-range, high-bandwidth links. Current prototypes demonstrate data rates exceeding 100 Gbps per link, achieved through advanced modulation schemes such as 256-Quadrature Amplitude Modulation (256-QAM) combined with Multiple-Input Multiple-Output (MIMO) techniques.

For AI workloads, which often involve massive data transfers between compute nodes during training and inference, this bandwidth potential could prove advantageous. The ability to establish direct, low-latency links between racks without intermediate switches could reduce communication bottlenecks in distributed AI training scenarios.

The flexibility of wireless connections also aligns well with the dynamic nature of AI workloads. As AI models grow in size and complexity, data centers need to adapt quickly to changing computational requirements. Wireless inter-rack communication allows more fluid redistribution of computing resources without the constraints of physical cabling, potentially enabling more efficient utilization of AI accelerators and specialized hardware.

Energy efficiency, a critical concern for AI workloads, presents both opportunities and challenges for wireless technology. While eliminating cabling can reduce overall system power consumption, the energy required for wireless signal processing and transmission must be carefully optimized to compete with the efficiency of advanced wired solutions like Silicon Photonics-based AOCs and CPO.
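To see how the quoted ">100 Gbps per link" figure can arise from 256-QAM plus MIMO, the following back-of-the-envelope sketch multiplies out an assumed channel bandwidth, the 8 bits/symbol that 256-QAM carries, a coding rate, and a number of spatial streams. Every parameter value here is an illustrative assumption (loosely inspired by 60 GHz channelization), not a measured figure from any specific prototype.

```python
# Illustrative raw link-rate estimate for a 60 GHz mm-Wave link.
# All parameter values are assumptions chosen for the example.

channel_bandwidth_hz = 2.16e9  # assumed 60 GHz channel width
bits_per_symbol = 8            # 256-QAM carries log2(256) = 8 bits/symbol
coding_rate = 7 / 8            # assumed forward-error-correction overhead
mimo_streams = 8               # assumed number of MIMO spatial streams

# Approximate the symbol rate by the channel bandwidth (Nyquist signalling).
symbol_rate = channel_bandwidth_hz

raw_rate_bps = symbol_rate * bits_per_symbol * coding_rate * mimo_streams
print(f"Raw link rate: {raw_rate_bps / 1e9:.1f} Gbps")  # ~121 Gbps
```

Under these assumptions the product comes to roughly 121 Gbps, consistent with the per-link rates the prototypes report; real systems would lose some of this to guard intervals, pilots, and protocol overhead.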
However, wireless inter-rack communication has many challenges to overcome. Beam steering accuracy becomes crucial when dealing with the rapid, burst-like traffic patterns typical of AI training workloads. Multi-path interference mitigation takes on added importance in dense server environments packed with AI accelerators and cooling systems. This is a challenging environment to simulate and design for, especially given the importance of uptime and reliability in data centers. Ensuring reliable connectivity in these dynamic environments, where rack configurations may change frequently to accommodate evolving AI hardware, remains a significant hurdle. Compared with Silicon Photonics-based AOCs and CPO, wireless inter-rack communication currently lags in overall bandwidth capacity and energy efficiency, both critical factors for AI workloads. While AOCs and CPO can achieve multi-terabit-per-second speeds with relatively low power consumption, wireless solutions are still striving to reach the performance levels necessary for the most demanding AI applications.
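One reason beam steering and alignment matter so much at these frequencies is the steep free-space path loss: even over a few meters, a 60 GHz carrier loses tens of dB, so links depend on high-gain, precisely pointed antennas. The sketch below applies the standard Friis free-space path-loss formula; the distance, transmit power, and antenna gains are assumed, illustrative values rather than figures from any deployed system.

```python
import math

# Free-space path loss and a simple link budget for a short
# 60 GHz inter-rack hop. Parameter values are illustrative assumptions.

c = 3e8            # speed of light, m/s
freq_hz = 60e9     # 60 GHz carrier
distance_m = 3.0   # assumed rack-to-rack distance

# Friis free-space path loss: FSPL(dB) = 20*log10(4*pi*d*f / c)
fspl_db = 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

tx_power_dbm = 10.0      # assumed transmit power
antenna_gain_dbi = 25.0  # assumed gain of each steerable phased array

rx_power_dbm = tx_power_dbm + 2 * antenna_gain_dbi - fspl_db
print(f"FSPL: {fspl_db:.1f} dB")                # ~77.5 dB over 3 m
print(f"Received power: {rx_power_dbm:.1f} dBm")
```

With these numbers, roughly 50 dBi of combined antenna gain is needed just to offset the path loss over a 3 m hop, which is why a mispointed beam, or a reflection off dense rack hardware, can quickly push a link below its required signal-to-noise ratio.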
Wireless technology could, however, be combined with existing networking solutions in AI-focused data centers, providing an additional layer of connectivity that offers redundancy and flexibility in network topologies. This hybrid approach might be particularly valuable for handling the varied communication patterns of different AI workloads, from the high-volume data transfers of training to the lower-latency requirements of inference tasks. As data centers continue to evolve to meet the computational demands of increasingly sophisticated AI models, wireless inter-rack communication may find specialized applications. For instance, it could be particularly useful in edge computing scenarios where AI inference is performed in smaller, more distributed data centers that require flexible and rapidly deployable networking solutions. While widespread adoption of wireless inter-rack communication in AI-focused data centers is likely a longer-term prospect, with significant implementation not expected until 2030 or beyond, the technology's development trajectory aligns well with the projected growth and evolution of AI workloads. As such, it remains a technology worth monitoring closely for data center operators, but certainly not one for the next 3-5 year build-out.