The extended time horizons of exotic computing solutions (e.g. practical optical computing, reversible computing, computing with the native physics of materials, or computing with thermodynamics itself) are problematic for venture investment. Yet ventures targeting near-term computing products based on the same CMOS and memory technology in mainstream use today will struggle to differentiate themselves meaningfully from the products of large and powerful incumbents.

Both are compromised from a VC perspective: is there a middle way?

<aside> 💡

Geoff Hinton coined the term Mortal Computing in 2022 to describe a software program evolved to run on one specific instance of a hardware device. He suggests such a device would be composed of extremely efficient but not entirely reliable computing elements, and that this approach might be the only practical way to realize computing devices based on revolutionary device technologies. See his presentation on YouTube.

</aside>

This note explores the concept of Mortal Computing. It finds that while the development of novel and unreliable computing technology is certainly one application, the concept is also broadly applicable to a spectrum of near-term and long-term computing innovation. It is therefore a candidate for new deeptech ventures with a wide range of time horizons, strong differentiation against the prevailing computing paradigm, and long-term potential.


Immortal Computing

To properly define Mortal Computing, it is first necessary to define its ancestor, Immortal Computing, which is shackled by the following two constraints (of which the latter is by far the stronger one).

  1. Deterministic: A given program must produce identical results with Deterministic Precision and Predictable Performance when run multiple times on the same device.
  2. Fungible: A given program satisfies the Deterministic Precision and Predictable Performance conditions when run on multiple devices of the same type, and satisfies just the Deterministic Precision condition when run on different devices with a compatible instruction set. (Both conditions are illustrated in the sketch after this list.)
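
As a rough illustration of these two constraints (a minimal sketch, not from the original definitions, using a hypothetical workload), determinism can be checked by fingerprinting a program's output across repeated runs, and fungibility by comparing that fingerprint across devices. The Predictable Performance condition concerns timing and is not captured by the fingerprint.

```python
import hashlib

def run_program(seed: int) -> bytes:
    # Stand-in for "a given program": a fixed computation on fixed inputs
    # (hypothetical workload, purely for illustration).
    total = 0
    for i in range(1_000_000):
        total = (total * 31 + i + seed) % (2**61 - 1)
    return total.to_bytes(8, "big")

def output_fingerprint(seed: int) -> str:
    """Hash the result so bit-exact equality can be compared across runs and devices."""
    return hashlib.sha256(run_program(seed)).hexdigest()

# Deterministic: repeated runs on the same device must yield identical fingerprints.
assert output_fingerprint(42) == output_fingerprint(42)

# Fungible: the same fingerprint must also be reproduced on any other device of the
# same type (and on any device with a compatible instruction set), e.g. by comparing
# the value printed here against the value printed on another machine.
print(output_fingerprint(42))
```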

It is immediately apparent that the immortal component is the program, not the hardware: all physical things have finite lifetimes, but software that can be stored indefinitely and run anywhere is truly immortal.

The core design mechanisms of Immortal Computers are:

  1. Zero Defects: The part is offered for sale on the basis of having no user-visible hardware defects, achieved via thorough testing and redundant hardware. The cost is yield loss.
  2. Design Margin: Manufacturing and environmental variability is addressed via careful margining at design time, plus assumptions/limits on the range of variability. The cost is performance loss.
  3. Static Compensation: Variability is further addressed by simple static settings at deployment, such as running fast, energy-hungry parts at a lower voltage than slow, lower-energy parts. The cost is performance loss.
  4. Simple Automatic Adaptation: Basic control loops dynamically adjust just one or two system variables to keep operation within spec. Existing silicon devices employing techniques such as the dynamic voltage and frequency scaling that keeps mobile processors or GPUs from overheating are the primary example (see the sketch after this list). The cost is some loss of performance determinism.
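
As a concrete illustration of mechanism 4, here is a minimal sketch of a single-variable control loop in the style of dynamic voltage and frequency scaling. It assumes a toy thermal model in place of real sensor and frequency-control interfaces; all names and constants are hypothetical.

```python
import random

# Hypothetical spec-sheet limits; real loops would read sensors and program
# clock/voltage registers through the platform's power-management driver.
TEMP_LIMIT_C = 95.0
F_MIN_MHZ, F_MAX_MHZ = 800, 3000
STEP_MHZ = 100

def read_die_temperature_c(freq_mhz: int) -> float:
    """Toy thermal model: temperature rises with clock frequency plus ambient noise."""
    return 40.0 + 0.02 * freq_mhz + random.uniform(-3.0, 3.0)

def dvfs_step(freq_mhz: int, temp_c: float) -> int:
    """One iteration of the control loop: adjust a single variable (frequency)
    to keep a single observation (temperature) within spec."""
    if temp_c > TEMP_LIMIT_C:
        return max(F_MIN_MHZ, freq_mhz - STEP_MHZ)   # throttle: trade speed for safety
    if temp_c < TEMP_LIMIT_C - 10.0:
        return min(F_MAX_MHZ, freq_mhz + STEP_MHZ)   # headroom: claw performance back
    return freq_mhz

freq = F_MAX_MHZ
for _ in range(200):                                  # stand-in for an always-on firmware loop
    freq = dvfs_step(freq, read_die_temperature_c(freq))
print(f"settled clock: {freq} MHz")                   # varies run to run
```

Because the settled frequency depends on noisy temperature readings, the delivered performance varies from run to run; that variation is exactly the loss of performance determinism this mechanism trades away.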

A key aspect of these solutions is that they outperform the spec sheet under almost all conditions; their intrinsic performance is therefore quite variable, but the variation is to the upside.

Recent developments in computing related to rising power densities and thermally limited devices, advanced packaging, wafer-scale integration and warehouse scale computing are already compromising the standard Immortal paradigm. So, there are already instances of what we could describe as “Weakly” Immortal Computing in which more advanced forms of adaptation are employed at the cost of imperfect determinism and fungibility. Some good examples of existing Weakly Immortal computing solutions are: