Photonic computing: How superior is computing with light?

By Hendrik Härter | Translated by AI

Computing with light on a photonic Native Processing Unit (NPU) saves energy and increases speed, especially in computationally intensive applications. The NPU presented by Q.ANT is equipped with a standard PCIe interface.

Computing with photons for AI applications: the photonic Native Processing Unit (NPU) impressed in initial benchmark tests. (Image: freely licensed / Pixabay)

With the introduction of Q.ANT's photonic Native Processing Unit (NPU), which connects via the standard PCIe interface, computing technology is set to change in the coming years. For electronics developers, this means working not only more efficiently but also more sustainably. But what exactly makes the photonic architecture so special, and why is it relevant to the electronics industry?


When light plays a significant role

The photonic architecture fundamentally differs from classic CMOS processors: instead of processing data by switching transistors, the NPU uses light as a computing medium. This brings significant advantages.

"Processing with light significantly reduces energy consumption, as there are no losses from charging and discharging transistors. Additionally, control via electric fields instead of currents allows for a drastic reduction in power consumption and an increase in operating frequency," says Dr. Michael Förtsch, CEO of Q.ANT.

A clear example is the Fourier transform: where classical processors need millions of transistor operations, the NPU performs the same computation in a single optical element. Wavelength multiplexing also allows multiple operations to run in parallel, opening up entirely new dimensions for AI inference and machine learning.
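The digital counterpart of that single optical element is the discrete Fourier transform, which even with the FFT algorithm costs on the order of N log N multiply-accumulate steps. A minimal NumPy sketch (illustrative only, not Q.ANT's implementation) shows the operation the optics would perform in one propagation step:

```python
import numpy as np

# A small input vector, e.g. a slice of feature data.
x = np.array([1.0, 2.0, 3.0, 4.0])

# Digitally, the discrete Fourier transform requires O(N log N)
# butterfly operations even with the FFT algorithm ...
X = np.fft.fft(x)

# ... whereas an optical element (e.g. a lens) maps the whole input
# to its Fourier transform in a single pass of light.
print(X)  # [10.+0.j, -2.+2.j, -2.+0.j, -2.-2.j]
```

With wavelength multiplexing, several such transforms could in principle run simultaneously on different light wavelengths in the same element, which is where the claimed parallelism comes from.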

Communication via standard PCIe interface

The block diagram shows everything inside the NPU, from the hardware layer to the first software layer. (Image: Q.ANT)

For developers already working with established frameworks like Keras, TensorFlow, or PyTorch, the question arises: How easy is the integration of the new technology? "The NPU uses the standard PCIe interface, so no fundamental changes to existing systems are necessary," explains Dr. Förtsch. "With our Q.ANT Toolkit, developers can simply recompile their existing source code to utilize photonic computing."

The toolkit provides basic operations for AI inference. Integration into the major AI frameworks is planned, ensuring seamless compatibility with industry standards such as Keras, PyTorch, and TensorFlow.

End customers do not need to reprogram their source code to be compatible with the technology. "The source code simply needs to be recompiled with our compiler extension. That is our philosophy, and we believe that this way we can make photonic computing a part of this chip ecosystem," says Förtsch.

Impressive initial benchmark results

The NPU shows impressive results in benchmark tests: in experiments with the MNIST dataset, the number of parameters was reduced by 43 percent and the number of required operations by 46 percent, without loss of accuracy. Simulations of larger models, such as GPT architectures, also show promising results.

"Our photonic design scales better than classical processors, especially for large problems. In AI inference, the energy required is theoretically one thirtieth of what current technologies need," says Förtsch.

The impact of photonic computing goes far beyond energy savings. In neural networks especially, activation functions, traditionally a complex issue, could be significantly optimized by the photonic architecture. The use of analog functions such as sine and exponential accelerates training and reduces computational effort.

"For the first time, we can train networks that would not be feasible with digital GPUs. This opens up entirely new possibilities, especially in high-performance computing."

Dr. Michael Förtsch, CEO of Q.ANT


Train neural networks more efficiently

Photonic computing opens up entirely new potential. "For the first time, one can train networks that are completely different from anything a GPU can ever do," says Förtsch. In AI networks, the activation function is the most underestimated tool: it determines how input data is processed in an artificial neuron and significantly influences whether, and in what form, the data is passed on to the next layer of the neural network.

"In neural networks, there are input parameters and network depths. Between the layers are the activation functions. In the digital world, it is very complex to implement these functions in a parametrizable way in a digital circuit. In the photonic world, these functions can be a path to significantly more efficient training in the AI context. Through the analog computer (the NPU), we have access to sine, cosine, and exponential functions. The correct use of these new possibilities can lead to a significant reduction in parameters and thus to increased efficiency and acceleration in the training of neural networks. Our current focus is on HPC data centers. Here, in the future, dedicated compute nodes with NPUs (and GPUs) will be available, using the optimal hardware for each application," Förtsch concludes. (heh)
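As a rough illustration of the idea in the quote above, here is a toy NumPy forward pass where the activation function is pluggable, so the usual piecewise-linear ReLU can be swapped for the sine response an analog element would provide. The network shape and data are invented for the example and have nothing to do with Q.ANT's hardware:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network: 4 inputs -> 8 hidden units -> 1 output.
W1 = rng.standard_normal((8, 4))
W2 = rng.standard_normal((1, 8))

def forward(x, activation):
    """Forward pass with a pluggable activation function."""
    h = activation(W1 @ x)   # hidden layer
    return W2 @ h            # linear output layer

x = np.array([0.5, -1.0, 0.25, 2.0])

# Digital networks typically use piecewise-linear activations ...
y_relu = forward(x, lambda z: np.maximum(z, 0.0))

# ... while an analog photonic element yields sine (or exponential)
# responses natively, usable as a periodic activation function.
y_sine = forward(x, np.sin)

print(y_relu, y_sine)
```

Whether a periodic activation actually reduces the parameter count depends on the problem; the point of the sketch is only that the choice of activation is a free design parameter, and analog hardware changes which choices are cheap.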

