Researchers at Sandia National Laboratories are working with Hala Point, the world's largest neuromorphic system to date, built from Intel's Loihi 2 processors. Compared with its predecessor, Hala Point achieves a tenfold increase in neuron capacity and a twelvefold increase in performance.
Hala Point integrates processing, memory, and communication channels in a massively parallel architecture, offering a total memory bandwidth of 16 petabytes per second (PB/s), an inter-core communication bandwidth of 3.5 PB/s, and an inter-chip communication bandwidth of 5 terabytes per second (TB/s). The system can process over 380 trillion 8-bit synaptic operations and over 240 trillion neuron operations per second.
(Image: Intel Corporation)
When it comes to efficiency and parallel processing in artificial intelligence, developers inevitably draw inspiration for their network architectures from one of nature's most efficient computers: the human brain. Its characteristics, such as energy efficiency, asynchronous event-based processing, learning, and adaptability, inspire the development of AI systems, above all neural and neuromorphic ones.
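The event-based processing mentioned above can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron, the kind of spiking dynamic that Loihi-class chips implement in silicon. This is an illustrative sketch only, not Intel's actual hardware model or its Lava software API; the function name and parameters are made up for the example:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: output (a spike) is
# produced only when the membrane potential crosses a threshold, so
# computation is event-driven rather than clock-driven.

def lif_simulate(input_current, threshold=1.0, leak=0.9):
    """Simulate one LIF neuron over a sequence of input currents.

    Returns the list of time steps at which the neuron spiked.
    """
    v = 0.0                    # membrane potential
    spikes = []
    for t, i_in in enumerate(input_current):
        v = leak * v + i_in    # integrate input, leak away old charge
        if v >= threshold:     # threshold crossing -> emit a spike event
            spikes.append(t)
            v = 0.0            # reset after spiking
    return spikes

# A constant weak input makes the neuron spike periodically.
print(lif_simulate([0.3] * 20))   # -> [3, 7, 11, 15, 19]
```

Between spikes the neuron produces no output at all, which is exactly what makes downstream processing sparse and event-driven.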
A neuromorphic system named Hala Point, the largest of its kind to date, has recently gone into operation at Sandia National Laboratories. It was built from over 1,000 Intel Loihi 2 processors and is an advancement of Pohoiki Springs, the large-scale research system Intel realized in 2019, which also used neuromorphic chips. Architectural improvements give Hala Point a tenfold increase in neuron capacity and a twelvefold increase in performance over its predecessor.
Why did Intel opt for a neuromorphic network? The answer is relatively simple: AI models require a large amount of computing power. "That's why we developed Hala Point, which combines the efficiency of deep learning with novel, brain-inspired learning and optimization features," says Mike Davies, director of the Neuromorphic Computing Lab at Intel Labs.
What does Hala Point bring to the table?
According to Intel Labs, the neuromorphic system can perform up to 20 quadrillion operations per second (20 petaops) "with an efficiency exceeding 15 trillion 8-bit operations per second per watt (TOPS/W) when executing conventional deep neural networks." In the future, Hala Point could enable continuous real-time learning for AI applications such as scientific and engineering problem solving, logistics, and smart-city infrastructure management.
Hala Point uses 1,152 neuromorphic Loihi 2 processors to boost energy efficiency and performance. The compact package supports up to 1.15 billion neurons and 128 billion synapses, distributed over 140,544 neuromorphic computing cores, while consuming a maximum of 2,600 watts. More than 2,300 embedded x86 processors handle auxiliary computations.
With a total memory bandwidth of 16 petabytes per second and a processing speed of over 240 trillion neuron operations per second, Hala Point is extremely powerful. On Loihi-based systems, AI inference and optimization problems can be solved with up to 100 times less energy and up to 50 times faster than on conventional CPU and GPU architectures.
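The headline figures above can be cross-checked with simple back-of-envelope arithmetic: the quoted peak throughput and efficiency imply a power draw well within the stated 2,600-watt budget, and per-chip numbers follow from dividing the system totals by 1,152 processors. The variable names below are illustrative; the figures are the ones quoted in this article:

```python
# Back-of-envelope sanity checks on the published Hala Point figures.

chips = 1152
neurons = 1.15e9       # total neurons supported
cores = 140_544        # neuromorphic computing cores
power_w = 2600         # maximum system power in watts
peak_ops = 20e15       # 20 petaops
efficiency = 15e12     # 15 TOPS/W (8-bit operations)

# Power implied by peak throughput at the stated efficiency:
implied_power = peak_ops / efficiency
print(f"{implied_power:.0f} W")                    # ~1333 W, within the 2,600 W budget

# Per-chip breakdown:
print(f"{neurons / chips:.2e} neurons per chip")   # ~1 million
print(f"{cores // chips} cores per chip")          # 122
print(f"{power_w / chips:.2f} W per chip")         # ~2.26 W
```

The numbers are internally consistent: roughly one million neurons and a little over two watts per Loihi 2 chip.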
"When applied to bio-inspired spiking neural network models, the system can execute its full capacity of 1.15 billion neurons 20 times faster than a human brain, and up to 200 times faster at lower capacity. Although Hala Point is not intended for neuroscientific modeling, its neuron capacity is roughly equivalent to an owl's brain or the cortex of a Capuchin monkey," explain the designers.
Hala Point in bullet points
1,152 Loihi 2 processors, manufactured on the Intel 4 process node, are housed in a six-rack-unit chassis the size of a microwave oven.
The system supports up to 1.15 billion neurons and 128 billion synapses, distributed over 140,544 neuromorphic computing cores, and consumes a maximum of 2,600 watts of power.
Hala Point offers a total memory bandwidth of 16 petabytes per second (PB/s),
a communication bandwidth of 3.5 PB/s between the cores,
and a communication bandwidth of 5 terabytes per second (TB/s) between the chips.
The system can process over 380 trillion 8-bit synaptic operations and over 240 trillion neuron operations per second.
By exploiting sparse connectivity of up to 10:1 and event-driven activity, early results show that Hala Point can achieve an efficiency of up to 15 TOPS/W for deep neural networks without needing to batch input data.
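The impact of sparsity and event-driven activity on operation count can be sketched in a few lines: with 10:1 sparse connectivity only a tenth of the weights exist, and only neurons that actually spiked contribute any work. The layer size, sparsity, and activity rate below are made-up toy values for illustration, not Hala Point's actual workload parameters:

```python
# Toy comparison: dense, clock-driven updates vs. sparse, event-driven ones.

n = 1000                 # neurons in the layer (illustrative)
dense_ops = n * n        # dense matrix-vector: every weight touched every step

sparsity = 10            # 10:1 connectivity: only 1 in 10 weights exists
active = 50              # neurons that actually spiked this time step (5 %)

# Event-driven sparse update: work is done only for spiking neurons,
# and only over the connections that exist.
event_ops = active * (n // sparsity)

print(dense_ops, event_ops, dense_ops // event_ops)   # 1000000 5000 200
```

Under these assumptions the event-driven scheme does 200 times fewer operations per time step, which is the mechanism behind the batching-free efficiency figure quoted above.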
The future of Hala Point
Researchers at Sandia National Laboratories plan to use Hala Point for research into advanced brain-scale computing. The organization will focus on solving computational science problems in device physics, computer architecture, and computer science.
Date: 08.12.2025
Neuromorphic computing is based on insights from neuroscience and integrates memory and computing power with a high degree of parallelism to minimize data movement. This can enable energy savings in the gigawatt-hour range, since regular retraining on ever-growing datasets is eliminated.
Recent trends in the scaling of deep learning models have highlighted the challenges of AI sustainability and underscored the necessity of innovative approaches in hardware architecture. Results published this month at the International Conference on Acoustics, Speech, and Signal Processing (ICASSP) show that Loihi 2 has substantially improved the efficiency, speed, and adaptability of edge workloads. (sb)