Computer performance doubled - without additional hardware!

By Henning Wriedt* | Translated by AI | 2 min reading time

Imagine being able to double the computing power of a smartphone, tablet, PC or server by better utilizing the existing hardware in these devices.

Hung-Wei Tseng, Associate Professor of Electrical and Computer Engineering at UC Riverside, describes a paradigm shift in computer architecture in his paper.
(Image: Freely licensed / Pixabay)

In a recently published paper titled "Simultaneous and Heterogeneous Multithreading", Hung-Wei Tseng, associate professor of Electrical and Computer Engineering at UC Riverside, describes a paradigm shift in computer architecture that makes exactly that possible.

Tseng explained that today's computers increasingly rely on graphics processors (GPUs), hardware accelerators for artificial intelligence (AI) and machine learning (ML), or digital signal processing units as essential components. These components process information separately, handing data from one processing unit to the next, which creates a bottleneck.
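
To picture that bottleneck, here is a minimal Python sketch (not taken from the paper): the three stage functions are placeholders for real CPU, GPU and accelerator kernels, and the sleep calls merely stand in for their work. Because each unit waits for the previous one, only one processor is busy at any moment.

```python
# Hypothetical illustration of a conventional pipeline: data is handed from
# one processing unit to the next, so the units never work at the same time.
import time

def cpu_preprocess(data):
    time.sleep(0.1)              # stand-in for CPU work
    return [x * 2 for x in data]

def gpu_compute(data):
    time.sleep(0.1)              # stand-in for GPU work
    return [x + 1 for x in data]

def accelerator_postprocess(data):
    time.sleep(0.1)              # stand-in for AI-accelerator work
    return sum(data)

def serial_pipeline(data):
    # Total time is the sum of all stage times; while one unit runs,
    # the other two sit idle.
    return accelerator_postprocess(gpu_compute(cpu_preprocess(data)))

print(serial_pipeline(range(1000)))
```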

In their paper, Tseng and UCR computer science student Kuan-Chieh Hsu introduce a method they call "simultaneous and heterogeneous multithreading", or SHMT. They describe a proposed SHMT framework, developed on an embedded system platform that simultaneously uses a multi-core ARM processor, an NVIDIA GPU and a Tensor Processing Unit (TPU) hardware accelerator.
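
The following is a purely conceptual Python sketch of that idea, not the authors' SHMT framework: it assumes the workload can be split into independent chunks and uses ordinary threads, with placeholder functions standing in for the CPU, GPU and TPU back ends. The real system additionally handles scheduling, data movement and result quality, none of which is modeled here.

```python
# Conceptual sketch only: partition the input and let several "devices"
# work on it at the same time, instead of passing the whole job through
# them one after another as in the serial pipeline above.
from concurrent.futures import ThreadPoolExecutor

def run_on_cpu(chunk):
    return [x * 2 + 1 for x in chunk]   # stand-in for a CPU kernel

def run_on_gpu(chunk):
    return [x * 2 + 1 for x in chunk]   # stand-in for a GPU kernel

def run_on_tpu(chunk):
    return [x * 2 + 1 for x in chunk]   # stand-in for a TPU kernel

def shmt_style_run(data):
    # Split the work into three parts and dispatch each part to a
    # different processing unit concurrently, then merge the results.
    third = len(data) // 3
    chunks = [data[:third], data[third:2 * third], data[2 * third:]]
    backends = [run_on_cpu, run_on_gpu, run_on_tpu]
    with ThreadPoolExecutor(max_workers=3) as pool:
        parts = list(pool.map(lambda pair: pair[0](pair[1]),
                              zip(backends, chunks)))
    merged = []
    for part in parts:
        merged.extend(part)
    return merged

print(len(shmt_style_run(list(range(9000)))))
```

In contrast to the serial pipeline, all three units are busy at once, which is the effect the reported speedup rests on.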

The system achieved a 1.96-fold acceleration and a 51 percent reduction in energy consumption. "You don't have to add new processors, because you already have them," Tseng said. The implications are enormous.

Using existing processing components simultaneously could lower the cost of computer hardware and, at the same time, cut the carbon dioxide emissions caused by the energy needed to run servers in warehouse-sized data centers. It could also reduce the demand for scarce fresh water used to cool those servers.

However, Tseng points out that further investigations are needed to answer various questions about system implementation, hardware support, code optimization, and the type of applications that will benefit the most.

The paper was presented at the 56th Annual IEEE/ACM International Symposium on Microarchitecture, which took place in Toronto, Canada in 2023. Tseng's colleagues at the Institute of Electrical and Electronics Engineers (IEEE) recognized the work, selecting it as one of 12 papers for the upcoming summer issue of "Top Picks from the Computer Architecture Conferences". (mbf)

* Henning Wriedt is a freelance specialist author.

Link: UC Riverside
