Revolutionizing Computer Processing: Introducing SHMT for Doubled Speeds

Scientists have introduced a novel method dubbed ‘simultaneous and heterogeneous multithreading’ (SHMT) to enhance computer processing speeds. Rather than relying on one processor at a time, the approach runs existing hardware such as graphics processing units (GPUs), hardware accelerators for artificial intelligence (AI) and machine learning (ML), and digital signal processing units concurrently, nearly doubling the processing speed of devices like smartphones, tablets, personal computers, and servers.
Hung-Wei Tseng, an associate professor of electrical and computer engineering at UC Riverside, has proposed a significant shift in computer architecture to achieve this, detailed in his recent paper titled “Simultaneous and Heterogeneous Multithreading.” Tseng noted that modern computing devices commonly include GPUs, AI/ML accelerators, or digital signal processing units as essential components. Because each of these components processes information separately, however, moving data from one unit to the next often creates bottlenecks.
In their study, Tseng and UCR computer science graduate student Kuan-Chieh Hsu introduce SHMT through a framework implemented on an embedded-system platform that simultaneously uses a multi-core ARM processor, an NVIDIA GPU, and a Tensor Processing Unit hardware accelerator. By putting all three units to work on the same computation at once, the system achieved a 1.96 times speedup and a 51% reduction in energy consumption.
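To illustrate the general idea, the sketch below shows, in Python, how an SHMT-style runtime might partition one computation and hand each piece to a different processing unit at the same time instead of offloading the whole job to a single accelerator. This is only a minimal conceptual sketch, not the authors’ framework: the worker functions are hypothetical stand-ins for real device runtimes (for example, CUDA kernels or an edge-TPU library).

```python
# Conceptual sketch of SHMT-style concurrent dispatch (illustrative only).
# Each "worker" stands in for code that would actually run on a CPU core,
# a GPU, or an ML accelerator in a real heterogeneous system.

from concurrent.futures import ThreadPoolExecutor
import numpy as np

def cpu_worker(chunk):
    # Slice of the workload handled by the multi-core CPU.
    return chunk * 2.0

def gpu_worker(chunk):
    # Placeholder for a GPU kernel launch; same math here for illustration.
    return chunk * 2.0

def tpu_worker(chunk):
    # Placeholder for an ML-accelerator call; same math here for illustration.
    return chunk * 2.0

def shmt_style_multiply(data):
    # Partition the input and give each slice to a different "unit" at once,
    # rather than shipping the whole array from one unit to the next.
    chunks = np.array_split(data, 3)
    workers = [cpu_worker, gpu_worker, tpu_worker]
    with ThreadPoolExecutor(max_workers=3) as pool:
        results = list(pool.map(lambda pair: pair[0](pair[1]),
                                zip(workers, chunks)))
    # Reassemble the partial results into one output.
    return np.concatenate(results)

if __name__ == "__main__":
    x = np.arange(12, dtype=np.float64)
    print(shmt_style_multiply(x))  # [ 0.  2.  4. ... 22.]
```

The sketch captures only the partition-and-run-concurrently idea; a real implementation such as the one described in the paper must also schedule work across units that differ in speed and numerical behavior.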
Tseng emphasized that the approach requires no new processors; the necessary components are already present in existing devices. Using those components concurrently could have substantial implications, potentially reducing computer hardware costs and lowering carbon emissions from the energy consumed by data processing centers. It could also ease demand for the scarce freshwater used to cool servers.
However, Tseng’s paper notes that further investigation is needed into system implementation, hardware support, code optimization, and the types of applications that stand to benefit most. The paper was presented at the 56th Annual IEEE/ACM International Symposium on Microarchitecture in Toronto, Canada, and was recognized by Tseng’s peers in the Institute of Electrical and Electronics Engineers (IEEE), who selected it as one of 12 papers for the group’s “Top Picks from the Computer Architecture Conferences” issue, slated for publication this summer.