
Description

Low-power machine learning on hardware is increasingly important for applications ranging from wearables such as smartwatches to devices on the edge. Hyperdimensional Computing (HDC) is one such algorithm that has consistently demonstrated high accuracy with minimal power consumption across diverse classification tasks, such as gesture recognition [20] and speech recognition [12]. HDC processors have been implemented in the past; however, their energy efficiency has largely been limited by costly hypervector memory storage, which grows linearly with the number of input features or sensors. In this work, we propose a novel method of combining HDC sensor fusion with cellular automaton (CA) rule 90 and vector folding, reducing the memory requirement of the processor to near zero and reaching an energy efficiency of 39.1 nJ/prediction: a 4.9x energy-efficiency improvement, 9.5x per channel, over the state-of-the-art HDC processor. The processor is also compared with an analogous SVM implementation, demonstrating a 9.5x energy-efficiency improvement over SVM when scaling to 214 channels. The energy efficiency and scalability of HDC are a testament to its applicability to a broad range of low-power machine learning tasks today. As such, we also explore HDC on other classification tasks, such as TinyML Keyword Spotting, as part of an ongoing effort to pave the way for HDC to become the paradigm of choice for high-accuracy, low-power, real-time classification.
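The key idea behind using CA rule 90 is that item hypervectors need not be stored at all: they can be regenerated on the fly from a single seed, since rule 90 updates each cell to the XOR of its two neighbors and produces a pseudo-random binary sequence at each step. A minimal software sketch of this regeneration scheme is shown below; it assumes binary hypervectors and a circular boundary, and the function names are illustrative rather than taken from the processor described above.

```python
import numpy as np

def rule90_step(state: np.ndarray) -> np.ndarray:
    """One step of CA rule 90 on a circular lattice:
    each cell becomes the XOR of its left and right neighbors."""
    return np.roll(state, 1) ^ np.roll(state, -1)

def generate_hypervectors(seed: np.ndarray, n: int) -> list:
    """Regenerate n pseudo-random binary hypervectors from one stored seed,
    so no item memory proportional to the channel count is needed."""
    hvs = []
    state = seed.copy()
    for _ in range(n):
        state = rule90_step(state)
        hvs.append(state.copy())
    return hvs

# Illustrative use: one 1024-bit seed stands in for 8 channel hypervectors.
rng = np.random.default_rng(0)
seed = rng.integers(0, 2, size=1024, dtype=np.uint8)
channel_hvs = generate_hypervectors(seed, 8)
```

In hardware, each `rule90_step` is a row of XOR gates over the seed register, which is why the hypervector memory shrinks to (near) a single seed regardless of how many channels are fused.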
