Description

Hyperdimensional computing offers an approach to artificial intelligence that differs from what has become mainstream. It combines connectionist paradigms with a small set of simple algebraic operations to form a powerful framework for representing objects. In this thesis, we show how these algebraic operations can be used to build parallel algorithms for hyperdimensional language models. We first ask why this is useful from both an engineering and a scientific point of view, and then show how different parallel algorithms can be built to answer each of these questions. One algorithm distributes the data across workers to minimize runtime, while the other distributes the different embedding techniques so that learning proceeds in parallel, in a process inspired by the brain. Both algorithms achieve substantial efficiency gains; however, the one that distributes the data over multiple workers is ultimately the more efficient. We further compare these methods to the popular word2vec models and show that they outperform them on one of the original metrics used to test word embeddings, the TOEFL test. Finally, we describe our vision for future work, in particular algorithms for learning multimodal embeddings in parallel with joint hyperdimensional models of language and vision.
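
To make the "simple algebraic operations" concrete, the following is a minimal illustrative sketch of the two core hyperdimensional operations, binding and bundling, on random bipolar hypervectors. It is not the implementation used in the thesis; all function names and the dimensionality are assumptions chosen for illustration. The associativity of bundling (accumulation) is what makes it natural to sum partial results computed by different workers, which is the intuition behind the data-parallel algorithm described above.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality; values around 10,000 are typical

def random_hv():
    """Random bipolar hypervector with entries in {-1, +1}."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding: elementwise multiplication; the result is dissimilar to both inputs."""
    return a * b

def bundle(*hvs):
    """Bundling: elementwise majority (sign of the sum); the result stays similar to each input."""
    return np.sign(np.sum(hvs, axis=0))

def similarity(a, b):
    """Normalized dot product, i.e. cosine similarity for bipolar vectors."""
    return np.dot(a, b) / D

# Hypothetical example: encode a word observed at context position -1.
word_hv, pos_minus1 = random_hv(), random_hv()
encoded = bind(word_hv, pos_minus1)

# Context encodings accumulated on different workers can simply be summed,
# since bundling is commutative and associative.
other_contexts = [bind(random_hv(), random_hv()) for _ in range(2)]
word_embedding = bundle(encoded, *other_contexts)

print(similarity(word_embedding, encoded))  # well above 0: bundling preserves similarity
print(similarity(word_hv, encoded))         # near 0: binding produces a dissimilar vector
```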
