Stanford EE Computer Systems Colloquium

4:30 PM, Wednesday, Oct 25, 2017
NEC Auditorium, Gates Computer Science Building Room B3
http://ee380.stanford.edu

Computing with High-Dimensional Vectors

Pentti Kanerva
Stanford CSLI & UC Berkeley Redwood Center for Theoretical Neuroscience
About the talk:

Computing with high-dimensional vectors complements traditional computing and occupies the gap between symbolic AI and artificial neural nets. Traditional computing treats bits, numbers, and memory pointers as basic objects on which all else is built. I will consider the possibility of computing with high-dimensional vectors as basic objects, for example with 10,000-bit words, where no individual bit or subset of bits has a meaning of its own--where any piece of information encoded into a vector is distributed over all of its components. Thus a traditional data record subdivided into fields is encoded as a high-dimensional vector with the fields superposed.

Computing power arises from the operations on the basic objects--from what is called their algebra. Operations on bits form Boolean algebra, and the addition and multiplication of numbers form an algebraic structure called a "field." Two operations on high-dimensional vectors correspond to the addition and multiplication of numbers. With permutation of coordinates as the third operation, we end up with a system of computing that in some ways is richer and more powerful than arithmetic, and also different from linear algebra. Computing of this kind was anticipated by von Neumann, described by Plate, and has proven to be possible in high-dimensional spaces of different kinds.
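
To make the three operations concrete, here is a minimal sketch in Python (with numpy), assuming one concrete choice of representation that the abstract leaves open: dense 10,000-bit binary vectors, with componentwise XOR as the multiplication-like operation, bitwise majority as the addition-like operation, and a cyclic shift as the permutation. It only illustrates the style of algebra; the same ideas also work over other kinds of high-dimensional spaces.

import numpy as np

D = 10_000                       # dimensionality of the hypervectors
rng = np.random.default_rng(1)

def random_hv():
    # A random dense binary vector; any two such vectors are nearly
    # orthogonal (they agree on about half of their bits).
    return rng.integers(0, 2, size=D, dtype=np.uint8)

def bind(a, b):
    # "Multiplication": componentwise XOR.  Invertible: bind(bind(a, b), b) == a.
    return a ^ b

def bundle(*vs):
    # "Addition": componentwise majority vote over an odd number of vectors.
    s = np.sum(vs, axis=0)
    return (2 * s > len(vs)).astype(np.uint8)

def permute(a, k=1):
    # Permutation of coordinates, here simply a cyclic shift by k positions.
    return np.roll(a, k)

def similarity(a, b):
    # Fraction of matching bits: about 0.5 for unrelated vectors, 1.0 for equal ones.
    return float(np.mean(a == b))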

The three operations, when applied to orthogonal or nearly orthogonal vectors, allow us to encode, decode, and manipulate sets, sequences, lists, and arbitrary data structures. One reason for high dimensionality is that it provides a nearly endless supply of nearly orthogonal vectors. Making them is simple: a randomly generated vector is approximately orthogonal to any vector encountered so far. The architecture includes a memory which, when cued with a high-dimensional vector, finds its nearest neighbors among the stored vectors. A neural-net associative memory is one example of such a memory.
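
As a hedged illustration of these encodings, the sketch below reuses the helpers above to build the kind of record described earlier (field labels bound to values and superposed into one vector) and then decodes a field with a nearest-neighbor "item memory". The field and value names are made up for the example.

NAME, AGE, CITY = random_hv(), random_hv(), random_hv()        # field labels
ALICE, THIRTY, OSLO = random_hv(), random_hv(), random_hv()    # field values

# "Item memory": stored vectors that can be searched by nearest neighbor.
item_memory = {"name": NAME, "age": AGE, "city": CITY,
               "alice": ALICE, "thirty": THIRTY, "oslo": OSLO}

def cleanup(query):
    # Return the label of the stored vector nearest to the (noisy) query.
    return max(item_memory, key=lambda k: similarity(item_memory[k], query))

# Encode the record {name: alice, age: thirty, city: oslo} as one vector:
# bind each field label to its value, then superpose the bound pairs.
record = bundle(bind(NAME, ALICE), bind(AGE, THIRTY), bind(CITY, OSLO))

# Decode the "city" field: unbinding (XOR again) yields a noisy copy of OSLO,
# which the item memory cleans up to the nearest stored vector.
print(cleanup(bind(record, CITY)))                 # -> 'oslo'
print(similarity(random_hv(), random_hv()))        # ~0.5: nearly orthogonal

With three bound pairs, unbinding a field label recovers a vector that agrees with the correct value on roughly three quarters of its bits and with unrelated stored vectors on only about half, which is an enormous margin at 10,000 bits; that is why the nearest-neighbor lookup is reliable.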

Circuits for computing in high-D are thousands of bits wide, but the components need not be ultra-reliable or fast. Thus the architecture is a good match for emerging nanotechnology, with applications in many areas of machine learning. I will demonstrate high-dimensional computing with a simple algorithm for identifying languages.
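
For flavor, here is a sketch of one common way such a language identifier can be built from the same operations: a fixed random vector per letter, letter trigrams encoded with permutation and binding, and the trigram vectors of a text summed into a profile per language. The talk's actual demonstration may differ in its details; this sketch reuses the helpers defined above, and the training data is toy data invented for the example.

from collections import defaultdict

letter_hv = defaultdict(random_hv)      # one fixed random vector per character

def trigram_hv(a, b, c):
    # Encode the trigram abc: permute letters by position, then bind with XOR.
    return permute(letter_hv[a], 2) ^ permute(letter_hv[b], 1) ^ letter_hv[c]

def profile(text):
    # Sum the text's trigram vectors, then threshold back to a binary profile.
    counts = np.zeros(D, dtype=np.int64)
    n = max(len(text) - 2, 1)
    for i in range(len(text) - 2):
        counts += trigram_hv(text[i], text[i + 1], text[i + 2])
    return (2 * counts > n).astype(np.uint8)

# Toy training data: one profile vector per language.
training = {"english": "the quick brown fox jumps over the lazy dog " * 20,
            "spanish": "el veloz zorro marron salta sobre el perro perezoso " * 20}
profiles = {lang: profile(text) for lang, text in training.items()}

def identify(sentence):
    # Classify by the nearest language profile.
    return max(profiles, key=lambda lang: similarity(profiles[lang], profile(sentence)))

print(identify("a lazy brown dog"))      # -> 'english' on this toy data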

Slides:

Download the slides for this presentation in PDF format.

About the speaker:

[speaker photo] Pentti Kanerva came to Stanford from Finland in 1967 to work in Patrick Suppes' computer laboratory dedicated to computer-assisted instruction. In addition to programming, he designed and built hardware to network city-wide clusters of computer terminals. Kanerva's lifelong interest in understanding brains in computing terms motivates his study of computation with high-dimensional vectors. His PhD thesis in Philosophy was published as the book Sparse Distributed Memory (MIT Press) and led to research at NASA Ames Research Center, the Swedish Institute of Computer Science, the Redwood Neuroscience Institute and, presently, UC Berkeley's Redwood Center for Theoretical Neuroscience. This research has been published in papers on Binary Spatter Code, Random Indexing, and Hyperdimensional Computing.

Contact information:

Pentti Kanerva: pkanerva (at) csli.stanford.edu