First Advisor

Christof Teuscher

Date of Publication

Spring 6-3-2019

Document Type

Dissertation

Degree Name

Doctor of Philosophy (Ph.D.) in Electrical and Computer Engineering

Department

Electrical and Computer Engineering

Language

English

Subjects

Memristors -- Research, Natural computation

DOI

10.15760/etd.6877

Physical Description

1 online resource (xix, 179 pages)

Abstract

In this thesis, I propose novel brain-inspired and energy-efficient computing systems. Designing such systems has been a central goal of neuromorphic research for decades. The results of my research show that it is possible to design such systems with emerging nanoscale memcapacitive devices.

Computing technology has advanced greatly over the years on the foundation of the conventional von Neumann architecture. Current architectures and materials, however, will inevitably reach their physical limitations. While conventional computing systems achieve good performance on general tasks, they are often not power-efficient on tasks with large input data, such as natural image recognition and object tracking in streaming video. Moreover, in the von Neumann architecture, all computation takes place in the Central Processing Unit (CPU) and the results are saved in memory. Information is therefore shuffled back and forth between the memory and the CPU, creating a bottleneck due to the limited bandwidth of the data paths. Adding cache memory and using general-purpose Graphics Processing Units (GPUs) do not completely resolve this bottleneck.

Neuromorphic architectures offer an alternative to the conventional architecture by mimicking the functionality of a biological neural network. In a biological neural network, neurons communicate with each other through a large number of dendrites and synapses. Each neuron (a processing unit) locally processes the information that is stored in its input synapses (memory units). Distributing information to neurons and localizing computation at the synapse level alleviate the bottleneck problem and allow for the processing of a large amount of data in parallel. Furthermore, biological neural networks are highly adaptable to complex environments, tolerant of system noise and variations, and capable of processing complex information with extremely low power.

Over the past five decades, researchers have proposed various brain-inspired architectures for neuromorphic tasks. IBM's TrueNorth is considered the state-of-the-art brain-inspired architecture. It has 10^6 CMOS neurons with 256 x 256 programmable synapses per core and consumes about 60 nW/neuron. Even though TrueNorth is power-efficient, its number of neurons and synapses pales in comparison to the human brain, which has about 10^11 neurons, each with, on average, 7,000 synaptic connections to other neurons, and which consumes only about 0.23 nW/neuron.

The memristor brought neuromorphic computing one step closer to the human brain. A memristor is a passive nano-device with memory: its resistance changes with the applied voltage, much as a synapse's strength changes with activity. Memristors have been the prominent option for designing low-power, high-density systems. In fact, Truong and Min reported that an improved memristor-based crossbar performed a neuromorphic task with a 50% reduction in area and 48% savings in power compared to CMOS arrays. However, memristive devices are, by their nature, still resistors, and their power consumption is bounded by their resistance. Here, a memcapacitor offers a promising alternative. My initial work indicated that memcapacitive networks performed complex tasks with performance equivalent to memristive networks, but with much higher energy efficiency.
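
As an illustration of the device behavior described above, the following is a minimal sketch of the widely used linear ion-drift memristor model of Strukov et al.; the parameter values are assumptions, and this is not the specific device model used in the thesis.

```python
import numpy as np

# Minimal linear ion-drift memristor model (after Strukov et al., 2008).
# Parameter values are illustrative assumptions, not the thesis's device model.
R_ON, R_OFF = 100.0, 16e3   # low/high resistance limits (ohms)
D = 10e-9                   # device thickness (m)
MU_V = 1e-14                # dopant mobility (m^2 / (V s))

def simulate(voltages, dt, w0=0.5):
    """Integrate the normalized state w in [0, 1]; return the device current."""
    w, currents = w0, []
    for v in voltages:
        r = R_ON * w + R_OFF * (1.0 - w)   # instantaneous memristance
        i = v / r
        w += MU_V * R_ON / D**2 * i * dt   # state drift driven by current
        w = min(max(w, 0.0), 1.0)          # keep the state within bounds
        currents.append(i)
    return np.array(currents)

t = np.linspace(0.0, 1.0, 1000)
i = simulate(np.sin(2 * np.pi * t), dt=1e-3)  # traces a pinched I-V loop
```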

A memcapacitor is likewise a two-terminal nano-device: its capacitance varies with the applied voltage, again mirroring the function of a synapse. Because a memcapacitor is a storage device, it consumes no static energy, and its switching energy is small due to its small capacitance (nF to pF range). As a result, networks of memcapacitors have the potential to perform complex tasks with much higher power efficiency.
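
A back-of-the-envelope calculation shows why pF-scale capacitance implies tiny switching energy; the pulse amplitude below is an assumed value.

```python
# Switching energy of a capacitive synapse: E = (1/2) * C * V^2.
# Illustrative numbers; real device parameters vary.
C = 1e-12            # 1 pF, the low end of the nF-to-pF range cited above
V = 1.0              # assumed 1 V programming pulse
E = 0.5 * C * V**2   # -> 5e-13 J, i.e. 0.5 pJ per switching event
print(f"switching energy ~ {E:.1e} J")
```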

Several memcapacitive synaptic models have been proposed as artificial synapses. Pershin and Di Ventra illustrated that a memcapacitor with two diodes has the functionality of a synapse. Flak suggested that a memcapacitor behaves as a synapse when connected to three CMOS switches in a Cellular Nanoscale Network (CNN). Li et al. demonstrated that four identical memcapacitors connected in a bridge network likewise reproduce the function of a synapse.
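
To make the bridge idea concrete, the sketch below computes the signed weight of a four-memcapacitor bridge as the difference of two capacitive dividers; the formula and device values are illustrative assumptions rather than the exact circuit of Li et al.

```python
def bridge_weight(c1, c2, c3, c4):
    """Signed synaptic weight of a four-memcapacitor bridge (sketch).

    Two capacitive dividers share the input; the differential output
    V_out = V_in * (C1/(C1+C2) - C3/(C3+C4)) acts as a weight in (-1, 1).
    """
    return c1 / (c1 + c2) - c3 / (c3 + c4)

w_excitatory = bridge_weight(2e-12, 1e-12, 1e-12, 2e-12)  # ~ +0.33
w_inhibitory = bridge_weight(1e-12, 2e-12, 2e-12, 1e-12)  # ~ -0.33
```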

Reservoir Computing (RC) has been used to explain higher-order cognitive functions and the interaction of short-term memory with other cognitive processes. Rigotti et al. observed that a dynamical system with short-term memory is essential in defining the internal brain states of a test agent. Although both traditional Recurrent Neural Networks (RNNs) and RC are dynamical systems, RC has a key advantage over RNNs: learning is simple because only the output layer is trained. RC harnesses the intrinsic computation of a random network of nonlinear devices, such as memcapacitors.
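
The following minimal echo state network sketch illustrates the RC training principle, namely that only the linear readout is learned. A conventional tanh reservoir stands in for a network of mem-devices, and all sizes, scalings, and the toy delay-recall target are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 100, 1000                            # reservoir size, sequence length

# Fixed random input and reservoir weights; only W_out is trained.
W_in = rng.uniform(-0.5, 0.5, (N, 1))
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # spectral radius below 1

u = rng.uniform(0.0, 0.5, (T, 1))           # input stream
y = np.roll(u, 5, axis=0)                   # toy target: input delayed 5 steps

x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in @ u[t])        # reservoir update
    states[t] = x

# Ridge-regression readout: the only trained component of the system.
lam = 1e-6
W_out = np.linalg.solve(states.T @ states + lam * np.eye(N), states.T @ y)
y_hat = states @ W_out                      # readout predictions
```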

Appeltant et al. showed that RC with a simplified reservoir structure is sufficient to perform speech recognition: a few nonlinear units connected in a delayed feedback loop provide enough dynamic response for RC. Fewer units in a reservoir mean fewer connections and inputs, and therefore lower power consumption.
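
Below is a simplified discrete-time caricature of that architecture: a single nonlinear node whose delay line provides "virtual" reservoir nodes. The gains, mask, and nonlinearity are assumptions, not the parameters of Appeltant et al.

```python
import numpy as np

# Simplified sketch of a single nonlinear node with delayed feedback
# (after Appeltant et al., 2011): the Nv taps of the delay line act as
# virtual reservoir nodes. Gains, mask, and nonlinearity are assumptions.
rng = np.random.default_rng(1)
Nv = 50                                # virtual nodes per input sample
mask = rng.choice([-0.1, 0.1], Nv)     # fixed random input mask
eta, gamma = 0.5, 0.05                 # feedback and input gains

def reservoir_states(u):
    delay = np.zeros(Nv)               # state of the delay line
    states = np.empty((len(u), Nv))
    for t, ut in enumerate(u):
        # each virtual node mixes its delayed value with the masked input
        delay = np.tanh(eta * delay + gamma * mask * ut)
        states[t] = delay
    return states

states = reservoir_states(rng.uniform(0.0, 1.0, 200))
# states can then be fed to a trained linear readout, as in the ESN sketch.
```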

As Goudarzi and Teuscher indicated, RC architectures still have inherent challenges that need to be addressed. First, theoretical studies have shown that regular and random reservoirs achieve similar performance on particular tasks. A random reservoir, however, is more appropriate for unstructured networks of nanoscale devices. What is the role of network structure in RC for solving a task (Q1)?

Second, the nonlinear characteristics of nanoscale devices contribute directly to the dynamics of a physical network, which influences the overall performance of an RC system. To what degree can a mixture of nonlinear devices improve the performance of reservoirs (Q2)?

Third, modularity, as with CMOS circuits in digital design, is an essential key to building a complex system from fundamental blocks. Are hierarchical RCs able to solve complex tasks? What network topologies and hierarchies lead to optimal performance? What is the learning complexity of such a system (Q3)?

My research goal is to address the above RC challenges by exploring memcapacitive reservoir architectures. The analysis of monolithic memcapacitive reservoirs addresses questions Q1 and Q2 by showing that the Small-World Power-Law (SWPL) structure is an optimal topology for RC to perform time-series prediction (NARMA-10), temporal recognition (Isolated Spoken Digits), and a spatial task (MNIST) with minimal power consumption. On average, SWPL reservoirs reduce power consumption significantly, by factors of 1.21x, 31x, and 31.2x compared to regular, random, and small-world reservoirs, respectively. Further analysis of SWPL structures shows that high locality α and low randomness β decrease the cost of the system in terms of wiring and power dissipated in nanowires, but do not guarantee optimal reservoir performance. With a genetic algorithm to refine the network structure, SWPL reservoirs with optimized network parameters achieve comparable performance with less power: compared to regular reservoirs, they consume less power by factors of 1.3x, 1.4x, and 1.5x, and compared to the random topology, by factors of 4.8x, 1.6x, and 2.1x, respectively. The simulation results of mixed-device reservoirs (memristive and memcapacitive) provide evidence that combining memristive and memcapacitive devices potentially enhances the nonlinear dynamics of reservoirs on all three tasks: NARMA-10, Isolated Spoken Digits, and MNIST.
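
The sketch below gives one toy construction of an SWPL wiring mask: each node connects preferentially to nearby nodes with probability proportional to d^(-α) (locality), and a fraction β of the links is rewired uniformly at random (randomness). It illustrates the α/β trade-off discussed above; it is not the thesis's exact construction.

```python
import numpy as np

rng = np.random.default_rng(2)

def swpl_adjacency(n=100, alpha=2.0, beta=0.1, k=4):
    """Toy small-world power-law wiring: k links per node, distance-biased."""
    A = np.zeros((n, n), dtype=bool)
    for i in range(n):
        # ring distance from node i to every other node
        d = np.minimum(np.abs(np.arange(n) - i), n - np.abs(np.arange(n) - i))
        p = np.zeros(n)
        p[d > 0] = d[d > 0].astype(float) ** -alpha     # power-law locality
        p /= p.sum()
        targets = rng.choice(n, size=k, replace=False, p=p)
        rewire = rng.random(k) < beta                   # random long-range links
        targets[rewire] = rng.integers(0, n, rewire.sum())
        A[i, targets] = True
    np.fill_diagonal(A, False)                          # drop any self-loops
    return A

A = swpl_adjacency()  # usable as a fixed coupling mask for a reservoir
```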

In addressing the third question (Q3), kernel quality measurements show that hierarchical reservoirs have better dynamic responses than monolithic reservoirs. The improved dynamic response allows hierarchical reservoirs to achieve comparable performance on the Isolated Spoken Digits task with less power, by factors of 1.4x, 8.8x, 9.5x, and 6.3x for the delay-line, delay-line-feedback, simple-cycle, and random structures, respectively. Similarly, on the CIFAR-10 image task, hierarchical reservoirs achieve higher performance with less power, by factors of 5.6x, 4.2x, 4.8x, and 1.9x. These results suggest that hierarchical reservoirs have richer dynamics than monolithic reservoirs for solving sufficiently complex tasks.
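
Kernel quality here can be illustrated with the rank-based measure of Legenstein and Maass: drive the reservoir with many distinct input streams and compute the rank of the resulting state matrix, where higher rank indicates richer separation of inputs. The sketch reuses the conventional tanh reservoir from earlier, with assumed sizes.

```python
import numpy as np

rng = np.random.default_rng(3)
N, m, T = 100, 50, 30                      # reservoir size, streams, length
W_in = rng.uniform(-0.5, 0.5, N)
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # spectral radius below 1

def final_state(u):
    x = np.zeros(N)
    for ut in u:
        x = np.tanh(W @ x + W_in * ut)     # same update as the ESN sketch
    return x

streams = rng.uniform(-1.0, 1.0, (m, T))          # m distinct input streams
S = np.stack([final_state(u) for u in streams])   # m x N state matrix
kernel_quality = np.linalg.matrix_rank(S)         # ideally close to min(m, N)
```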

Although the performance of deep mem-device reservoirs is low compared to state-of-the-art deep Echo State Networks, the initial results demonstrate that deep mem-device reservoirs are able to solve a high-dimensional and complex task such as polyphonic music prediction. Their performance can be further improved with better choices of network parameters and architectures.

My research illustrates the potential of novel memcapacitive systems with SWPL structures that are brain-inspired and energy-efficient. It offers novel memcapacitive systems applicable to low-power applications, such as mobile devices and the Internet of Things (IoT), and provides an initial design step toward incorporating nano memcapacitive devices into future applications of nanotechnology.

Rights

In Copyright. URI: http://rightsstatements.org/vocab/InC/1.0/ This Item is protected by copyright and/or related rights. You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s).

Persistent Identifier

https://archives.pdx.edu/ds/psu/28999
