APPROACHING THE VON NEUMANN BOTTLENECK: NEUROMORPHIC COMPUTING & BEYOND

“There are one trillion synapses in a cubic centimeter of the brain. If there is such a thing as General AI, it would probably require one trillion synapses.”

Dr. Geoffrey Hinton

Background

Digital computing has become woven into the societal fabric of today’s world. Its transformative influence, made possible by pervasive technological evolution and remarkable commercial success, leaves no doubt about its importance. However, the hardware design that most computers are based on has remained essentially unchanged since the von Neumann architecture (named after the mathematician and computer scientist John von Neumann) was proposed, and it begs reform to keep pace with new technology in a continuously evolving computing age.

In the current model, there is a sharp demarcation between computational units and memory. Simply put, during an operation, data moves from memory to the processor, which processes it before transferring the result back to memory. Modern microprocessors possess multiple computation units and 64-bit registers, but the underlying model has stayed virtually the same, as the sketch below illustrates. The von Neumann architecture has worked and survived all the decades since the conception of computational hardware, so where does the issue lie?
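To make the round trip concrete, here is a minimal Python sketch of one instruction’s life under this model; the memory dictionary and alu_add function are purely illustrative stand-ins for a hardware store and an arithmetic unit, not any real API:

    # A minimal model of the von Neumann pattern: every operation fetches
    # operands from memory into the processor and writes the result back.
    memory = {"a": 3, "b": 4, "result": None}  # the shared store

    def alu_add(x, y):
        """The processor's compute step, isolated from storage."""
        return x + y

    # One instruction's round trip across the memory-processor divide:
    operand_a = memory["a"]                 # memory -> processor (fetch)
    operand_b = memory["b"]                 # memory -> processor (fetch)
    total = alu_add(operand_a, operand_b)   # compute inside the processor
    memory["result"] = total                # processor -> memory (write back)

    print(memory["result"])  # 7 -- three transfers for a single addition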

Approaching the von Neumann Bottleneck

Conventional computers appear to be reaching their limits as supercomputers and artificial intelligence applications demand massive computing capacity. These employ deep learning systems that have all but maxed out the hardware they run on. To put this into perspective, a single computer chip the size of a button contains billions of transistors, and an everyday computer today employs many such chips.

Moore’s law observes that the number of transistors that can be placed on a chip doubles roughly every two years at comparable cost. OpenAI, an AI company based in San Francisco, US, analyzed the computing trends used to train AI systems over the past decades. They concluded that before 2012, compute growth had generally followed Moore’s law, doubling about every two years; since 2012, however, the compute used in the largest training runs has been doubling every 3.4 months, as the comparison below illustrates. Meanwhile, memory performance has struggled to increase proportionately, lagging behind processor performance. And squeezing ever more micro components into ever smaller chips raises costs by record factors.
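A quick back-of-envelope calculation in Python shows how stark the difference between those two doubling periods is; the six-year span below is an assumption chosen to roughly match the period OpenAI analyzed:

    def growth_factor(months, doubling_period_months):
        """How many times compute multiplies over a span of months."""
        return 2 ** (months / doubling_period_months)

    months = 6 * 12  # a six-year span, roughly 2012 onward

    print(f"Doubling every 24 months:  {growth_factor(months, 24):,.0f}x")   # 8x
    print(f"Doubling every 3.4 months: {growth_factor(months, 3.4):,.0f}x")  # ~2.4 million x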

Further, because memory and computing are separated in the von Neumann architecture, a large part of a system’s energy consumption is spent on the delayed transfer of information between memory and compute units. With rising costs, limits on physical hardware, and delays in computing, we appear to be approaching what has been termed the von Neumann bottleneck. This bottleneck limits the future development of revolutionary computational systems and overall performance improvements, and it stands in the way of realizing a general level of artificial intelligence.

Whilst the slowdown of Moore’s law is being felt by experts the world over, scientists are offering insights into the brain’s behavior that provide more inspiration for novel computing solutions than ever before. Bringing the brain’s principles into computing not only improves computational capacity but also widens the applicability of current and future AI. This is where neuromorphic computing steps in.

Neuromorphic Computing: A Panacea?

One of the most favoured approaches to the problem posed by the von Neumann bottleneck is inspired by biological principles. Our brains distribute computation and memory amongst billions of simple processing units called neurons, each linked to thousands of others through connections called synapses. What makes the brain’s massively distributed processing possible is that there is no dedicated memory element or central computational unit. Biological brains are vastly parallelized and require a tiny fraction of the energy that conventional computational technologies, which perform computations in a serial and time-consuming manner, consume; the back-of-envelope comparison below gives a sense of the gap.
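Both figures in this sketch are assumptions drawn from commonly cited estimates rather than from this article: a human brain runs on roughly 20 watts, while a large AI computing cluster can draw on the order of megawatts.

    BRAIN_POWER_W = 20            # widely cited estimate for the human brain
    CLUSTER_POWER_W = 10_000_000  # ~10 MW, an illustrative cluster figure

    # The brain delivers its massive parallelism on a power budget roughly
    # half a million times smaller than such a cluster.
    print(f"~{CLUSTER_POWER_W / BRAIN_POWER_W:,.0f}x the brain's power budget")  # ~500,000x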

In applying such principles to computational technologies, the redundant data traffic can be largely avoided, provided computational operations and data storage are carried out together, locally, in the memory itself. In contrast to conventional computer architecture, this energy-efficient, biologically inspired approach is known as in-memory computing; one common embodiment is sketched below. It is expected to mitigate issues of computational complexity and memory thrashing while offering rapid execution and an innate capability to learn. Capitalising on our increased understanding of the human brain, technologies that circumvent the problems of the von Neumann architecture have been fashioned, resulting in the computational principles termed neuromorphic computing.
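One widely discussed embodiment of in-memory computing is the resistive crossbar: a matrix of weights is stored as device conductances, and applying input voltages performs a matrix-vector multiply inside the array itself (Ohm’s law gives each crosspoint current, Kirchhoff’s law sums them along a column). The toy Python model below illustrates the principle; the values are arbitrary and no real device API is implied:

    # Conductances G[i][j] stored at each row-column crosspoint of the array.
    G = [
        [0.2, 0.5, 0.1],   # row 0
        [0.4, 0.1, 0.3],   # row 1
    ]
    V = [1.0, 0.5]         # input voltages applied to the two rows

    # Each column wire sums its crosspoint currents: I_j = sum_i G[i][j] * V[i].
    # The operands never travel to a separate processor; the array computes in place.
    I = [sum(G[i][j] * V[i] for i in range(len(V))) for j in range(len(G[0]))]

    print(I)  # [0.4, 0.55, 0.25]

Because the weights never leave the array, the memory-to-processor traffic that defines the bottleneck simply does not occur.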

Carver Mead, of the California Institute of Technology (Caltech), was one of the first researchers to highlight the extraordinary energy frugality of biological computing, in a visionary paper written in 1990, and he coined the term “neuromorphic”. Neuromorphic computing is expected to provide a tool for understanding the dynamic processes of development and learning in the brain and to carry this inspiration into cognitive computing. Since operations in the brain are performed asynchronously and in parallel, with memory and processing taking place locally in the neurons and synapses, neuromorphic architectures are expected to imitate the same.

von Neumann v. Neuromorphic Computing

Neuromorphic computational models would allow computers to carry out complex operations faster, more energy-efficiently, and with fewer delays than conventional von Neumann architectures. Neuromorphic chips mimic the human brain with interconnected artificial neurons and synapses. Neuromorphic architecture has come to define next-generation AI: the creation and use of neural networks as analogous electronic circuits, representing innovative non-Turing computational principles. These principles aim to capture and reproduce facets of the continuing dynamics and computational functionality found in biological brains.

Even though making the switch from the existing von Neumann architecture to neuromorphic computation seems doubtful in the foreseeable future, recent advances in artificial intelligence technologies, driven by deep learning algorithms, have spurred unprecedented progress in neuromorphic mechanisms. With this, the focus is shifting from simulating the brain in precise detail to applying its primary organizing principles to practical devices. One recent example is the “spike”: short pulses that carry information between biological neurons have emerged as the prospective forerunners in the race of neuromorphic technologies. Spikes from hundreds of neurons are transmitted, via synapses, as inputs to another neuron, which integrates the information, computes, and fires off a spike to the neurons to which it is connected; a minimal model of this behavior is sketched below.
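The standard textbook abstraction of this behavior is the leaky integrate-and-fire (LIF) neuron. The minimal Python sketch below implements one; the parameter values are illustrative and not tied to any particular neuromorphic chip:

    def lif_neuron(input_currents, threshold=1.0, leak=0.9):
        """Integrate weighted input over time; fire a spike when the
        membrane potential crosses the threshold, then reset."""
        potential = 0.0
        spike_times = []
        for t, current in enumerate(input_currents):
            potential = leak * potential + current  # leaky integration
            if potential >= threshold:
                spike_times.append(t)  # emit a spike downstream
                potential = 0.0        # reset after firing
        return spike_times

    # A burst of inputs pushes the potential over the threshold twice.
    print(lif_neuron([0.3, 0.4, 0.5, 0.0, 0.2, 0.9, 0.1]))  # [2, 5]

Real neuromorphic chips run thousands to millions of such units in parallel hardware, but the integrate, threshold, fire, and reset loop is the common core.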

Conclusion

The term artificial general intelligence (AGI) refers to AI that demonstrates intelligence comparable to that of humans. Though machines have not yet reached this level of intelligence, neuromorphic computing offers promising and novel opportunities for turning it into reality. The growing trend of computational heterogeneity and a steady shift towards a data-centric approach call for more specialized non-von Neumann platforms, whose impact is expected to be colossal. This can already be observed across an array of applications: speech and image recognition, autonomous vehicles and robotics, medical devices, the Internet of Things (IoT), and even artificial body parts.

About the Author
Pragya Sharma is a final-year law student at the Institute of Law, Nirma University. She specializes in Intellectual Property and has an interest in Telecom, Media and Technology Law, with a focus on Data Privacy Laws.
