Roughly seventy years ago, Alan Turing asked: “Can machines think?” Given the state of artificial intelligence (AI) today, we still cannot answer that question with absolute certainty. However, collective efforts in this field have produced many artificial hardware and software structures that resemble the human brain and perform computation in a similar way.
Among the many human brain-inspired AI architectures, neural networks and, more recently, deep neural networks have produced the most significant results. Equipped with deep learning algorithms, computers are capable of detecting fraud, autonomously driving cars, serving as virtual assistants, managing customer relations, modelling financial investments and recognising what people are saying and how they look.
Deep neural networks are composed of artificial neurons modelled after biological neurons present in our brains. These neural networks are capable of discovering and learning from complex relations present in the training data. Given the large amount of data collected using IoT devices, advanced sensor networks, and mobile devices, deep neural networks are capable of learning almost anything that humans can.
“Deep neural networks are capable of learning almost anything that humans can.”
Nevertheless, current computation technology limits the large-scale application of deep neural networks. Firstly, due to the economics of Moore’s Law, very few companies can fabricate silicon technologies beyond 7 nm. Secondly, current memory technologies are incapable of dealing with very large data loads that grow even faster than Moore’s Law. And finally, increased computation power requirements have driven up cooling energy demands. The overall efficiency of today’s computation technology is too low to sustain large deep neural network workloads.
To solve the problems of current computing technology, research institutions and enterprises around the world are making a huge push towards integrating nanoelectronics into computing hardware in more innovative ways. The common goal is to integrate ways of processing information that go far beyond the Von Neumann architecture.
“Neuromorphic chips have an ideal architecture that can support the large-scale adoption of deep neural networks and further the progress of AI.”
One of the most promising novel computation technology efforts is neuromorphic computing: next-generation computation hardware that architecturally resembles the computing structure of the human brain. Namely, neuromorphic processors are designed with central processing and memory units together, removing the key bottleneck of the Von Neumann architecture: the constant data exchange between these two elements. Designed in this way, neuromorphic chips have an ideal architecture that can support the large-scale adoption of deep neural networks and further the progress of AI.
Neuromorphic computing advantages
The key limiting factor of current computation technologies is the need to continuously move data between the CPU and memory, which is not how our brains work. This limitation constrains both bandwidth and our ability to train neural network models efficiently.
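To see why this data movement matters, consider a rough, illustrative calculation (the layer size and 32-bit weights below are our own example figures, not from any specific chip): on a Von Neumann machine, every weight of a neural network layer must travel from memory to the CPU for every inference pass.

```python
# Illustrative sketch: estimate the memory traffic a Von Neumann processor
# incurs just to read the weights of one dense neural-network layer.
# The layer dimensions and 4-byte (32-bit float) weights are assumptions
# chosen for the example, not figures from the article.

def dense_layer_traffic(inputs: int, outputs: int, bytes_per_weight: int = 4) -> int:
    """Bytes moved from memory to the CPU to read the weight matrix once."""
    return inputs * outputs * bytes_per_weight

# A modest 4096 x 4096 layer with 32-bit weights:
traffic = dense_layer_traffic(4096, 4096)
print(f"{traffic / 1e6:.0f} MB per inference pass")  # prints "67 MB per inference pass"
```

Multiply that by many layers and billions of inferences, and the appeal of collocating memory and compute, as the brain does, becomes obvious.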
In a typical data analysis scenario today, we take human brain-inspired machine learning models and impose them on a processor with a Von Neumann architecture, which is very different from how our brains work. This mismatch poses the question: can we create a computer chip that operates similarly to our brain?
Another key limiting factor of the Von Neumann architecture is energy efficiency. Today’s computers are extremely power-hungry. According to a study published in the journal Nature, if data and communication trends continue to increase at the current rate, by 2040 binary operations will consume over 10²⁷ joules of energy, which exceeds the global energy production of today.
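A quick back-of-envelope check puts that projection in perspective. The global annual energy production figure used below (roughly 6 × 10²⁰ joules) is our own approximation of commonly cited estimates, not a number from the study itself:

```python
# Back-of-envelope comparison of the projected 2040 computing energy demand
# against today's global annual energy production. The production figure is
# an approximate assumption on our part, right only to order of magnitude.

projected_compute_energy = 1e27   # joules: projected demand of binary operations by 2040
global_annual_production = 6e20   # joules per year: approximate global production today

ratio = projected_compute_energy / global_annual_production
print(f"Computing would need roughly {ratio:.0e} times today's annual energy output")
```

Even allowing for generous error bars on both figures, the gap spans about six orders of magnitude, which is why efficiency, not just speed, drives the search for new architectures.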
“By mimicking the workings of the human brain, the technology intends to be just as energy-efficient.”
Neuromorphic computing is an interdisciplinary field that involves material science, physics, chemistry, computer science, electronics, and system design. The concept attempts to resolve the current limitations of the Von Neumann architecture by creating hardware structures that resemble the human brain. Neuromorphic computing technology collocates memory and processing units. By doing so, the latency and bandwidth limitations induced by moving large amounts of data between the two can be eliminated. Additionally, by mimicking the workings of the human brain, the technology intends to be just as energy-efficient.
The neuromorphic approach has the potential to revolutionise computing as a whole, but its most effective application will be in deep neural networks. These networks have a highly parallel model structure that requires specific distributed memory access patterns, and this distributed parallelism is difficult to map efficiently onto Von Neumann architecture-based computing hardware.
Exploring the early-use cases
Hewlett Packard Enterprise is at the forefront of research and development in this technology. Government research hubs are also delving into it, including the European Union with its Human Brain Project.
Last year, the neuromorphic chip market was valued at almost US$2 billion and is expected to increase to US$11.29 billion by 2027. According to Gartner, interest is high because traditional computing technology built on legacy semiconductors will hit a digital wall in 2025. For now, though, neuromorphic chips aren’t being produced at the commercial scale of CPUs and GPUs. One hold-up is that many neuromorphic processors depend on further advances in emerging memory technologies such as ReRAM or MRAM.
“Those insights ensure businesses that work with HPE will get access to the latest and most efficient storage technology solutions.”
So, there’s still a way to go before real-world applications of neuromorphic semiconductor design become commonplace. The big win comes from keeping compute and memory units together: the system doesn’t have to constantly move data around, says John Paul Strachan, who heads the emerging accelerators team in the AI Lab at Hewlett Packard Enterprise. Research into AI for enterprise, including brain-based architectures, has been carried out for several years at Hewlett Packard Labs. Those insights ensure businesses that work with HPE will get access to the latest and most efficient storage technology solutions.
How can HPE work for you?
HPE operates at the forefront of emerging technologies and constantly incorporates advanced tech into its next-generation products and solutions. Are you taking advantage of the solutions this market leader in innovation offers? Contact us to find out how HPE products can help your business gain a competitive advantage and put you on the fast track to achieving your goals.