Technical Report • Neuromorphic Computing • 2026

Neuromorphic Computing

Brain-Inspired AI Hardware for the Post-GPU Era

As AI energy consumption reaches unsustainable levels—training GPT-4 reportedly consumed electricity equivalent to powering 150 households for a year—a radically different computing paradigm is emerging. Neuromorphic chips, modeled after the brain's neural architecture, promise orders-of-magnitude improvements in energy efficiency and latency. From Intel's 1.15-billion-neuron Hala Point to IBM's NorthPole achieving 46× faster inference than GPUs, silicon brains are becoming reality.

Key figures: 1.15B neurons (Intel Hala Point) · 100× energy savings vs GPUs (Loihi 2) · ~20 W (human brain power) · $20B market by 2030 (Grand View Research)
Brain-Inspired Silicon Architecture
From Neurons to Transistors
Brain vs. silicon at a glance: 86B neurons in the human brain, ~20 W of power, ~10¹⁶ operations per second.

Power consumption comparison: A100 GPU ~400 W · CPU ~100 W · neuromorphic chips 1-10 W · human brain ~20 W.

Technology evolution: TrueNorth (2014) → Loihi (2018) → Hala Point (2024) → commercial deployment (projected 2030).

Spiking neural network architecture (input → hidden → output): 100-1000× more efficient, 10-100× lower latency, µW-mW power range.

Spiking Neural Networks

Event-driven computing where neurons fire discrete spikes, enabling ultra-low power operation

Hardware Platforms

Intel Loihi 2, IBM NorthPole, BrainChip Akida leading commercial development

Memristors

Resistive memory devices that emulate synaptic plasticity and enable on-chip learning

Edge AI Revolution

Autonomous vehicles, robotics, IoT sensors with real-time, low-power intelligence

01

The von Neumann Bottleneck

Modern AI's insatiable appetite for energy has created an existential crisis. Training large language models consumes megawatt-hours of electricity, while the human brain—capable of far more sophisticated cognition—operates on roughly 20 watts. Neuromorphic computing aims to bridge this efficiency gap by fundamentally reimagining how silicon processes information.

Since John von Neumann's 1945 proposal for stored-program computers, virtually all computing architectures have followed the same fundamental design: a central processing unit separated from memory, with data shuttling back and forth between them. This separation creates what computer architects call the "von Neumann bottleneck"—the constant data movement that consumes the vast majority of energy in modern processors. For traditional computing tasks, this architecture has served remarkably well, enabling the exponential performance improvements described by Moore's Law.

However, artificial intelligence workloads have exposed the fundamental mismatch between von Neumann architectures and the computational patterns required for neural networks. Deep learning involves massive matrix multiplications with billions of parameters, requiring enormous data movement between processors and memory. A typical GPU spends more energy moving data than performing actual computations, resulting in power consumption measured in hundreds of watts.

The Energy Crisis

The scale of AI's energy problem has become impossible to ignore. Training GPT-4 reportedly consumed electricity equivalent to powering 50-150 households for an entire year. AI workloads currently comprise approximately 5-15% of data-center electricity consumption. By 2030, AI is projected to represent 35-50% of data-center power demand, potentially reaching 1-2% of global electricity as data-center consumption accelerates. These trends are driving urgent research into alternative computing paradigms.

The human brain, by contrast, performs approximately 10¹⁶ operations per second while consuming only about 20 watts—a feat no artificial system has come close to matching. This extraordinary efficiency emerges from the brain's radically different architecture: computation and memory are co-located in the same substrate, information is processed in parallel across billions of neurons, and activity is sparse and event-driven rather than continuous.
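A quick back-of-the-envelope calculation makes the gap concrete. The sketch below uses the figures quoted in this report (10¹⁶ ops/s at 20 W for the brain, roughly 1 pJ per GPU operation); these are order-of-magnitude estimates, not measurements:

```python
def ops_per_joule(ops_per_second, watts):
    """Efficiency = throughput / power (operations per joule)."""
    return ops_per_second / watts

brain = ops_per_joule(1e16, 20)   # ~5e14 ops per joule
gpu = 1 / 1e-12                   # ~1e12 ops per joule, from ~1 pJ/op
print(f"brain vs GPU efficiency ratio: {brain / gpu:.0f}x")  # -> 500x
```

Even this rough estimate puts the brain several hundredfold ahead of a GPU per joule, which is the gap neuromorphic hardware aims to close.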

A Brain-Inspired Solution

Neuromorphic computing represents a fundamental departure from von Neumann principles. Rather than separating memory and processing, neuromorphic chips distribute both throughout the substrate, eliminating the data movement bottleneck. Instead of synchronous clock-driven computation, neuromorphic systems operate asynchronously, with artificial neurons communicating through discrete events called "spikes" only when relevant information needs to be transmitted.

The concept dates back to Carver Mead's pioneering work at Caltech in the late 1980s, but recent advances in semiconductor technology have finally enabled practical implementations. Today's neuromorphic processors can simulate millions of neurons with billions of synapses, achieving orders-of-magnitude improvements in energy efficiency for appropriate workloads.

Energy per Operation
GPU ~10⁻¹² J · CPU ~10⁻¹¹ J · neuromorphic ~10⁻¹⁵ J · brain synapse ~10⁻¹⁵ J
Von Neumann Bottleneck
E_total = E_compute + E_memory
In conventional systems: E_memory ≫ E_compute
Data movement consumes 100-1000× more energy than the computation itself.

The Fundamental Shift

Traditional computing: data moves to processors. Neuromorphic computing: processing happens where data lives. By eliminating the memory-processor bottleneck and adopting event-driven, sparse computation, neuromorphic systems can achieve 100-1000× energy efficiency improvements for AI inference tasks—potentially enabling always-on intelligence in battery-powered devices.

02

Spiking Neural Networks

The algorithmic foundation of neuromorphic computing lies in spiking neural networks (SNNs)—the "third generation" of neural networks that process information through discrete, temporally precise events rather than continuous values. This biological fidelity enables extraordinary efficiency but requires rethinking how we train and deploy AI systems.

Unlike conventional artificial neural networks where neurons output continuous activation values, spiking neural networks communicate through binary "spikes"—brief voltage pulses that occur when a neuron's internal state crosses a threshold. This event-driven paradigm means computation only occurs when relevant information needs to be processed, dramatically reducing energy consumption for sparse, temporal data streams.

The most common neuron model in SNNs is the Leaky Integrate-and-Fire (LIF) neuron. Input spikes are integrated over time, with the membrane potential gradually decaying ("leaking") in the absence of input. When the accumulated potential exceeds a threshold, the neuron fires a spike and resets. This simple model captures essential neural dynamics while remaining computationally tractable.
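A minimal Euler-discretized LIF neuron can be sketched in a few lines of Python (parameter values are illustrative, not tied to any particular chip):

```python
import numpy as np

def lif_simulate(input_current, dt=1.0, tau_m=10.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0, r=1.0):
    """Euler-discretized Leaky Integrate-and-Fire:
    tau_m * dV/dt = -(V - V_rest) + R * I(t).
    Returns the membrane trace and the spike times (step indices)."""
    v = v_rest
    trace, spikes = [], []
    for t, i_t in enumerate(input_current):
        v += (dt / tau_m) * (-(v - v_rest) + r * i_t)  # integrate + leak
        if v >= v_thresh:          # threshold crossing: emit a spike
            spikes.append(t)
            v = v_reset            # reset membrane potential
        trace.append(v)
    return np.array(trace), spikes

# A constant supra-threshold current produces regular, periodic spiking;
# with zero input the potential simply leaks back toward rest.
trace, spikes = lif_simulate(np.full(100, 1.5))
print(f"{len(spikes)} spikes in 100 steps")
```

Because the neuron only emits discrete events, downstream computation is triggered only at those spike times—the source of the event-driven energy savings described above.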

Information Encoding

SNNs can encode information in multiple ways. Rate coding represents values through spike frequency—more spikes per unit time indicate higher activation. Temporal coding uses precise spike timing to convey information, enabling higher bandwidth in single spikes. Rank-order coding encodes importance through the sequence of spike arrivals. Each scheme offers different tradeoffs between bandwidth, precision, and biological plausibility.
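Rate coding, the simplest of these schemes, can be sketched as a Bernoulli spike train (a toy illustration; hardware encoders are considerably more refined):

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def rate_encode(value, n_steps=1000):
    """Rate coding: a value in [0, 1] becomes a Bernoulli spike train
    whose per-step firing probability equals the value."""
    return (rng.random(n_steps) < value).astype(int)

def rate_decode(spike_train):
    """Decode by measuring the firing rate (spikes per time step)."""
    return spike_train.mean()

train = rate_encode(0.7)
print(f"decoded value ~ {rate_decode(train):.2f}")  # close to 0.7
```

The tradeoff is visible here: precision improves with longer observation windows, whereas temporal or rank-order codes can convey the same value in far fewer spikes.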

For processing temporal data—video streams, audio signals, sensor readings—SNNs offer natural advantages. Rather than accumulating data into fixed-size batches as GPUs require, SNNs can process individual events as they arrive, enabling true real-time operation with minimal latency. This is particularly valuable for robotics, autonomous vehicles, and other applications requiring rapid response.

Training Challenges

Training SNNs presents unique challenges. The spike function is non-differentiable—it outputs either 0 or 1 with an infinitely steep transition—making standard backpropagation inapplicable. Two main approaches have emerged to address this. ANN-to-SNN conversion trains a conventional network first, then converts it to spiking form. Surrogate gradient methods replace the spike function's derivative with a smooth approximation during training.
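A surrogate gradient can be sketched as a pair of functions: the forward pass keeps the hard threshold, while the backward pass substitutes a smooth "fast sigmoid" derivative (the steepness parameter beta is an illustrative choice):

```python
import numpy as np

def spike_forward(v, threshold=1.0):
    """Forward pass: hard Heaviside threshold (non-differentiable)."""
    return (v >= threshold).astype(float)

def spike_surrogate_grad(v, threshold=1.0, beta=5.0):
    """Backward pass: the Heaviside derivative (a Dirac delta) is
    replaced by a smooth 'fast sigmoid' surrogate so that gradients
    can flow through the spike during training."""
    return 1.0 / (1.0 + beta * np.abs(v - threshold)) ** 2

v = np.array([0.2, 0.95, 1.0, 1.4])
print(spike_forward(v))          # [0. 0. 1. 1.]
print(spike_surrogate_grad(v))   # peaks at the threshold
```

The surrogate is used only during backpropagation; at inference time the network still emits hard binary spikes, so the energy advantages are preserved.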

Biologically plausible learning rules like Spike-Timing-Dependent Plasticity (STDP) offer an alternative path. In STDP, synaptic strength increases when a presynaptic spike precedes a postsynaptic spike and decreases for the reverse ordering. This local, unsupervised learning rule enables on-chip adaptation without requiring gradient computation, though achieving competitive accuracy on complex tasks remains challenging.
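The pair-based STDP window can be written as two exponentials; the amplitude and time-constant values below are illustrative, not drawn from any specific chip:

```python
import math

def stdp_dw(delta_t, a_plus=0.05, a_minus=0.055, tau=20.0):
    """Pair-based STDP weight change for delta_t = t_post - t_pre (ms).
    Pre-before-post (delta_t > 0) strengthens the synapse (LTP);
    post-before-pre weakens it (LTD). Parameters are illustrative."""
    if delta_t > 0:
        return a_plus * math.exp(-delta_t / tau)
    return -a_minus * math.exp(delta_t / tau)

print(f"dw(+10 ms) = {stdp_dw(+10):+.4f}")  # positive: potentiation
print(f"dw(-10 ms) = {stdp_dw(-10):+.4f}")  # negative: depression
```

Note that the rule depends only on the relative timing of two spikes at one synapse—no global gradient signal is needed, which is what makes on-chip implementation practical.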

τ_m dV/dt = -(V - V_rest) + R·I(t)
Leaky Integrate-and-Fire dynamics: membrane potential V evolves based on input current I and leakage
SNN Advantages

Energy efficiency: Compute only when spikes occur

Temporal processing: Native handling of time-series data

Low latency: Event-by-event processing, no batching

On-chip learning: STDP enables local adaptation

Current Limitations

Training difficulty: Non-differentiable spike function

Accuracy gap: 1-2% behind ANNs on benchmarks

Software ecosystem: Limited tools vs TensorFlow/PyTorch

LLM support: No neuromorphic transformer yet

03

The Hardware Landscape

From research prototypes to commercial products, neuromorphic hardware has matured rapidly. Intel's massive Hala Point system demonstrates scaling potential, IBM's NorthPole achieves remarkable efficiency gains, and BrainChip's Akida brings neuromorphic processing to edge devices. Each platform represents a distinct approach to implementing brain-inspired computation in silicon.

Intel Labs

Loihi 2 / Hala Point

Neurons: 1M per chip
Synapses: 120M per chip
Process: Intel 4 (7nm)
Cores: 128 neuromorphic
System: 1,152 chips
Total: 1.15B neurons

True SNN architecture with programmable neuron dynamics, STDP learning, asynchronous operation. 100× energy savings, 50× faster than CPU/GPU for sparse workloads.

IBM Research

NorthPole

Cores: 256 digital
Transistors: 22 billion
Process: 12nm
Memory: 224MB on-chip
Precision: 2/4/8-bit
Peak: 800 TOPS (2-bit)

Compute-in-memory architecture eliminating von Neumann bottleneck. 25× more efficient than GPUs, 46× faster LLM inference. No off-chip memory access.

BrainChip

Akida 2

Type: Event-based NPU
Precision: 8-bit
Power: milliwatts
Learning: On-chip
Networks: CNN, ViT, TENNs
Status: Commercial

First commercial neuromorphic processor. Sparse event-driven operation, one-shot learning, edge AI focus. Supports Vision Transformers and temporal models.

SynSense / Zhejiang-Alibaba

Speck / Darwin3

Speck: Event camera + SNN
Darwin3: 4,096 cores
Power: <1mW typical
Latency: <1ms
Focus: Vision, audio
Market: China domestic

Ultra-low-power edge inference. Speck integrates DVS sensor with SNN processor. Darwin3 competitive with Intel/IBM on neural scale.

Intel Hala Point: World's Largest

Deployed at Sandia National Laboratories in 2024, Hala Point represents the most ambitious neuromorphic system yet constructed. Packing 1,152 Loihi 2 processors into a microwave-oven-sized chassis, it achieves 1.15 billion neurons and 128 billion synapses across 140,544 neuromorphic cores. The system can perform 20 petaops while achieving efficiency up to 15 TOPS/W—without requiring the batch processing that introduces latency in GPU systems.

Loihi 2's programmable neuron cores support arbitrary spiking dynamics defined in microcode, enabling researchers to implement diverse neuron models beyond standard LIF. The chip's three asynchronous networks-on-chip enable efficient communication: one for spike routing, one for weight distribution, and one for program loading. This flexibility makes Hala Point a research platform for exploring novel algorithms.

IBM NorthPole: Breaking Boundaries

NorthPole takes a different approach, optimizing for mainstream deep learning inference rather than biological spiking models. By intertwining compute with memory on-chip—256 cores each with co-located SRAM—NorthPole eliminates off-chip memory access entirely. The result is remarkable: in late 2024, IBM demonstrated a 3-billion-parameter LLM running at under 1ms per token, 46.9× faster than comparable GPUs while achieving 72.7× better energy efficiency.

While NorthPole cannot yet run GPT-4 scale models due to memory constraints, its architecture points toward a future where specialized inference chips dramatically reduce AI's energy footprint. IBM fabricated NorthPole on a 12nm process; moving to advanced nodes could yield further substantial gains.

Performance Benchmarks
Inference speed per watt on ResNet-50: Loihi 2 vs NorthPole vs A100 GPU (higher is better).
Key Specifications
1.15B
Neurons (Hala)
256
Cores (NorthPole)
46.9×
Faster LLM
72.7×
Energy Savings
04

Memristors & Artificial Synapses

Beyond digital neuromorphic chips, emerging memory devices called memristors promise even more brain-like computation. These resistive elements "remember" past activity through physical changes in their structure—much like biological synapses strengthen or weaken based on experience. Recent breakthroughs at USC have created artificial neurons that physically emulate electrochemical neural dynamics.

The memristor—a portmanteau of "memory" and "resistor"—was theorized by Leon Chua in 1971 and first physically demonstrated by HP Labs in 2008. Unlike conventional resistors with fixed resistance, memristors change their resistance based on the history of current flow through them. This memory property makes them natural candidates for implementing synaptic weights that persist without power and can be modified through learning.

Memristors typically consist of a thin oxide layer sandwiched between two electrodes. Applying voltage drives ions (often oxygen vacancies or metal ions) through the oxide, creating or destroying conductive filaments that determine resistance. The device retains its state when power is removed, enabling non-volatile storage of synaptic weights. Crucially, the resistance change is gradual and analog, allowing memristors to store multiple bits per device.
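The behaviour described above is often summarized by the linear ion-drift model introduced alongside HP's 2008 demonstration; the sketch below uses the model's commonly quoted parameter values and ignores real-device nonlinearities:

```python
import numpy as np

def memristance(q, d=10e-9, mu_v=1e-14, r_on=100.0, r_off=16e3):
    """Linear ion-drift memristor model: the net charge q that has
    flowed through the device moves the doped-region boundary w, and
    resistance interpolates between R_on (fully doped) and R_off
    (undoped). The state w persists without power (non-volatile)."""
    w = np.clip(mu_v * r_on / d * q, 0.0, d)   # boundary position in [0, d]
    return r_on * (w / d) + r_off * (1.0 - w / d)

# Resistance decreases gradually (analog, multi-level) as forward
# charge accumulates, rather than switching between two binary states.
for q in (0.0, 5e-5, 1e-4):
    print(f"q = {q:.0e} C -> R = {memristance(q):7.0f} ohms")
```

The gradual, charge-dependent resistance is exactly what lets one device store an analog synaptic weight with multiple distinguishable levels.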

The USC Breakthrough

In late 2025, researchers at USC's Center of Excellence on Neuromorphic Computing unveiled artificial neurons that physically replicate biological electrochemical dynamics—not merely simulate them digitally. Led by Professor Joshua Yang, the team developed "diffusive memristors" using silver ions diffusing through an oxide layer, mimicking how calcium ions trigger biological neural activity.

Each artificial neuron fits within the footprint of a single transistor, compared to tens or hundreds of components in conventional designs. The devices demonstrate refractory periods—the brief interval after firing when a neuron cannot fire again—matching biological behavior. This suggests potential for chips that could reduce both size and energy consumption by orders of magnitude while enabling true hardware learning.

Synaptic Plasticity in Silicon

Biological learning occurs through synaptic plasticity—the strengthening and weakening of connections between neurons. Memristors naturally implement several key plasticity mechanisms. Long-Term Potentiation (LTP) and Long-Term Depression (LTD) are mimicked through gradual conductance increases and decreases. Spike-Timing-Dependent Plasticity (STDP) emerges from the temporal dynamics of ion movement.

Recent work has demonstrated memristors implementing six distinct synaptic functions in a single device, enabling bio-inspired deep neural networks capable of complex tasks like playing Atari games through reinforcement learning. The key advantage: learning happens directly in hardware through physical processes, eliminating the need for energy-intensive gradient computation in separate training accelerators.

10¹⁵
Synapses in Brain

Memristors enable similar density through 3D crossbar arrays

fJ
Energy per Operation

Femtojoule switching approaches biological efficiency

6+
Synaptic Functions

Single memristor mimics multiple plasticity mechanisms

Crossbar Arrays: Analog Matrix Multiply

Memristors arranged in crossbar arrays can perform matrix-vector multiplication—the core operation of neural networks—in a single step using Ohm's law and Kirchhoff's current law. Input voltages applied to rows produce output currents at columns proportional to the stored conductances, achieving O(1) time complexity versus O(n²) for digital implementations. This "compute-in-memory" approach could enable AI inference orders of magnitude faster and more efficient than digital approaches.
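As an idealized sketch (ignoring wire resistance, sneak-path currents, and device variability, which dominate engineering effort in real arrays), the read-out is just a matrix-vector product:

```python
import numpy as np

# A 3x2 crossbar: 3 input rows, 2 output columns. Each crosspoint
# stores a conductance G[i, j] (the synaptic weight, in siemens).
G = np.array([[1.0, 0.5],
              [0.2, 0.8],
              [0.6, 0.1]]) * 1e-6    # microsiemens-scale conductances

V = np.array([0.3, 0.5, 0.2])        # input voltages applied to the rows

# Ohm's law gives each crosspoint current V_i * G[i, j]; Kirchhoff's
# current law sums them down every column. All columns are read out
# simultaneously, so the whole product takes one analog step.
I = V @ G
print(I)   # column output currents, in amperes
```

In digital hardware the same product costs one multiply-accumulate per crosspoint; in the crossbar the physics performs all of them in parallel, which is the basis of the O(1)-in-time claim above.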

05

Edge AI & Real-World Applications

Neuromorphic computing's true value emerges at the edge—in battery-powered devices, autonomous systems, and always-on sensors where energy efficiency and low latency matter most. From warehouse robots making split-second decisions to prosthetic limbs that feel, neuromorphic chips are enabling applications impossible with conventional AI hardware.

The mismatch between AI's computational demands and edge device constraints has created an enormous opportunity for neuromorphic solutions. Smartphones, wearables, drones, and IoT sensors must operate on limited battery power while delivering real-time intelligence. Cloud-based AI introduces unacceptable latency for applications like autonomous navigation, while continuous data upload raises privacy concerns and network costs.

Neuromorphic chips address these constraints directly. By processing only relevant events and eliminating continuous clock-driven computation, they can operate on milliwatts—enabling years of battery life rather than hours. Intel's Loihi has demonstrated keyword spotting with 200× lower energy than embedded GPUs, while achieving 10× lower latency. For always-on voice assistants or continuous health monitoring, these gains translate to practical deployability.

Autonomous Systems

Autonomous vehicles and robots require processing multiple sensor streams—cameras, lidar, radar, IMUs—with ultra-low latency to navigate safely. Neuromorphic systems excel at sensor fusion, processing asynchronous data streams as events arrive rather than waiting for synchronized frames. Event cameras (dynamic vision sensors) paired with neuromorphic processors can track motion with microsecond precision while consuming minimal power.

Mercedes-Benz recently spun off its Silicon Valley team into Athos Silicon specifically to develop next-generation automotive neuromorphic chips. QuEra and other companies are exploring neuromorphic processing for drone navigation and industrial robotics, where real-time response to environmental changes is critical. The combination of low latency, energy efficiency, and robustness to noisy sensor data makes neuromorphic systems ideal for these applications.

Healthcare & Prosthetics

Wearable medical devices require continuous monitoring without frequent recharging. Neuromorphic processors can analyze ECG signals for arrhythmia detection, monitor vital signs for early anomaly warning, and process neural signals for brain-computer interfaces—all while operating on harvested energy or tiny batteries. BrainChip's Akida is being evaluated for always-on health monitoring applications.

Adaptive prosthetics represent a compelling frontier. Neuromorphic systems can process neural signals from residual limbs in real-time, enabling more intuitive control of artificial hands and arms. The low latency and power consumption make long-term implanted devices feasible, while on-chip learning could allow prosthetics to adapt to individual users over time.

Edge AI Power Budget
Neuromorphic enables new form factors: smart-dust sensors (µW), wearables (mW), drones and robots (100 mW-1 W).
Latency Comparison
Response time: neuromorphic <1 ms · GPU 5-10 ms · cloud 50-200 ms
Application Domain    | Key Requirements                  | Neuromorphic Advantage                    | Status
Autonomous Vehicles   | Low latency, sensor fusion        | Event-driven processing, <1 ms response   | Active R&D
Industrial Robotics   | Real-time adaptation, safety      | Continuous learning, robust to noise      | Pilot deployments
Wearable Health       | Multi-year battery, always-on     | µW power, event-triggered analysis        | Commercial
Smart Home / IoT      | Energy harvesting, privacy        | On-device processing, no cloud dependency | Commercial
Cybersecurity         | Anomaly detection, real-time      | Pattern recognition in noisy data         | Active R&D
Satellite / Aerospace | Radiation tolerance, power limits | Low power, fault tolerance                | Early research
06

Market Trajectory & Future Outlook

The neuromorphic computing market stands at an inflection point. Grand View Research projects growth to $20.27 billion by 2030 at 19.9% CAGR, driven by AI's energy crisis and edge computing demands. Hybrid architectures combining neuromorphic accelerators with conventional processors are emerging as the near-term path to deployment, while fundamental advances in materials and algorithms promise transformative long-term impact.

Investment in neuromorphic computing has accelerated dramatically. BrainChip raised $25 million in late 2025 to commercialize its Akida 2 platform, while China's Made in China 2025 initiative has allocated $10 billion for AI chip research, including significant neuromorphic efforts through institutions like Zhejiang University and companies like SynSense. Intel, IBM, and Samsung continue significant R&D investment despite the technology's pre-commercial status for many applications.

The competitive landscape spans established semiconductor giants and innovative startups. Intel's Neuromorphic Research Community includes over 200 academic and industry partners exploring applications from telecommunications (Ericsson) to autonomous systems. BrainChip has partnered with Edge Impulse to democratize neuromorphic development, while SynSense focuses on ultra-low-power vision applications. Each player is carving distinct market positions.

Hybrid Architectures

The near-term future likely involves hybrid systems where neuromorphic processors serve as specialized accelerators alongside conventional CPUs and GPUs. For workloads with temporal, sparse, or event-driven characteristics, computation offloads to neuromorphic chips; for dense matrix operations, GPUs remain optimal. This heterogeneous approach maximizes efficiency across diverse AI workloads while leveraging existing software ecosystems.

Intel's Falcon Shores platform exemplifies this vision, combining neuromorphic processing with traditional AI accelerators. AWS and NVIDIA's "AI Factories" initiative similarly envisions diverse specialized accelerators working in concert. The key challenge is software: developing programming models and compilers that can intelligently partition workloads across heterogeneous compute fabrics.

The Path to Scale

Scaling neuromorphic systems to brain-like complexity remains a grand challenge. The human brain contains roughly 86 billion neurons and 100 trillion synapses; Intel's Hala Point, the largest current system, reaches only about 1% of the brain's neuron count and roughly 0.1% of its synapse count. Reaching biological density will require advances in 3D integration, novel memory technologies, and perhaps fundamentally new materials beyond silicon.

Researchers envision convergence with other emerging technologies. Optical neuromorphic computing could achieve terahertz bandwidths for neural communication. Quantum-neuromorphic hybrids might combine quantum speedups with neuromorphic efficiency. 2D materials like graphene could enable unprecedented device density. The field remains far from fundamental physical limits, suggesting transformative breakthroughs may yet come.

2024
Hala Point deployed — Intel's 1.15B neuron system at Sandia. IBM NorthPole demonstrates 46× LLM inference speedup. BrainChip Akida 2 goes commercial.
2025
USC diffusive memristor breakthrough — Artificial neurons physically emulate brain chemistry. Akida Cloud launches. China's BIE-1 achieves 90% power savings.
2025-27
Hybrid integration — Neuromorphic accelerators integrated with CPU/GPU in commercial products. Software ecosystem matures (Lava, MetaTF, PyNN).
2028-30
Mainstream edge AI — Neuromorphic processors standard in autonomous vehicles, robotics, wearables. Market reaches $20B. Novel materials enter production.
2030+
Brain-scale systems — 100B+ neuron systems approach biological complexity. Potential pathway toward AGI through neuromorphic architectures.

The Efficiency Imperative

As AI scales toward ever-larger models, the industry faces a choice: continue exponential energy growth, or fundamentally rethink computing architecture. Neuromorphic systems offer the only known path to brain-like efficiency. Whether through digital SNNs, memristive crossbars, or hybrid approaches, the principles of co-located memory-compute and event-driven processing will shape computing's future.

07

Challenges & Open Questions

Despite remarkable progress, neuromorphic computing faces substantial challenges before widespread adoption. The software ecosystem remains immature compared to GPU-based deep learning, training algorithms for SNNs have not yet matched conventional network accuracy at scale, and perhaps most critically, no one has demonstrated how to run large language models efficiently on neuromorphic hardware.

The software challenge may prove more formidable than hardware. GPU-based deep learning benefits from decades of optimization in frameworks like TensorFlow and PyTorch, extensive model zoos, and a massive developer community. Neuromorphic equivalents—Intel's Lava, BrainChip's MetaTF, the community-driven PyNN—are far less mature. Developers face steep learning curves and limited documentation, slowing adoption even where hardware advantages exist.

Programming models for neuromorphic systems also differ fundamentally from conventional approaches. Rather than specifying layer-by-layer feedforward computation, developers must think in terms of spike timing, temporal dynamics, and event-driven processing. This paradigm shift requires new skills and intuitions, creating a talent gap that industry training programs are only beginning to address.

The LLM Challenge

Large language models represent AI's current frontier, yet no neuromorphic system can run models like GPT-4. Intel's Mike Davies has acknowledged: "The neuromorphic research field does not have a neuromorphic version of the transformer." The attention mechanism central to transformers involves dense matrix operations across entire sequences—the opposite of the sparse, local computation where neuromorphic systems excel.

Research into neuromorphic transformers is active but early. Potential approaches include sparse attention patterns, chunked sequence processing across neuromorphic cores, and hybrid architectures where transformers handle global attention while neuromorphic processors manage local feature extraction. Whether any approach can match GPU transformer performance remains uncertain.

Training at Scale

Training large SNNs remains significantly more difficult than training equivalent conventional networks. Surrogate gradient methods have narrowed the accuracy gap to 1-2% on benchmark tasks, but this gap persists even as network size increases. More fundamentally, training typically still occurs on GPUs, with models converted to spiking form for inference—negating potential training efficiency gains.

On-chip learning through STDP and related rules offers an alternative but has not yet achieved competitive performance on complex tasks. The dream of continuously learning systems that adapt to new data without explicit retraining remains elusive. Bridging this gap likely requires advances in both hardware (precise analog weight updates) and algorithms (credit assignment across time in spiking networks).

Software Ecosystem Maturity
Libraries and community size: PyTorch/TensorFlow far ahead of Intel Lava and MetaTF/PyNN.
SNN Accuracy Gap
Acc_SNN ≈ Acc_ANN − (1-2 percentage points)
Challenge: Surrogate gradients have narrowed but not closed the gap.
Goal: Match ANN accuracy while maintaining energy gains.
What's Working

Hardware efficiency: 100× gains demonstrated

Edge inference: Commercial products shipping

Sensor processing: Event cameras + SNNs excel

Research momentum: 200+ groups in Intel INRC

Investment: $20B market projection by 2030

What Needs Work

Software ecosystem: Far behind GPU frameworks

LLM support: No neuromorphic transformer yet

Training efficiency: Still relies on GPU training

Accuracy gap: 1-2% behind ANNs on benchmarks

Standardization: Fragmented hardware platforms

The Open Question: Path to AGI?

Some researchers believe neuromorphic computing could provide a more direct path toward artificial general intelligence than scaling conventional neural networks. The argument: biological brains achieve general intelligence through neuromorphic principles, so brain-inspired hardware might naturally support AGI capabilities. Others counter that algorithmic advances matter more than substrate, and GPUs can simulate any neuromorphic computation with enough power. The debate remains open—and may shape computing's trajectory for decades.

Samarjith Biswas, PhD
Research Scientist • University of Arizona
samarjithbiswas.com
© 2026