Traditional computing is still largely based on the von Neumann architecture, which plays a vital role in many fields. Computers are ideal platforms for solving complex computational problems, such as performing precise numerical calculations. However, because of the von Neumann bottleneck, modern computer systems typically separate information processing from hierarchical storage, which inevitably limits computational efficiency. In addition, traditional computers face sharply rising energy consumption as computing speed increases. For example, AlphaGo used 1202 CPUs and 176 GPUs and consumed about 3000 megajoules in a five-hour match, equivalent to roughly 300 days of energy intake for an adult. These problems become more serious when computers must handle the enormous amounts of information involved in interacting with the real world and in imprecisely specified processing tasks.
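The 300-day figure above can be sanity-checked with a quick back-of-envelope calculation. Note that the assumed daily intake of 2,500 kcal is our own illustrative value, not a number from the original comparison:

```python
# Back-of-envelope check of the AlphaGo energy comparison.
# Assumption (illustrative, not from the source): an adult consumes
# roughly 2,500 kcal per day, with 1 kcal = 4184 J.
ALPHAGO_MATCH_ENERGY_J = 3000e6      # 3000 MJ for a five-hour match
DAILY_INTAKE_J = 2500 * 4184         # ≈ 10.46 MJ per day

days_equivalent = ALPHAGO_MATCH_ENERGY_J / DAILY_INTAKE_J
print(round(days_equivalent))        # on the order of 300 days
```

With a slightly lower assumed intake, the figure comes out closer to the 300 days quoted above; the point of the comparison stands regardless of the exact dietary assumption.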
The human brain contains about 10¹⁵ parallel synaptic connections between roughly 10¹¹ neurons, and it performs information processing and storage concurrently. This gives it faster data processing and lower power consumption (≈ 20 W) than traditional von Neumann computers when performing complex functions such as perception, hearing, vision, and motion[1-3]. Hence, the human brain has inspired scientists around the world to pursue neuromorphic computing, which currently follows two main approaches: software simulation and hardware implementation. Software simulation, however, often demands huge physical space and high energy consumption. These problems can be mitigated by implementing massively parallel neural networks in hardware.
In hardware, artificial neural networks are constructed from electronic functional devices. The synapse is the basic unit of neuronal connection, and it stores and processes information through tuning of the synaptic weight. In the development of complementary metal–oxide–semiconductor (CMOS)-based digital systems, CMOS analog circuits have been proposed to emulate synaptic functions, but their energy and area costs grow rapidly with complexity: dozens of transistors are required to emulate a single synapse. For example, IBM's TrueNorth chip integrated 5.4 billion transistors to implement 1 million neurons and 256 million non-plastic synapses with a newly designed architecture. The inefficiency of emulating the human brain with CMOS circuits stems from their underlying digital operation: the energy cost of emulating even a simple synaptic response, such as an excitatory postsynaptic current (EPSC), in CMOS can be orders of magnitude higher than the consumption of a biological synapse [4-6].
In recent years, novel electronic/ionic hybrid devices for neuromorphic computing have attracted much attention because they can overcome the shortcomings of CMOS-based analog circuits. At present, these devices fall into two types. The first is two-terminal devices, including resistive random access memory (ReRAM)[9-12], phase-change memory (PCM), magnetoresistive random access memory (MRAM)[14-16], ferroelectric tunneling junctions (FTJs)[17-19], and so on. The second is three-terminal devices, including electrolyte-gated transistors (EGTs)[2, 21-23], floating-gate transistors, and ferroelectric field-effect transistors (FeFETs).
Synaptic devices based on EGTs have attracted considerable attention[25, 26]. In EGTs, the semiconducting channel is coupled to a gate electrode through an electrolyte that conducts ions but insulates electrons. Morphologically, EGTs are the devices most similar to biological synapses, and their bionic structure, with physically separated write and read terminals, makes them a good choice for synaptic electronics. They can emulate various neural functions such as short-term plasticity (STP) and long-term potentiation (LTP), spike-timing-dependent plasticity (STDP), spatiotemporal information processing, and classical conditioning. Owing to the strong electric-double-layer (EDL) effect, synaptic devices based on EDL transistors (EDLTs) usually have very low power consumption. Moreover, in EGTs the "write" operation is carried out at the gate, while the "read" operation is carried out in the channel, realizing spatial separation of reading and writing. This enables EGTs to integrate information transmission and storage, providing a basis for more complex neuromorphic computing capabilities.
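As an illustration of the STDP behavior such devices emulate, the following minimal sketch implements the standard pairwise exponential STDP model. The amplitudes and time constant are hypothetical values chosen for illustration, not parameters of any device discussed in this review:

```python
import math

# Pairwise STDP sketch: the synaptic weight change depends exponentially
# on the spike-timing difference dt = t_post - t_pre.
# A_PLUS, A_MINUS, and TAU are assumed, illustrative values.
A_PLUS, A_MINUS = 0.1, 0.12   # learning amplitudes (dimensionless)
TAU = 20.0                    # time constant in ms

def stdp_dw(dt_ms: float) -> float:
    """Return the weight change for dt = t_post - t_pre (ms)."""
    if dt_ms > 0:    # pre-synaptic spike before post-synaptic: potentiation
        return A_PLUS * math.exp(-dt_ms / TAU)
    elif dt_ms < 0:  # post-synaptic spike before pre-synaptic: depression
        return -A_MINUS * math.exp(dt_ms / TAU)
    return 0.0

print(stdp_dw(10.0))   # positive change (potentiation)
print(stdp_dw(-10.0))  # negative change (depression)
```

In an EGT implementation, this weight change corresponds to a measured shift in channel conductance after correlated pre- and post-synaptic voltage spikes are applied to the gate and channel terminals.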
In this review, we survey recent research on EGTs for neuromorphic applications. The review is structured as follows. In Section 1, we introduce the mechanisms and types of EGTs. In Section 2, we describe the structure of EGT-based artificial synapses. In Section 3, we discuss oxide-semiconductor channel materials for EGT-based synapses, classified by active ion type: O²⁻-type and H⁺-type EGTs. In Section 4, we review EGT-based synapses with channels consisting of 2D materials, and in Section 5, those with channels consisting of organic semiconductors. In Section 6, we discuss the functions of EGTs for neuromorphic computing and the imitation of biological features. Finally, we provide conclusions and perspectives for EGTs.