
[1]
Backus J. Can programming be liberated from the von Neumann style? A functional style and its algebra of programs. Commun ACM, 1978, 21, 613

[2]
Moore G E. Cramming more components onto integrated circuits. Electronics, 1965, 38, 114

[3]
Schaller R R. Moore's law: Past, present and future. IEEE Spectr, 1997, 34, 52

[4]
Mack C A. Fifty years of Moore's law. IEEE Trans Semicond Manufact, 2011, 24, 202

[5]
Waldrop M M. The chips are down for Moore's law. Nature, 2016, 530, 144

[6]
Wulf W A, McKee S A. Hitting the memory wall. SIGARCH Comput Archit News, 1995, 23, 20

[7]
Ielmini D, Wong H S P. In-memory computing with resistive switching devices. Nat Electron, 2018, 1, 333

[8]
Le Gallo M, Sebastian A, Mathis R, et al. Mixed-precision in-memory computing. Nat Electron, 2018, 1, 246

[9]
Kendall J D, Kumar S. The building blocks of a brain-inspired computer. Appl Phys Rev, 2020, 7, 011305

[10]
Sebastian A, Le Gallo M, Khaddam-Aljameh R, et al. Memory devices and applications for in-memory computing. Nat Nanotechnol, 2020, 15, 529

[11]
Lee S H, Zhu X J, Lu W D. Nanoscale resistive switching devices for memory and computing applications. Nano Res, 2020, 13, 1228

[12]
Upadhyay N K, Jiang H, Wang Z R, et al. Emerging memory devices for neuromorphic computing. Adv Mater Technol, 2019, 4, 1800589

[13]
Islam R, Li H T, Chen P Y, et al. Device and materials requirements for neuromorphic computing. J Phys D, 2019, 52, 113001

[14]
Krestinskaya O, James A P, Chua L O. Neuromemristive circuits for edge computing: A review. IEEE Trans Neural Netw Learn Syst, 2020, 31, 4

[15]
Rajendran B, Sebastian A, Schmuker M, et al. Low-power neuromorphic hardware for signal processing applications: A review of architectural and system-level design approaches. IEEE Signal Process Mag, 2019, 36, 97

[16]
Singh G, Chelini L, Corda S, et al. A review of near-memory computing architectures: Opportunities and challenges. 2018 21st Euromicro Conference on Digital System Design (DSD), 2018, 608

[17]
Singh G, Chelini L, Corda S, et al. Near-memory computing: Past, present, and future. Microprocess Microsyst, 2019, 71, 102868

[18]
Merolla P A, Arthur J V, Alvarez-Icaza R, et al. A million spiking-neuron integrated circuit with a scalable communication network and interface. Science, 2014, 345, 668

[19]
Chen Y J, Luo T, Liu S L, et al. DaDianNao: A machine-learning supercomputer. 2014 47th Annual IEEE/ACM International Symposium on Microarchitecture, 2014, 609

[20]
Davies M, Srinivasa N, Lin T H, et al. Loihi: A neuromorphic manycore processor with on-chip learning. IEEE Micro, 2018, 38, 82

[21]
Pei J, Deng L, Song S, et al. Towards artificial general intelligence with hybrid Tianjic chip architecture. Nature, 2019, 572, 106

[22]
Chua L. Memristor – The missing circuit element. IEEE Trans Circuit Theory, 1971, 18, 507

[23]
Wong H S P, Raoux S, Kim S, et al. Phase change memory. Proc IEEE, 2010, 98, 2201

[24]
Paz de Araujo C A, McMillan L D, Melnick B M, et al. Ferroelectric memories. Ferroelectrics, 1990, 104, 241

[25]
Apalkov D, Khvalkovskiy A, Watts S, et al. Spin-transfer torque magnetic random access memory (STT-MRAM). J Emerg Technol Comput Syst, 2013, 9, 1

[26]
Wang Z R, Wu H Q, Burr G W, et al. Resistive switching materials for information processing. Nat Rev Mater, 2020, 5, 173

[27]
Lanza M, Wong H S P, Pop E, et al. Recommended methods to study resistive switching devices. Adv Electron Mater, 2019, 5, 1800143

[28]
Waser R, Dittmann R, Staikov G, et al. Redox-based resistive switching memories – nanoionic mechanisms, prospects, and challenges. Adv Mater, 2009, 21, 2632

[29]
Pi S, Li C, Jiang H, et al. Memristor crossbar arrays with 6-nm half-pitch and 2-nm critical dimension. Nat Nanotechnol, 2019, 14, 35

[30]
Choi B J, Torrezan A C, Strachan J P, et al. High-speed and low-energy nitride memristors. Adv Funct Mater, 2016, 26, 5290

[31]
Lin P, Li C, Wang Z, et al. Three-dimensional memristor circuits as complex neural networks. Nat Electron, 2020, 3, 225

[32]
Jo S H, Chang T, Ebong I, et al. Nanoscale memristor device as synapse in neuromorphic systems. Nano Lett, 2010, 10, 1297

[33]
Abdelgawad A, Bayoumi M. High speed and area-efficient multiply accumulate (MAC) unit for digital signal processing applications. 2007 IEEE International Symposium on Circuits and Systems, 2007, 3199

[34]
Pawar R, Shriramwar D S S. Review on multiply-accumulate unit. Int J Eng Res Appl, 2017, 7, 9

[35]
Tung C W, Huang S H. A high-performance multiply-accumulate unit by integrating additions and accumulations into partial product reduction process. IEEE Access, 2020, 8, 87367

[36]
Zhang H, He J R, Ko S B. Efficient posit multiply-accumulate unit generator for deep learning applications. 2019 IEEE International Symposium on Circuits and Systems (ISCAS), 2019, 1

[37]
Camus V, Mei L Y, Enz C, et al. Review and benchmarking of precision-scalable multiply-accumulate unit architectures for embedded neural-network processing. IEEE J Emerg Sel Topics Circuits Syst, 2019, 9, 697

[38]
Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks. Commun ACM, 2017, 60, 84

[39]
Hu M, Strachan J P, Li Z Y, et al. Dot-product engine for neuromorphic computing: Programming 1T1M crossbar to accelerate matrix-vector multiplication. 2016 53rd ACM/EDAC/IEEE Design Automation Conference (DAC), 2016, 1

[40]
Hu M, Graves C E, Li C, et al. Memristor-based analog computation and neural network classification with a dot product engine. Adv Mater, 2018, 30, 1705914

[41]
Li C, Hu M, Li Y, et al. Analogue signal and image processing with large memristor crossbars. Nat Electron, 2018, 1, 52

[42]
Liu M Y, Xia L X, Wang Y, et al. Algorithmic fault detection for RRAM-based matrix operations. ACM Trans Des Autom Electron Syst, 2020, 25, 1

[43]
Wang M Q, Deng N, Wu H Q, et al. Theory study and implementation of configurable ECC on RRAM memory. 2015 15th Non-Volatile Memory Technology Symposium (NVMTS), 2015, 1

[44]
Niu D M, Xiao Y, Xie Y. Low power memristor-based ReRAM design with error correcting code. 17th Asia and South Pacific Design Automation Conference, 2012, 79

[45]
Hornik K, Stinchcombe M, White H. Multilayer feedforward networks are universal approximators. Neural Networks, 1989, 2, 359

[46]
LeCun Y, Bengio Y, Hinton G. Deep learning. Nature, 2015, 521, 436

[47]
Ledig C, Theis L, Huszár F, et al. Photo-realistic single image super-resolution using a generative adversarial network. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, 105

[48]
Hochreiter S, Schmidhuber J. Long short-term memory. Neural Comput, 1997, 9, 1735

[49]
Chen Y H, Krishna T, Emer J S, et al. Eyeriss: An energy-efficient reconfigurable accelerator for deep convolutional neural networks. IEEE J Solid-State Circuits, 2017, 52, 127

[50]
DeepBench, Baidu. https://github.com/baidu-research/DeepBench

[51]
Adolf R, Rama S, Reagen B, et al. Fathom: Reference workloads for modern deep learning methods. 2016 IEEE International Symposium on Workload Characterization (IISWC), 2016, 1

[52]
Huang X D, Li Y, Li H Y, et al. Forming-free, fast, uniform, and high endurance resistive switching from cryogenic to high temperatures in W/AlO_{x}/Al_{2}O_{3}/Pt bilayer memristor. IEEE Electron Device Lett, 2020, 41, 549

[53]
Choi S, Tan S H, Li Z F, et al. SiGe epitaxial memory for neuromorphic computing with reproducible high performance based on engineered dislocations. Nat Mater, 2018, 17, 335

[54]
Li Y B, Wang Z R, Midya R, et al. Review of memristor devices in neuromorphic computing: Materials sciences and device challenges. J Phys D, 2018, 51, 503002

[55]
Kim S G, Han J S, Kim H, et al. Recent advances in memristive materials for artificial synapses. Adv Mater Technol, 2018, 3, 1800457

[56]
Xia Q F, Yang J J. Memristive crossbar arrays for brain-inspired computing. Nat Mater, 2019, 18, 309

[57]
Zhu J D, Zhang T, Yang Y C, et al. A comprehensive review on emerging artificial neuromorphic devices. Appl Phys Rev, 2020, 7, 011312

[58]
Cristiano G, Giordano M, Ambrogio S, et al. Perspective on training fully connected networks with resistive memories: Device requirements for multiple conductances of varying significance. J Appl Phys, 2018, 124, 151901

[59]
Agarwal S, Plimpton S J, Hughart D R, et al. Resistive memory device requirements for a neural algorithm accelerator. 2016 International Joint Conference on Neural Networks (IJCNN), 2016, 929

[60]
Tsai H, Ambrogio S, Narayanan P, et al. Recent progress in analog memorybased accelerators for deep learning. J Phys D, 2018, 51, 283001

[61]
Chen P Y, Peng X C, Yu S M. NeuroSim: A circuit-level macro model for benchmarking neuro-inspired architectures in online learning. IEEE Trans Comput-Aided Des Integr Circuits Syst, 2018, 37, 3067

[62]
Yan B N, Li B, Qiao X M, et al. Resistive memory-based in-memory computing: From device and large-scale integration system perspectives. Adv Intell Syst, 2019, 1, 1900068

[63]
Chen J, Lin C Y, Li Y, et al. LiSiO_{x}-based analog memristive synapse for neuromorphic computing. IEEE Electron Device Lett, 2019, 40, 542

[64]
Oh S, Kim T, Kwak M, et al. HfZrO_{x}-based ferroelectric synapse device with 32 levels of conductance states for neuromorphic applications. IEEE Electron Device Lett, 2017, 38, 732

[65]
Park J, Kwak M, Moon K, et al. TiO_{x}-based RRAM synapse with 64 levels of conductance and symmetric conductance change by adopting a hybrid pulse scheme for neuromorphic computing. IEEE Electron Device Lett, 2016, 37, 1559

[66]
Cheng Y, Wang C, Chen H B, et al. A large-scale in-memory computing for deep neural network with trained quantization. Integration, 2019, 69, 345

[67]
Yang Q, Li H, Wu Q. A quantized training method to enhance accuracy of ReRAM-based neuromorphic systems. 2018 IEEE International Symposium on Circuits and Systems (ISCAS), 2018, 1

[68]
Yu S M, Li Z W, Chen P Y, et al. Binary neural network with 16 Mb RRAM macro chip for classification and online training. 2016 IEEE International Electron Devices Meeting (IEDM), 2016, 16.2.1

[69]
Bayat F M, Prezioso M, Chakrabarti B, et al. Implementation of multilayer perceptron network with highly uniform passive memristive crossbar circuits. Nat Commun, 2018, 9, 2331

[70]
Yao P, Wu H Q, Gao B, et al. Face classification using electronic synapses. Nat Commun, 2017, 8, 15199

[71]
Liu Q, Gao B, Yao P, et al. A fully integrated analog ReRAM-based 78.4 TOPS/W compute-in-memory chip with fully parallel MAC computing. 2020 IEEE International Solid-State Circuits Conference (ISSCC), 2020, 500

[72]
Li C, Belkin D, Li Y N, et al. Efficient and self-adaptive in situ learning in multilayer memristor neural networks. Nat Commun, 2018, 9, 2385

[73]
Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv: 1409.1556, 2014

[74]
Cai F, Correll J M, Lee S H, et al. A fully integrated reprogrammable memristor–CMOS system for efficient multiply–accumulate operations. Nat Electron, 2019, 2, 290

[75]
LeCun Y. LeNet-5, convolutional neural networks. http://yann.lecun.com/exdb/lenet, 2015

[76]
He K M, Zhang X Y, Ren S Q, et al. Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, 770

[77]
Deguchi Y, Maeda K, Suzuki S, et al. Error-reduction controller techniques of TaO_{x}-based ReRAM for deep neural networks to extend data-retention lifetime by over 1700x. 2018 IEEE International Memory Workshop (IMW), 2018, 1

[78]
Chen J, Pan W Q, Li Y, et al. High-precision symmetric weight update of memristor by gate voltage ramping method for convolutional neural network accelerator. IEEE Electron Device Lett, 2020, 41, 353

[79]
Wu K C, Wang X P, Li M. Better performance of memristive convolutional neural network due to stochastic memristors. International Symposium on Neural Networks, 2019, 39

[80]
Xiang Y C, Huang P, Zhao Y D, et al. Impacts of state instability and retention failure of filamentary analog RRAM on the performance of deep neural network. IEEE Trans Electron Devices, 2019, 66, 4517

[81]
Pan W Q, Chen J, Kuang R, et al. Strategies to improve the accuracy of memristor-based convolutional neural networks. IEEE Trans Electron Devices, 2020, 67, 895

[82]
Gokmen T, Onen M, Haensch W. Training deep convolutional neural networks with resistive cross-point devices. Front Neurosci, 2017, 11, 538

[83]
Lin Y H, Wang C H, Lee M H, et al. Performance impacts of analog ReRAM non-ideality on neuromorphic computing. IEEE Trans Electron Devices, 2019, 66, 1289

[84]
Gao L G, Chen P Y, Yu S M. Demonstration of convolution kernel operation on resistive cross-point array. IEEE Electron Device Lett, 2016, 37, 870

[85]
Kwak M, Park J, Woo J, et al. Implementation of convolutional kernel function using 3D TiO_{x} resistive switching devices for image processing. IEEE Trans Electron Devices, 2018, 65, 4716

[86]
Huo Q, Song R J, Lei D Y, et al. Demonstration of 3D convolution kernel function based on 8-layer 3D vertical resistive random access memory. IEEE Electron Device Lett, 2020, 41, 497

[87]
Yao P, Wu H Q, Gao B, et al. Fully hardware-implemented memristor convolutional neural network. Nature, 2020, 577, 641

[88]
Chen W H, Dou C, Li K X, et al. CMOS-integrated memristive non-volatile computing-in-memory for AI edge processors. Nat Electron, 2019, 2, 420

[89]
Xue C X, Chang T W, Chang T C, et al. Embedded 1Mb ReRAM-based computing-in-memory macro with multibit input and weight for CNN-based AI edge processors. IEEE J Solid-State Circuits, 2020, 55, 203

[90]
Chen F, Song L H, Chen Y R. ReGAN: A pipelined ReRAM-based accelerator for generative adversarial networks. 2018 23rd Asia and South Pacific Design Automation Conference (ASP-DAC), 2018, 178

[91]
Lin Y D, Wu H Q, Gao B, et al. Demonstration of generative adversarial network by intrinsic random noises of analog RRAM devices. 2018 IEEE International Electron Devices Meeting (IEDM), 2018, 3.4.1

[92]
Li C, Wang Z R, Rao M Y, et al. Long short-term memory networks in memristor crossbar arrays. Nat Mach Intell, 2019, 1, 49

[93]
Tsai H, Ambrogio S, Mackin C, et al. Inference of long short-term memory networks at software-equivalent accuracy using 2.5M analog phase change memory devices. 2019 Symposium on VLSI Technology, 2019

[94]
Smagulova K, Krestinskaya O, James A P. A memristor-based long short-term memory circuit. Analog Integr Circ Sig Process, 2018, 95, 467

[95]
Wen S P, Wei H Q, Yang Y, et al. Memristive LSTM network for sentiment analysis. IEEE Trans Syst Man Cybern: Syst, 2019, 1

[96]
Smagulova K, James A P. A survey on LSTM memristive neural network architectures and applications. Eur Phys J Spec Top, 2019, 228, 2313

[97]
Yin S H, Sun X Y, Yu S M, et al. A parallel RRAM synaptic array architecture for energy-efficient recurrent neural networks. 2018 IEEE International Workshop on Signal Processing Systems (SiPS), 2018, 13

[98]
Zidan M A, Jeong Y, Lee J, et al. A general memristor-based partial differential equation solver. Nat Electron, 2018, 1, 411

[99]
Jouppi N P, Young C, Patil N, et al. In-datacenter performance analysis of a tensor processing unit. Proceedings of the 44th Annual International Symposium on Computer Architecture, 2017

[100]
Sun Z, Pedretti G, Ambrosi E, et al. Solving matrix equations in one step with cross-point resistive arrays. PNAS, 2019, 116, 4123

[101]
Sun Z, Ambrosi E, Pedretti G, et al. In-memory PageRank accelerator with a cross-point array of resistive memories. IEEE Trans Electron Devices, 2020, 67, 1466

[102]
Sun Z, Pedretti G, Ielmini D. Fast solution of linear systems with analog resistive switching memory (RRAM). 2019 IEEE International Conference on Rebooting Computing (ICRC), 2019, 1

[103]
Sun Z, Pedretti G, Mannocci P, et al. Time complexity of in-memory solution of linear systems. IEEE Trans Electron Devices, 2020, 67, 2945

[104]
Sun Z, Pedretti G, Ambrosi E, et al. In-memory eigenvector computation in time O(1). Adv Intell Syst, 2020, 2, 2000042

[105]
Feng Y, Zhan X P, Chen J Z. Flash memory based computing-in-memory to solve time-dependent partial differential equations. 2020 IEEE Silicon Nanoelectronics Workshop (SNW), 2020, 27

[106]
Zhou H L, Zhao Y H, Xu G X, et al. Chipscale optical matrix computation for PageRank algorithm. IEEE J Sel Top Quantum Electron, 2020, 26, 1

[107]
Milo V, Malavena G, Compagnoni C M, et al. Memristive and CMOS devices for neuromorphic computing. Materials, 2020, 13, 166

[108]
Ambrogio S, Narayanan P, Tsai H, et al. Equivalent-accuracy accelerated neural-network training using analogue memory. Nature, 2018, 558, 60

[109]
Jerry M, Chen P Y, Zhang J C, et al. Ferroelectric FET analog synapse for acceleration of deep neural network training. 2017 IEEE International Electron Devices Meeting (IEDM), 2017, 6.2.1

[110]
Guo X, Bayat F M, Bavandpour M, et al. Fast, energy-efficient, robust, and reproducible mixed-signal neuromorphic classifier based on embedded NOR flash memory technology. 2017 IEEE International Electron Devices Meeting (IEDM), 2017, 6.5.1

[111]
Bichler O, Suri M N, Querlioz D, et al. Visual pattern extraction using energy-efficient “2-PCM synapse” neuromorphic architecture. IEEE Trans Electron Devices, 2012, 59, 2206

[112]
Suri M N, Bichler O, Querlioz D, et al. Phase change memory as synapse for ultra-dense neuromorphic systems: Application to complex visual pattern extraction. 2011 International Electron Devices Meeting, 2011, 4.4.1

[113]
Burr G W, Shelby R M, Sidler S, et al. Experimental demonstration and tolerancing of a large-scale neural network (165 000 synapses) using phase-change memory as the synaptic weight element. IEEE Trans Electron Devices, 2015, 62, 3498

[114]
Oh S, Huang Z S, Shi Y H, et al. The impact of resistance drift of phase change memory (PCM) synaptic devices on artificial neural network performance. IEEE Electron Device Lett, 2019, 40, 1325

[115]
Spoon K, Ambrogio S, Narayanan P, et al. Accelerating deep neural networks with analog memory devices. 2020 IEEE International Memory Workshop (IMW), 2020, 1

[116]
Chen L, Wang T Y, Dai Y W, et al. Ultra-low power Hf_{0.5}Zr_{0.5}O_{2} based ferroelectric tunnel junction synapses for hardware neural network applications. Nanoscale, 2018, 10, 15826

[117]
Boyn S, Grollier J, Lecerf G, et al. Learning through ferroelectric domain dynamics in solidstate synapses. Nat Commun, 2017, 8, 14736

[118]
Hu V P H, Lin H H, Zheng Z A, et al. Split-gate FeFET (SG-FeFET) with dynamic memory window modulation for non-volatile memory and neuromorphic applications. 2019 Symposium on VLSI Technology, 2019

[119]
Sun X Y, Wang P N, Ni K, et al. Exploiting hybrid precision for training and inference: A 2T-1FeFET based analog synaptic weight cell. 2018 IEEE International Electron Devices Meeting (IEDM), 2018, 3.1.1

[120]
Lee S T, Kim H, Bae J H, et al. High-density and highly-reliable binary neural networks using NAND flash memory cells as synaptic devices. 2019 IEEE International Electron Devices Meeting (IEDM), 2019, 38.4.1

[121]
Bavandpour M, Sahay S, Mahmoodi M R, et al. 3D-aCortex: An ultra-compact energy-efficient neurocomputing platform based on commercial 3D-NAND flash memories. arXiv: 1908.02472, 2019

[122]
Xiang Y C, Huang P, Han R Z, et al. Efficient and robust spike-driven deep convolutional neural networks based on NOR flash computing array. IEEE Trans Electron Devices, 2020, 67, 2329

[123]
Xiang Y C, Huang P, Yang H Z, et al. Storage reliability of multibit flash oriented to deep neural network. 2019 IEEE International Electron Devices Meeting (IEDM), 2019, 38.2.1