
Accelerating hybrid and compact neural networks targeting perception and control domains with coarse-grained dataflow reconfiguration

Zheng Wang1, Libing Zhou2, Wenting Xie2, Weiguang Chen1, Jinyuan Su2, Wenxuan Chen2, Anhua Du2, Shanliao Li3, Minglan Liang3, Yuejin Lin2, Wei Zhao2, Yanze Wu4, Tianfu Sun1, Wenqi Fang1 and Zhibin Yu1

 Corresponding author: Zheng Wang, Email: zheng.wang@siat.ac.cn


Abstract: Driven by the continuous scaling of nanoscale semiconductor technologies, recent years have witnessed steady advances in machine learning techniques and applications. Dedicated machine learning accelerators, especially for neural networks, have attracted the interest of computer architects and VLSI designers. State-of-the-art accelerators increase performance by deploying a large number of processing elements, yet still suffer from degraded resource utilization on hybrid and non-standard algorithmic kernels. In this work, we exploit the properties of important neural network kernels for both perception and control to propose a reconfigurable dataflow processor, which adjusts the data flow patterns, the functionality of the processing elements, and the on-chip storage according to the network kernel. In contrast to state-of-the-art fine-grained dataflow techniques, the proposed coarse-grained dataflow reconfiguration approach enables extensive sharing of computing and storage resources. Three hybrid networks, for MobileNet, deep reinforcement learning, and sequence classification, are constructed and analyzed with customized instruction sets and a toolchain. A test chip has been designed and fabricated in UMC 65 nm CMOS technology, with a measured power consumption of 7.51 mW at 100 MHz on a die of 1.8 × 1.8 mm2.

Key words: CMOS technology; digital integrated circuits; neural networks; dataflow architecture



[1]
Chen Y, Krishna T, Emer J, et al. Eyeriss: An energy-efficient reconfigurable accelerator for deep convolutional neural networks. IEEE J Solid-State Circuits, 2017, 52, 127 doi: 10.1109/JSSC.2016.2616357
[2]
Jouppi N, Young C, Patil N, et al. In-datacenter performance analysis of a tensor processing unit. ACM/IEEE International Symposium on Computer Architecture, 2017, 1
[3]
Chen Y, Luo T, Liu S, et al. DaDianNao: A machine-learning supercomputer. ACM/IEEE International Symposium on Microarchitecture, 2015, 609
[4]
Cong J, Xiao B. Minimizing computation in convolutional neural networks. Artificial Neural Networks and Machine Learning, 2014, 281
[5]
Yin S, Ouyang P, Tang S, et al. A high energy efficient reconfigurable hybrid neural network processor for deep learning applications. IEEE J Solid-State Circuits, 2017, 53, 968 doi: 10.1109/JSSC.2017.2778281
[6]
Russakovsky O, Deng J, Su H, et al. ImageNet large scale visual recognition challenge. Int J Comput Vision, 2015, 115, 211 doi: 10.1007/s11263-015-0816-y
[7]
Iandola F, Han S, Moskewicz M, et al. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and < 0.5 MB model size. arXiv: 1602.07360, 2016
[8]
Howard A, Zhu M, Chen B, et al. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv: 1704.04861, 2017
[9]
Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv: 1409.1556, 2014
[10]
Yang C, Wang Y, Wang X, et al. A reconfigurable accelerator based on fast winograd algorithm for convolutional neural network in internet of things. IEEE International Conference on Solid-State and Integrated Circuit Technology, 2018, 1
[11]
Vasilache N, Johnson J, Mathieu M, et al. Fast convolutional nets with fbfft: A GPU performance evaluation. arXiv: 1412.7580, 2014
[12]
Guo K, Zeng S, Yu J, et al. A survey of FPGA-based neural network accelerator. arXiv: 1712.08934, 2017
[13]
Mnih V, Kavukcuoglu K, Silver D, et al. Playing Atari with deep reinforcement learning. arXiv: 1312.5602, 2013
[14]
Mnih V, Kavukcuoglu K, Silver D, et al. Human-level control through deep reinforcement learning. Nature, 2015, 518, 529 doi: 10.1038/nature14236
[15]
Silver D, Huang A, Maddison C, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 2016, 529, 484 doi: 10.1038/nature16961
[16]
Silver D, Schrittwieser J, Simonyan K, et al. Mastering the game of Go without human knowledge. Nature, 2017, 550, 354 doi: 10.1038/nature24270
[17]
Chen Y, Emer J, Sze V. Eyeriss: A spatial architecture for energy-efficient dataflow for convolutional neural networks. ACM/IEEE International Symposium on Computer Architecture, 2016, 44, 367
[18]
Gers F, Schmidhuber J, Cummins F. Learning to forget: Continual prediction with LSTM. 9th International Conference on Artificial Neural Networks, 1999, 850 doi: 10.1049/cp:19991218
[19]
Basterretxea K, Tarela J, del Campo I. Approximation of sigmoid function and the derivative for hardware implementation of artificial neurons. IEE Proc Circuits, Devices Syst, 2004, 151, 18
[20]
Sutton R, Barto A. Reinforcement learning: An introduction. MIT Press, 2018
[21]
Gulli A, Pal S. Deep learning with Keras. Packt Publishing Ltd, 2017
[22]
Li S, Ouyang N, Wang Z. Accelerator design for convolutional neural network with vertical data streaming. IEEE Asia Pacific Conference on Circuits and Systems, 2018, 544
[23]
Lin D, Talathi S, Annapureddy S. Fixed point quantization of deep convolutional networks. International Conference on Machine Learning, 2016, 2849
[24]
[25]
Chen W, Wang Z, Li S, et al. Accelerating compact convolutional neural networks with multi-threaded data streaming. IEEE Computer Society Annual Symposium on VLSI, 2019, 519
[26]
Spryn M. Solving a maze with Q learning. www.mitchellspryn.com/2017/10/28/Solving-A-Maze-With-Q-Learning.html
[27]
Liang M, Chen M, Wang Z. A CGRA based neural network inference engine for deep reinforcement learning. IEEE Asia Pacific Conference on Circuits and Systems, 2018, 519
[28]
Chen Y, Krishna T, Emer J, et al. Eyeriss: An energy-efficient reconfigurable accelerator for deep convolutional neural networks. IEEE International Solid-State Circuits Conference (ISSCC), 2016, 127
[29]
Moons B, Uytterhoeven R, Dehaene W, et al. ENVISION: A 0.26-to-10 TOPS/W subword-parallel dynamic-voltage-accuracy-frequency-scalable convolutional neural network processor in 28 nm FDSOI. IEEE International Solid-State Circuits Conference (ISSCC), 2017, 246
[30]
Yin S, Ouyang P, Tang S, et al. 1.06-to-5.09 TOPS/W reconfigurable hybrid-neural-network processor for deep learning applications. Symposium on VLSI Circuits, 2017
Fig. 1.  (Color online) Structure of hybrid neural network targeting perception and control with layer-wise algorithmic kernels.

Fig. 2.  Orientation and dimensions of compact CNN filters.

Fig. 3.  (Color online) Structure and operation distribution for MobileNet.

Fig. 4.  (Color online) Reconfiguration of dataflow, PE and storage functionalities for standard kernels.

Fig. 5.  (Color online) Reconfiguration of dataflow for pointwise (PW) and depthwise (DW) convolution kernels.

Fig. 6.  (Color online) Microarchitecture of proposed reconfigurable dataflow processor.

Fig. 7.  (Color online) Instruction set architecture (ISA) and developing toolchain.

Fig. 8.  (Color online) Operational phases of proposed architecture.

Fig. 9.  (Color online) Comparison of Q iteration time between proposed architecture and host machine (CPU)[27].

Fig. 10.  (Color online) (a) ASIC layout with 16 reconfigurable PEs. Logic (middle) is surrounded by 18 SRAM blocks. (b) Micrograph of the taped-out chip in UMC 65 nm low-leakage CMOS technology.

Fig. 11.  (Color online) Views of the testing board. The front view shows the test IC in a CLCC84 package and socket. The rear view shows the FPGA that interfaces the IC with the host machine.

Fig. 12.  (Color online) Testing infrastructure with measurement of both signal voltages and currents.

Fig. 13.  (Color online) Runtime current measurement across different phases of operation under 30 MHz frequency.

Table 1.   Operation characteristics among multiple standard neural network kernels.

NN layer | Convolution | Pooling | FC | LSTM | State-action | Shortcut
Operands | Sparse matrix | Vector | Dense matrix | Dense matrix | Dense matrix | Vector
Operators | Sum of product (SoP) | Max, min, mean | SoP | SoP, vector multiply, vector sum | SoP | Vector sum
Nonlinear functions | ReLU, sigmoid | None | ReLU, sigmoid | Sigmoid, tangent | ReLU, sigmoid | None
Dataflow property | Serial in/out, thread-level parallelism | Parallel in/out | Serial in/out, thread-level parallelism | Serial in/out, shared among gates | Serial in/out, action-node iteration | Parallel in/out
Buffering property | Activation dominant | Activation dominant | Weight dominant | Weight, states | Weight, states, actions | Activation pointer
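Table 1 is the basis of the coarse-grained reconfiguration: each kernel type maps onto one dataflow pattern, one PE operator set, and one buffering role. The sketch below (Python) shows one possible per-layer configuration record built directly from the table; the field names, values, and the KernelConfig container are our own illustrative assumptions and do not reflect the actual instruction encoding of the chip.

```python
# Illustrative sketch only: a coarse-grained, layer-wise configuration record
# mirroring the categories of Table 1. Names and encodings are assumptions
# for clarity, not the taped-out chip's ISA.
from dataclasses import dataclass

@dataclass
class KernelConfig:
    kernel: str       # "conv", "pool", "fc", "lstm", "state_action", "shortcut"
    dataflow: str     # "serial_in_out" or "parallel_in_out" (Table 1, dataflow property)
    pe_op: str        # "sop" (sum of products), "max_min_mean", "vector_sum"
    nonlinear: str    # "relu_sigmoid", "sigmoid_tanh", or "none"
    buffer_role: str  # which operand dominates on-chip storage (Table 1, buffering property)

# One configuration entry per kernel type of Table 1.
CONFIGS = {
    "conv":         KernelConfig("conv", "serial_in_out", "sop", "relu_sigmoid", "activation"),
    "pool":         KernelConfig("pool", "parallel_in_out", "max_min_mean", "none", "activation"),
    "fc":           KernelConfig("fc", "serial_in_out", "sop", "relu_sigmoid", "weight"),
    "lstm":         KernelConfig("lstm", "serial_in_out", "sop", "sigmoid_tanh", "weight_state"),
    "state_action": KernelConfig("state_action", "serial_in_out", "sop", "relu_sigmoid", "weight_state_action"),
    "shortcut":     KernelConfig("shortcut", "parallel_in_out", "vector_sum", "none", "activation_pointer"),
}
```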

Table 2.   Number of operations of standard, DW and PW convolution layers.

Layer | Filter size | Input size | MAC amounts
Standard conv | ${D_{\rm K}} \times {D_{\rm K}} \times M \times N$ | ${D_{\rm F}} \times {D_{\rm F}} \times M$ | ${D_{\rm K}} \cdot {D_{\rm K}} \cdot M \cdot N \cdot {D_{\rm F}} \cdot {D_{\rm F}}$
Conv DW | ${D_{\rm K}} \times {D_{\rm K}} \times M$ | ${D_{\rm F}} \times {D_{\rm F}} \times M$ | ${D_{\rm K}} \cdot {D_{\rm K}} \cdot M \cdot {D_{\rm F}} \cdot {D_{\rm F}}$
Conv PW | $1 \times 1 \times M \times N$ | ${D_{\rm F}} \times {D_{\rm F}} \times M$ | $M \cdot N \cdot {D_{\rm F}} \cdot {D_{\rm F}}$
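The MAC counts of Table 2 can be checked with a few lines of arithmetic. The sketch below (Python; the helper function names are ours) reproduces the first MobileNet entries of Table 3, assuming the 3 × 3 kernels and the 112 × 112 output feature map of the stride-2 first layer.

```python
# Sketch of the MAC-count formulas in Table 2 (helper names are ours).
def macs_standard(dk, df, m, n):
    """Standard convolution: DK*DK*M*N*DF*DF multiply-accumulates."""
    return dk * dk * m * n * df * df

def macs_depthwise(dk, df, m):
    """Depthwise convolution: DK*DK*M*DF*DF multiply-accumulates."""
    return dk * dk * m * df * df

def macs_pointwise(df, m, n):
    """Pointwise (1x1) convolution: M*N*DF*DF multiply-accumulates."""
    return m * n * df * df

# Cross-check against the first MobileNet layers of Table 3:
print(macs_standard(3, 112, 3, 32))   # 10,838,016  ~ 10.84M  (Conv0 Std.)
print(macs_depthwise(3, 112, 32))     #  3,612,672  ~  3.61M  (Conv1 DW)
print(macs_pointwise(112, 32, 64))    # 25,690,112  ~ 25.69M  (Conv1 PW)
```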

Table 3.   Benchmark of performance for MobileNet with proposed architecture[25].

Layer type | Input size | #. MACs | Max BW utilization | Max PE utilization | #. streams | ns/stream | Latency (ms) | Single-threaded latency (ms)[24]
(Columns 4 to 8 refer to the multi-threaded streaming architecture @ 100 MHz.)
Conv0 Std. | 224 × 224 × 3 | 10.84M | 3% | 6.70% | 25088 | 3340 | 83.8 | 83.8
Conv1 DW | 112 × 112 × 32 | 3.61M | 10% | 6.70% | 25088 | 2080 | 17.4 | 380.5
Conv1 PW | 112 × 112 × 32 | 25.69M | 100% | 100.00% | 3136 | 1835 | 5.8 | 92.1
Conv2 DW | 112 × 112 × 64 | 1.81M | 10% | 6.70% | 12544 | 2080 | 8.7 | 190.2
Conv2 PW | 56 × 56 × 64 | 25.69M | 100% | 100.00% | 1568 | 3520 | 5.5 | 88.3
Conv3 DW | 56 × 56 × 128 | 3.61M | 10% | 6.70% | 25088 | 2080 | 17.4 | 380.5
Conv3 PW | 56 × 56 × 128 | 51.38M | 100% | 100.00% | 1568 | 6890 | 10.8 | 172.9
Conv4 DW | 56 × 56 × 128 | 0.90M | 10% | 6.70% | 6272 | 2080 | 4.3 | 95.1
Conv4 PW | 28 × 28 × 128 | 25.69M | 100% | 100.00% | 784 | 6890 | 5.4 | 86.4
Conv5 DW | 28 × 28 × 256 | 1.81M | 10% | 6.70% | 12544 | 2080 | 8.7 | 190.2
Conv5 PW | 28 × 28 × 256 | 51.38M | 100% | 100.00% | 784 | 13630 | 10.7 | 171
Conv6 DW | 28 × 28 × 256 | 0.45M | 10% | 6.70% | 3136 | 2080 | 2.2 | 47.6
Conv6 PW | 14 × 14 × 256 | 25.69M | 100% | 100.00% | 416 | 13630 | 5.7 | 90.7
Conv7-11 DW | 14 × 14 × 512 | 0.90M | 10% | 6.70% | 6272 | 2080 | 4.3 | 95.1
Conv7-11 PW | 14 × 14 × 512 | 51.38M | 100% | 100.00% | 416 | 27110 | 11.3 | 180.4
Conv12 DW | 14 × 14 × 512 | 0.23M | 10% | 6.70% | 1568 | 2080 | 1.1 | 23.8
Conv12 PW | 7 × 7 × 512 | 25.69M | 100% | 100.00% | 256 | 27110 | 6.9 | 111
Conv13 DW | 7 × 7 × 1024 | 0.45M | 10% | 6.70% | 3136 | 2080 | 2.2 | 47.6
Conv13 PW | 7 × 7 × 1024 | 51.38M | 100% | 100.00% | 256 | 54070 | 13.8 | 221.5
Avg Pool | 7 × 7 × 1024 | 0.05M | 10% | 6.70% | 64 | 1767 | 0.1 | 0.1
FC | 1 × 1 × 1024 | 1.02M | 55% | 6.70% | 63 | 90218 | 5.7 | 5.7
Total | | 569M | | | | | 294.3 | 3856.5

Table 4.   Benchmark of performance for LSTM networks among three processing architectures.

Network layer specification:
1st LSTM layer: In nodes 3, Out nodes 12, Recurrent nodes 48
2nd LSTM layer (if needed): In nodes 12, Out nodes 12, Recurrent nodes 48
1st FC layer (if needed): In nodes 12, Out nodes 12
2nd FC layer: In nodes 12, Out nodes 5

Network | 1 LSTM + 1 FC (ms/sample) | 2 LSTM + 1 FC (ms/sample) | 2 LSTM + 2 FC (ms/sample) | Average power consumption
CPU Intel i7-8700 @ 3.20 GHz | 11.981 | 22.362 | 23.962 | 60–70 W
CPU Intel i7 with GPU NVIDIA GTX 1050 | 2.87 | 4.94 | 5.74 | 50–70 W
Proposed design with 16 PEs @ 100 MHz * | 1.033 | 1.157 | 1.957 | 30–50 mW
* Simulation result; does not account for data transfer between disk storage and DRAM.

Table 5.   Runtime power consumption in mW for different phases and frequencies.

Frequency (MHz) | Initialize | Conv1 | Pool1 | Conv2 | Pool2 | FC | Idle | Avg. (Conv1–FC)
30 | 1.92 | 2.84 | 2.58 | 3.04 | 2.71 | 3.32 | 1.92 | 2.62
60 | 3.32 | 5.29 | 4.76 | 5.47 | 5.09 | 5.74 | 3.32 | 4.71
100 | 5.19 | 8.56 | 7.67 | 8.71 | 8.26 | 8.97 | 5.19 | 7.51
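The averaged runtime power of Table 5 scales nearly linearly with clock frequency. The short sketch below (Python) fits a line P(f) ≈ P_static + k·f through the three averages; the resulting split between a static component and a dynamic, frequency-proportional component is our own estimate derived from the table, not a figure reported in the paper.

```python
# Rough linear fit over the averaged runtime power (Avg. column) of Table 5.
# The static/dynamic decomposition is an estimate derived from the table, not a measured figure.
freqs_mhz = [30, 60, 100]
avg_mw    = [2.62, 4.71, 7.51]

n = len(freqs_mhz)
mean_f = sum(freqs_mhz) / n
mean_p = sum(avg_mw) / n
# Least-squares slope and intercept through the three points.
k = sum((f - mean_f) * (p - mean_p) for f, p in zip(freqs_mhz, avg_mw)) / \
    sum((f - mean_f) ** 2 for f in freqs_mhz)
p_static = mean_p - k * mean_f
print(f"dynamic slope ~ {k * 1000:.0f} uW/MHz, static ~ {p_static:.2f} mW")  # ~70 uW/MHz, ~0.52 mW
```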

Table 6.   Comparison of physical properties with state-of-the-art designs.

Parameter | Eyeriss[28] | ENVISION[29] | Thinker[30] | This work | This work
Technology (nm) | 65 | 28 | 65 | 65 | 65
Core area (mm2) | 12.25 | 1.87 | 19.36 | 3.24 | 3.24
Bit precision (b) | 16 | 4/8/16 | 8/16 | 8 | 8
Num. of MACs | 168 | 512 | 1024 | 16 | 256
Core frequency (MHz) | 200 | 200 | 200 | 100 | 100
Performance (GOPS) | 67.6 | 76 | 368.4 | 3.2 | 51.2
Power (mW) | 278 | 44 | 290 | 7.51 (measured) | 55.4 (estimated)
Energy efficiency | 166.2 GOPS/W | 1.73 TOPS/W | 1.27 TOPS/W | 426 GOPS/W | 0.92 TOPS/W
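The energy-efficiency entries of Table 6 follow directly from throughput divided by power. A one-line check (Python sketch; the helper name is ours) recovers the reported values for the two "This work" columns, the 16-MAC measured chip and the 256-MAC estimated configuration.

```python
# Energy efficiency = throughput / power, recomputing the "This work" columns of Table 6.
def gops_per_watt(gops, power_mw):
    return gops / (power_mw / 1000.0)

print(gops_per_watt(3.2, 7.51))    # ~426 GOPS/W for the 16-MAC chip (measured power)
print(gops_per_watt(51.2, 55.4))   # ~924 GOPS/W, i.e. ~0.92 TOPS/W, for the 256-MAC estimate
```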
    History

Received: 08 October 2019. Revised: 16 December 2019. Online (uncorrected proof): 25 December 2019. Accepted manuscript: 26 December 2019. Published: 11 February 2020.

Citation: Zheng Wang, Libing Zhou, Wenting Xie, Weiguang Chen, Jinyuan Su, Wenxuan Chen, Anhua Du, Shanliao Li, Minglan Liang, Yuejin Lin, Wei Zhao, Yanze Wu, Tianfu Sun, Wenqi Fang, Zhibin Yu. Accelerating hybrid and compact neural networks targeting perception and control domains with coarse-grained dataflow reconfiguration. Journal of Semiconductors, 2020, 41(2): 022401. doi: 10.1088/1674-4926/41/2/022401