
Print ISSN: 0976-898X | Online ISSN: 0976-8998


Journal of Networking Technology
 

Improving the Flexibility of CNN Accelerators to Support Temporal Convolution Network (TCN)
Marco Carreras, Gianfranco Deriu, Paolo Meloni
Universita degli Studi di Cagliari, DIEE
Abstract: Computer vision drives a wide range of applications in image and video classification and segmentation. The increasing focus of the community on this topic has generated a broad set of approaches that use kernel shapes and convolution techniques different from the classic one, such as separable convolutions, deformable convolutions, or deconvolutions ([4, 5]), frequently used in semantic segmentation tasks ([23, 13]). While it is well known that FPGAs can be used to accelerate classic convolutional layers in CNNs, there is limited literature on FPGA-based accelerators supporting less regular and less common processing kernels ([20]). Starting from the experience previously acquired in developing NEURAghe, this work aims to improve the flexibility of CNN accelerators and to study new methodologies for improving efficiency on the previously mentioned use cases. We initially focus on layered approaches based on 1D convolutions which, as indicated by several recent research results, can be effectively used to classify and segment time series and sequences, as well as in tasks involving sequence modeling. In multiple scenarios, a convolutional approach applied along the time dimension, hereafter called a Temporal Convolution Network (TCN), can outperform classic strategies relying on recurrent networks in terms of accuracy and training time. We modified NEURAghe to support TCNs and validated the results on an ECG-classification benchmark, achieving up to 95% efficiency in terms of GOPS with respect to the accelerator's peak performance.
Keywords: Temporal Convolutional Neural Network, TCN, Hardware Accelerator, FPGA
DOI: https://doi.org/10.6025/jnt/2020/11/3/93-102
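To make the operation discussed in the abstract concrete, the following is a minimal sketch, in plain Python/NumPy, of a dilated causal 1D convolution, the core building block of a TCN. It is an illustrative assumption only: the function name, array shapes, and parameters are hypothetical, and it is not the NEURAghe accelerator implementation described in the paper.

# Minimal illustrative sketch (not the paper's NEURAghe implementation):
# a dilated causal 1D convolution applied along the time axis of a
# multi-channel signal, e.g. an ECG trace. Names and shapes are assumptions.
import numpy as np

def causal_conv1d(x, w, dilation=1):
    """Dilated causal 1D convolution.

    x: input of shape (C_in, T), channels x time steps.
    w: weights of shape (C_out, C_in, K), K filter taps per channel pair.
    Returns an array of shape (C_out, T); the output at time t depends only
    on inputs at times <= t, with the K taps spaced `dilation` steps apart.
    """
    c_out, c_in, k = w.shape
    _, t_len = x.shape
    pad = (k - 1) * dilation                # left padding preserves length and causality
    x_pad = np.pad(x, ((0, 0), (pad, 0)))
    y = np.zeros((c_out, t_len))
    for t in range(t_len):
        taps = x_pad[:, t : t + pad + 1 : dilation]            # (C_in, K) dilated window
        y[:, t] = np.tensordot(w, taps, axes=([1, 2], [0, 1]))
    return y

# Example: 2 input channels, 4 output channels, kernel size 3, dilation 2.
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 16))
w = rng.standard_normal((4, 2, 3))
print(causal_conv1d(x, w, dilation=2).shape)  # (4, 16); receptive field = (3 - 1) * 2 + 1 = 5

Stacking such layers with exponentially growing dilation factors enlarges the receptive field over long sequences with relatively few parameters, which is the property that allows TCNs to compete with recurrent networks on sequence tasks (see [2]).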
References:

[1] Deep image: Scaling up image recognition. CoRR abs/1501.02876 (2015), http://arxiv.org/abs/1501.02876, withdrawn.
[2] Bai, S., Kolter, J. Z., Koltun, V. (2018). An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv preprint arXiv:1803.01271.
[3] Cho, K., van Merriënboer, B., Bahdanau, D., Bengio, Y. (2014). On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259.
[4] Chollet, F. (2016). Xception: Deep learning with depthwise separable convolutions. CoRR abs/1610.02357, http://arxiv.org/abs/1610.02357
[5] Dai, J., Qi, H., Xiong, Y., Li, Y., Zhang, G., Hu, H., Wei, Y. (2017). Deformable convolutional networks. CoRR abs/1703.06211, http://arxiv.org/abs/1703.06211
[6] Dauphin, Y. N., Fan, A., Auli, M., Grangier, D. (2016). Language modeling with gated convolutional networks. CoRR abs/1612.08083, http://arxiv.org/abs/1612.08083
[7] Goodfellow, I., Bengio, Y., Courville, A. (2016). Deep learning. MIT press.
[8] Goodfellow, S., Goodwin, A., Eytan, D., Greer, R., Mazwi, M., Laussen, P. (2018). Towards understanding ecg rhythm classification using convolutional neural networks and attention mappings.
[9] Guan, Y., Yuan, Z., Sun, G., Cong, J. (2017). FPGA-based accelerator for long short-term memory recurrent neural networks. In: 2017 22nd Asia and South Pacific Design Automation Conference (ASP-DAC). p. 629-634. https://doi.org/10.1109/ASPDAC.2017.7858394
[10] Hannun, A. Y., Case, C., Casper, J., Catanzaro, B., Diamos, G., Elsen, E., Prenger, R., Satheesh, S., Sengupta, S., Coates, A., Ng, A. Y. (2014). Deep speech: Scaling up end-to-end speech recognition. CoRR abs/1412.5567, http://arxiv.org/abs/1412.5567
[11] He, K., Zhang, X., Ren, S., Sun, J. (2015). Deep residual learning for image recognition. CoRR abs/1512.03385, http://arxiv.org/abs/1512.03385
[12] Hochreiter, S., Schmidhuber, J. (1997). Long short-term memory. Neural computation 9 (8) 1735-1780.
[13] Jegou, S., Drozdzal, M., Vazquez, D., Romero, A., Bengio, Y. (2016). The one hundred layers tiramisu: Fully convolutional densenets for semantic segmentation. CoRR abs/1611.09326, http://arxiv.org/abs/1611.09326
[14] Kalchbrenner, N., Espeholt, L., Simonyan, K., Oord, A.v.d., Graves, A., Kavukcuoglu, K. (2016). Neural machine translation in linear time. arXiv preprint arXiv:1610.10099.
[15] Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., Fei-Fei, L. (2014). Largescale video classification with convolutional neural networks. In: Proceedings of the IEEE conference on Computer Vision and Pattern Recognition. p. 1725- 1732.
[16] Kim, T. S., Reiter, A. (2017). Interpretable 3d human action analysis with temporal convolutional networks. CoRR abs/1704.04516, http://arxiv.org/abs/1704.04516
[17] Kim, Y. (2014). Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882.
[18] Krizhevsky, A., Sutskever, I., Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In: Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 1. p. 1097-1105. NIPS'12, Curran Associates Inc., USA (2012), http://dl.acm.org/citation.cfm?id=2999134.2999257
[19] Li, D., Zhang, J., Zhang, Q., Wei, X. (2017). Classification of ECG signals based on 1D convolution neural network. In: 2017 IEEE 19th International Conference on e-Health Networking, Applications and Services (Healthcom). p. 1-6. https://doi.org/10.1109/HealthCom.2017.8210784
[20] Liu, B., Zou, D., Feng, L., Feng, S., Fu, P., Li, J. (2019). An FPGA-based CNN accelerator integrating depthwise separable convolution. Electronics 8 (3) 281.
[21] Meloni, P., Capotondi, A., Deriu, G., Brian, M., Conti, F., Rossi, D., Rao, L., Benini, L. (2017). NEURAghe: Exploiting CPU-FPGA synergies for efficient and flexible CNN inference acceleration on Zynq SoCs. CoRR abs/1712.00994, http://arxiv.org/abs/1712.00994
[22] Mittal, S. (2018). A survey of FPGA-based accelerators for convolutional neural networks. Neural Computing and Applications p. 1-31.
[23] Ronneberger, O., Fischer, P., Brox, T. (2015). U-net: Convolutional networks for biomedical image segmentation. CoRR abs/1505.04597, http://arxiv.org/abs/1505.04597
[24] Santos, C. D., Zadrozny, B. (2014). Learning character-level representations for part-of-speech tagging. In: Proceedings of the 31st International Conference on Machine Learning (ICML-14). p. 1818-1826.
[25] Taigman, Y., Yang, M., Ranzato, M., Wolf, L. (2014). Deepface: Closing the gap to human level performance in face verification. In: Proceedings of the IEEE conference on computer vision and pattern recognition. p. 1701-1708.
[26] Van Den Oord, A., Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., Kalchbrenner, N., Senior, A. W., Kavukcuoglu, K. (2016). Wavenet: A generative model for raw audio. SSW 125.
[27] Wang, C., Gong, L., Yu, Q., Li, X., Xie, Y., Zhou, X. (2016). Dlau: A scalable deep learning accelerator unit on fpga. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 36 (3) 513-517.
[28] Weston, J. E. (2016). Dialog-based language learning. In: Advances in Neural Information Processing Systems. p. 829-837.
[29] Yang, J. B., Nguyen, M. N., San, P. P., Li, X. L., Krishnaswamy, S. (2015). Deep convolutional neural networks on multichannel time series for human activity recognition. In: Proceedings of the 24th International Conference on Artificial Intelligence. p. 3995-4001. IJCAI’15, AAAI Press (2015), http://dl.acm.org/citation.cfm?id=2832747.2832806
[30] Zeng, M., Nguyen, L. T., Yu, B., Mengshoel, O. J., Zhu, J., Wu, P., Zhang, J. (2014). Convolutional neural networks for human activity recognition using mobile sensors. In: 6th International Conference on Mobile Computing, Applications and Services. p. 197-205 (11 2014). https://doi.org/10.4108/icst.mobicase.2014.257786

