Journal of Data Processing
 

Repair of Convolutional Neural Networks using Convex Optimization: Preliminary Experiments
Dario Guidotti, Francesco Leofante
University of Genoa, Genoa, Italy; RWTH Aachen University, Aachen, Germany; University of Sassari, Sassari, Italy
Abstract: Recent public calls for the development of explainable and verifiable Artificial Intelligence (AI) have led to a growing interest in the formal verification and repair of machine-learned models. Despite the impressive progress made by the learning community, models such as deep neural networks remain vulnerable to adversarial attacks, and their sheer size is a major obstacle to formal analysis and implementation. In this paper, we present our current efforts to tackle the repair of deep convolutional neural networks using ideas borrowed from Transfer Learning. Using results obtained on the popular MNIST and CIFAR10 datasets, we show that deep convolutional neural networks can be transformed into simpler models that preserve their accuracy, and we discuss how formal repair through convex programming techniques could benefit from this process.
Keywords: Transfer Learning, Network Repair, Convex Optimization
DOI: https://doi.org/10.6025/jdp/2020/10/2/62-70
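The simplification step the abstract alludes to borrows a standard idea from transfer learning: the convolutional layers of a trained network are kept as a fixed feature extractor, and the dense classification head is replaced by a much simpler model. The sketch below illustrates this with synthetic features standing in for penultimate-layer activations; the dimensions, class structure, and least-squares linear head are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-ins for penultimate-layer activations of a pretrained CNN
# (in the paper's setting these would come from MNIST/CIFAR10 models).
n, d, k = 600, 20, 3                  # samples, feature dim, classes
centers = rng.normal(scale=3.0, size=(k, d))
y = rng.integers(0, k, size=n)
features = centers[y] + rng.normal(size=(n, d))

# Replace the network head with a single linear map fit by least squares
# on one-hot targets: the "simpler model" whose accuracy we want to keep.
Y = np.eye(k)[y]                          # one-hot labels
X = np.hstack([features, np.ones((n, 1))])  # append a bias column
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

pred = np.argmax(X @ W, axis=1)
accuracy = np.mean(pred == y)
print(f"linear head accuracy: {accuracy:.2f}")
```

Because the head is now a single affine map, any later repair only has to reason about a linear model rather than the full network.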
Full Text: PDF (1.96 MB)
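Once the classifier head is a single linear layer, repair can be posed as a convex program: find the smallest change Δ to the weights such that a previously misclassified sample receives its correct class with some margin. A minimal hypothetical formulation, minimising the L1 norm of Δ as a linear program with SciPy, is sketched below; the weight matrix, sample, and margin are made-up illustrative values, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical final linear layer of an already-simplified network
# (bias folded into the last feature).
k, d = 2, 3                              # classes, feature dimension
W = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5]])

x_bad = np.array([1.0, 1.2, 0.0])        # feature vector the layer gets wrong
y_bad = 0                                # its correct label
margin = 0.1

# Decision variables: flattened Delta (k*d) followed by slacks t (k*d),
# with |Delta_ij| <= t_ij so that sum(t) is the L1 norm of Delta.
n = k * d
c = np.concatenate([np.zeros(n), np.ones(n)])

A_ub, b_ub = [], []
I = np.eye(n)
A_ub.append(np.hstack([I, -I]));  b_ub.append(np.zeros(n))   #  Delta - t <= 0
A_ub.append(np.hstack([-I, -I])); b_ub.append(np.zeros(n))   # -Delta - t <= 0

# Margin constraints: (W+Delta)[y]·x - (W+Delta)[j]·x >= margin for j != y.
for j in range(k):
    if j == y_bad:
        continue
    row = np.zeros(2 * n)
    row[y_bad * d:(y_bad + 1) * d] = -x_bad   # -(Delta_y · x)
    row[j * d:(j + 1) * d] = x_bad            # +(Delta_j · x)
    A_ub.append(row.reshape(1, -1))
    b_ub.append(np.array([(W[y_bad] - W[j]) @ x_bad - margin]))

res = linprog(c, A_ub=np.vstack(A_ub), b_ub=np.concatenate(b_ub),
              bounds=[(None, None)] * n + [(0, None)] * n)
Delta = res.x[:n].reshape(k, d)
W_rep = W + Delta

print("repaired prediction:", np.argmax(W_rep @ x_bad))
```

Keeping the objective a norm of Δ biases the solver toward the smallest weight perturbation that fixes the sample, which is why shrinking the network first (so repair only touches a linear layer) makes this kind of formal repair tractable.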
