References:

[1] Frankle, J., Carbin, M. (2018). The lottery ticket hypothesis: Finding sparse, trainable neural networks. arXiv preprint arXiv:1803.03635.
[2] Gilmer, J., Adams, R. P., Goodfellow, I., Andersen, D., Dahl, G. E. (2018). Motivating the rules of the game for adversarial example research. arXiv preprint arXiv:1807.06732.
[3] Goodfellow, I. J., Shlens, J., Szegedy, C. (2015). Explaining and harnessing adversarial examples. In: ICLR.
[4] Guidotti, D., Leofante, F., Castellini, C., Tacchella, A. (2019). Repairing learned controllers with convex optimization: A case study. In: CPAIOR. p. 364-373.
[5] Guidotti, D., Leofante, F., Tacchella, A., Castellini, C. (2019). Improving reliability of myocontrol using formal verification. IEEE Transactions on Neural Systems and Rehabilitation Engineering 27(4), 564-571.
[6] Katz, G., Huang, D. A., Ibeling, D., Julian, K., Lazarus, C., Lim, R., Shah, P., Thakoor, S., Wu, H., Zeljic, A., Dill, D. L., Kochenderfer, M. J., Barrett, C. W. (2019). The Marabou framework for verification and analysis of deep neural networks. In: CAV. p. 443-452.
[7] LeCun, Y., Bengio, Y., Hinton, G. E. (2015). Deep learning. Nature 521(7553), 436-444.
[8] Liu, W., Wang, Z., Liu, X., Zeng, N., Liu, Y., Alsaadi, F. E. (2017). A survey of deep neural network architectures and their applications. Neurocomputing 234, 11-26.
[9] Nielsen, M. A. (2015). Neural Networks and Deep Learning, vol. 25. Determination Press, San Francisco, CA, USA.
[10] Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., Lerer, A. (2017). Automatic differentiation in PyTorch.
[11] Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., Duchesnay, E. (2011). Scikit-learn: Machine learning in Python. Journal of Machine Learning Research 12, 2825-2830.
[12] Pulina, L., Tacchella, A. (2010). An abstraction-refinement approach to verification of artificial neural networks. In: CAV. p. 243-257.
[13] Rauber, J., Brendel, W., Bethge, M. (2017). Foolbox: A Python toolbox to benchmark the robustness of machine learning models. arXiv preprint arXiv:1707.04131, http://arxiv.org/abs/1707.04131
[14] Schwarz, M., Schulz, H., Behnke, S. (2015). RGB-D object recognition and pose estimation based on pre-trained convolutional neural network features. In: ICRA. p. 1329-1335.
[15] Simonyan, K., Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
[16] Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I. J., Fergus, R. (2014). Intriguing properties of neural networks. In: ICLR.
[17] Tang, Y. (2013). Deep learning using linear support vector machines. arXiv preprint arXiv:1306.0239.
[18] Torrey, L., Shavlik, J. (2010). Transfer learning. In: Handbook of Research on Machine Learning Applications and Trends: Algorithms, Methods, and Techniques, p. 242-264. IGI Global.
[19] Wong, E., Schmidt, F., Metzen, J. H., Kolter, J. Z. (2018). Scaling provable adversarial defenses. In: NeurIPS. p. 8400-8409.