Print ISSN: 0976-4143 | Online ISSN: 0976-4151

Journal of Information Security Research

Deep Learning Optimization with MNIST and AutoEncoder Data sets
Ajeet K. Jain, PVRD Prasad Rao, K. Venkatesh Sharma
Research Scholar, Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, AP, India (also Asst. Prof., CSE, KMIT, Hyderabad, India); Professor, CSE, KLEF, Vaddeswaram, AP, India; Professor, CSE, CVR C
Abstract: Optimization algorithms are used extensively in machine learning. In deep learning, optimization has developed from Stochastic Gradient Descent towards newer convex, non-convex, and derivative-free approaches, and deep learning models depend on their optimizer for both training speed and final performance. As networks grow deeper and datasets grow larger, good optimization becomes essential. In this paper, we therefore study the most widely used optimizer algorithms from a practical standpoint, experimenting with the MNIST and AutoEncoder datasets. We test them across a variety of applications to document their common features, differences, and suitability, and we also present new variants of optimizers. Finally, extensive analysis allows us to identify the better-performing optimizer.
Keywords: Deep Learning, Optimizers, ADAM, Yogi, RMSProp
DOI: https://doi.org/10.6025/jisr/2022/13/1/21-28
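
As an illustration of the kind of experiment the abstract describes (this sketch is not the authors' code), the following Python/PyTorch snippet trains a small fully connected autoencoder on MNIST under SGD, RMSProp, and Adam and reports the mean reconstruction loss per epoch. The architecture (784-64-784), learning rate, batch size, and epoch count are assumed values chosen only for demonstration; Yogi is not part of torch.optim and would need a third-party implementation plugged in at the marked point.

    # Minimal illustrative sketch (assumed setup, not the authors' code): compare
    # optimizers on an MNIST autoencoder by mean reconstruction loss per epoch.
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms

    def make_autoencoder():
        # Small fully connected autoencoder: 784 -> 64 -> 784 (assumed sizes).
        return nn.Sequential(
            nn.Flatten(),
            nn.Linear(784, 64), nn.ReLU(),
            nn.Linear(64, 784), nn.Sigmoid(),
        )

    def make_optimizer(name, params, lr=1e-3):
        # Only optimizers shipped with torch.optim are used here; Yogi would
        # require a third-party implementation plugged in the same way.
        if name == "sgd":
            return torch.optim.SGD(params, lr=lr, momentum=0.9)
        if name == "rmsprop":
            return torch.optim.RMSprop(params, lr=lr)
        if name == "adam":
            return torch.optim.Adam(params, lr=lr)
        raise ValueError(f"unknown optimizer: {name}")

    def train(optimizer_name, epochs=3):
        data = datasets.MNIST("data", train=True, download=True,
                              transform=transforms.ToTensor())
        loader = DataLoader(data, batch_size=128, shuffle=True)
        model = make_autoencoder()
        opt = make_optimizer(optimizer_name, model.parameters())
        loss_fn = nn.MSELoss()
        for epoch in range(epochs):
            total = 0.0
            for x, _ in loader:  # labels are ignored; the target is x itself
                opt.zero_grad()
                loss = loss_fn(model(x), x.flatten(1))
                loss.backward()
                opt.step()
                total += loss.item()
            print(f"{optimizer_name}  epoch {epoch + 1}  "
                  f"mean reconstruction loss {total / len(loader):.4f}")

    for name in ("sgd", "rmsprop", "adam"):
        train(name)

Running the three configurations side by side in this way is one straightforward means of documenting the speed and final-loss differences between optimizers that the paper analyzes.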