Preprint / Version 1

Machine Learning Applied to Image Classification


  • Zuilho Segundo, Federal Center for Technological Education of Rio de Janeiro (CEFET/RJ)



Keywords: Regularization, Machine Learning, Image Classification


Machine learning is a field of computer science with several applications in the modern world. One of its main applications is the use of neural networks in computer vision: recognizing faces in a photo, analyzing X-rays, or identifying an artwork. This paper explores the concepts of machine learning, supervised learning, and neural networks, applying them to the CIFAR-10 image-classification dataset with the goal of building a neural network with high accuracy. To avoid overfitting, we compare two regularization methods: L2 and dropout. We find that dropout regularization yields higher accuracy on our model than L2 regularization.
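To make the comparison concrete, the two regularization techniques named above can be sketched as follows. This is a minimal, pure-Python illustration (not the paper's actual model, which would be a Keras network trained on CIFAR-10): inverted dropout zeroes each activation with probability p during training, while L2 regularization adds a weight-squared penalty to the loss.

```python
import random

def dropout(activations, p, training=True, rng=random.Random(0)):
    """Inverted dropout: zero each unit with probability p during training,
    scaling survivors by 1/(1-p) so the expected activation is unchanged."""
    if not training or p == 0.0:
        return list(activations)
    keep = 1.0 - p
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

def l2_penalty(weights, lam):
    """L2 (weight decay) term added to the loss: lam * sum of squared weights."""
    return lam * sum(w * w for w in weights)

acts = [0.5, -1.2, 0.8, 2.0]
dropped = dropout(acts, p=0.5)               # each unit is either zeroed or scaled by 2
penalty = l2_penalty([0.1, -0.3], lam=0.01)  # 0.01 * (0.01 + 0.09) = 0.001
```

At evaluation time dropout is disabled (`training=False`), so the network sees unscaled activations; the L2 term, by contrast, only shapes the loss during optimization.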
