Applying adversarial networks to increase the data efficiency and reliability of Self-Driving Cars
By Aakash Kumar
DOI: https://doi.org/10.47611/jsrhs.v11i3.2949
Keywords: Artificial Intelligence, Self Driving, Computer Vision, Generative Adversarial Networks
Abstract
Convolutional Neural Networks (CNNs) are vulnerable to misclassifying images when small perturbations are present. With the increasing prevalence of CNNs in self-driving cars, it is vital to make these algorithms robust so that collisions do not occur because a situation goes unrecognized. In the Adversarial Self-Driving framework, a Generative Adversarial Network (GAN) generates realistic perturbations that cause a classifier CNN to misclassify an image; this perturbed data is then used to train the classifier further. The framework is first applied to an image-classification algorithm to improve accuracy on perturbed images, then used to train a self-driving car in simulation; a small-scale self-driving car is also built to drive around a track and classify signs. Because the Adversarial Self-Driving framework learns perturbations from the dataset itself, it removes the need to collect large amounts of additional training data. Experiments demonstrate that the framework identifies situations where CNNs are vulnerable to perturbations and generates new examples of those situations for the CNN to train on. The additional data it generates provides sufficient coverage for the CNN to generalize to its environment, making the framework a viable tool for increasing the resilience of CNNs to perturbations. In particular, applying the Adversarial Self-Driving framework to the real-world self-driving car increased accuracy by 18%, and the simulated self-driving model completed 30 minutes of driving with no collisions.
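The loop the abstract describes — generate perturbations that fool the classifier, then fold them back into training — can be sketched in miniature. This is an illustrative toy only: a logistic-regression model stands in for the classifier CNN, and an FGSM-style worst-case perturbation stands in for the GAN generator; all data, names, and parameters are hypothetical and not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class 2-D data standing in for sign images (illustrative only).
n = 200
X = np.vstack([rng.normal(-1.0, 0.7, size=(n, 2)),
               rng.normal(+1.0, 0.7, size=(n, 2))])
y = np.concatenate([np.zeros(n), np.ones(n)])

def train(X, y, lr=0.1, epochs=300):
    """Fit a logistic-regression stand-in for the classifier CNN."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y                      # dLoss/dlogit for logistic loss
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def perturb(X, y, w, b, eps=0.3):
    """FGSM-style perturbation: a simple stand-in for the GAN generator."""
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad_x = (p - y)[:, None] * w[None, :]   # dLoss/dx
    return eps * np.sign(grad_x)             # bounded by eps per pixel/feature

def accuracy(X, y, w, b):
    return float((((X @ w + b) > 0) == y).mean())

# 1) Train the classifier, 2) generate perturbed examples it gets wrong more often,
# 3) retrain on the original data augmented with the perturbed copies.
w, b = train(X, y)
delta = perturb(X, y, w, b)
clean_acc  = accuracy(X, y, w, b)
robust_acc = accuracy(X + delta, y, w, b)   # accuracy drops under perturbation
w2, b2 = train(np.vstack([X, X + delta]), np.concatenate([y, y]))
```

The same three-step structure (attack, augment, retrain) is what the framework performs with a learned generator instead of a fixed gradient-sign rule.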
Copyright (c) 2022 Aakash Kumar; Joe Kim
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Copyright holder(s) granted JSR a perpetual, non-exclusive license to distribute & display this article.