Comparison of Neural Network and Machine Learning Approaches in Prediction of Chronic Kidney Disease
DOI:
https://doi.org/10.47611/jsrhs.v10i3.1570

Keywords:
Artificial Intelligence, Machine Learning, Neural Network, Supervised Learning Algorithms, TensorFlow, Keras, Scikit-learn, Chronic Kidney Disease, Prediction, Deep Learning, Decision Tree, Random Forest

Abstract
The diagnosis of a disease to determine a specific condition is crucial in caring for patients and furthering medical research. A timely and accurate diagnosis has important implications for both patients and healthcare providers. An earlier diagnosis gives doctors more treatment options to consider, greater flexibility in tailoring their decisions, and ultimately a better chance of improving the patient’s health. It also gives patients greater control over their health and their decisions, allowing them to plan ahead. As computer science and technology continue to advance, they can play a major role in supporting healthcare providers. In particular, the emergence of artificial intelligence and machine learning can help address the challenge of making timely and accurate diagnoses. The goal of this research work is to design a system that uses machine learning and neural network techniques to diagnose chronic kidney disease (CKD) with more than 90% accuracy on a clinical data set, and to compare the performance of the neural network against supervised machine learning approaches. Based on the results, all of the algorithms predicted CKD with more than 90% accuracy. The neural network provided the best performance (accuracy = 100%), compared with the supervised Random Forest algorithm (accuracy = 99%) and the supervised Decision Tree algorithm (accuracy = 97%).
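For readers who want a concrete picture of the comparison described above, the sketch below shows one way such a pipeline could be assembled: scikit-learn Decision Tree and Random Forest classifiers alongside a small Keras/TensorFlow feed-forward network, each scored by held-out accuracy. This is a minimal illustration, not the authors' code; the file name chronic_kidney_disease.csv, the column names, the network architecture, and the 80/20 split are assumptions made for the example.

```python
# Minimal sketch of the comparative setup: two supervised scikit-learn models
# versus a small Keras neural network, evaluated on held-out accuracy.
# File name and column names below are assumptions for illustration only.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from tensorflow import keras

data = pd.read_csv("chronic_kidney_disease.csv")          # assumed file name
y = (data["class"] == "ckd").astype(int)                  # 1 = CKD, 0 = not CKD (assumed label column)
X = pd.get_dummies(data.drop(columns=["class"]))          # one-hot encode categorical features
X = X.fillna(X.mean()).astype("float32")                  # simple mean imputation for missing values

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)                 # assumed 80/20 split

# Supervised machine learning models
for name, model in [("Decision Tree", DecisionTreeClassifier(random_state=42)),
                    ("Random Forest", RandomForestClassifier(n_estimators=100, random_state=42))]:
    model.fit(X_train, y_train)
    print(name, "accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Small feed-forward neural network built with Keras/TensorFlow (architecture assumed)
nn = keras.Sequential([
    keras.Input(shape=(X_train.shape[1],)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
nn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
nn.fit(X_train, y_train, epochs=50, batch_size=16, verbose=0)
_, nn_accuracy = nn.evaluate(X_test, y_test, verbose=0)
print("Neural network accuracy:", nn_accuracy)
```

Under these assumptions the script prints one held-out accuracy per model, mirroring the comparison reported in the abstract.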
Copyright (c) 2021 Shreya Nag; Nimitha Jammula
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Copyright holder(s) granted JSR a perpetual, non-exclusive license to distribute & display this article.