Investigating Students' Attitudes & Trust in AI During COVID-19
DOI: https://doi.org/10.47611/jsrhs.v12i3.5015
Keywords: artificial intelligence, COVID-19, attitudes, university students, trust
Abstract
Artificial Intelligence (AI) has been a part of society since its formal foundation in 1956 and can perform cognitive tasks at a level similar to or greater than that of humans. In our modern era of intelligent machines and advanced processing capabilities, AI is being pushed into society at a rapid rate. With advancing AI such as ChatGPT and self-driving vehicles, it is important for society to understand the implications of trusting this technology. Prior studies have discussed AI being used in the medical field to combat mental health issues and to identify attitudes. However, there is a gap in research on adults' levels of trust towards AI and how those feelings evolved over the course of COVID-19. This research paper aims to fill that gap by exploring how the trust of undergraduate and graduate college/university students evolved over the COVID-19 pandemic. To address general attitudes and levels of trust towards AI, a mixed-method quantitative and qualitative study was conducted using a survey. The survey measured general attitudes using the General Attitudes towards Artificial Intelligence Scale (GAAIS) and used open-ended questions to measure levels of trust in relation to the pandemic. It was concluded that while most participants demonstrated a positive attitude towards AI (70.4%), most also had low levels of trust, reflected in their fears and concerns about future AI implementation. A high percentage of positivity towards AI coupled with low levels of trust indicates a complex attitude towards artificial intelligence.
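The GAAIS cited above (Schepman & Rodway, 2020) comprises a positive subscale and a reverse-keyed negative subscale, each rated on a Likert scale. The abstract does not state exactly how the 70.4% positive figure was computed, so the sketch below is a hypothetical illustration of one common way to score such a scale and tally the share of respondents whose overall mean falls above the scale midpoint; the item counts, midpoint, and classification rule are assumptions, not the paper's method.

```python
# Hypothetical GAAIS-style scoring sketch (not the paper's exact procedure).
# Positive items are scored as-is; negative items are reverse-keyed so that
# higher scores always mean a more favourable attitude towards AI.

def subscale_mean(ratings):
    """Mean of a list of 1-5 Likert ratings."""
    return sum(ratings) / len(ratings)

def reverse_score(ratings, scale_max=5, scale_min=1):
    """Reverse-key negative-subscale items (1 -> 5, 2 -> 4, ...)."""
    return [scale_max + scale_min - r for r in ratings]

def is_positive(pos_items, neg_items, midpoint=3.0):
    """Classify a respondent as 'positive' if the mean of their positive
    items plus reverse-keyed negative items exceeds the scale midpoint."""
    combined = pos_items + reverse_score(neg_items)
    return subscale_mean(combined) > midpoint

# Example: tally percent-positive across two illustrative respondents.
respondents = [
    ([4, 5, 4], [2, 1]),  # favourable ratings, low negative-item agreement
    ([2, 2, 3], [4, 5]),  # unfavourable ratings, high negative-item agreement
]
positive = sum(is_positive(p, n) for p, n in respondents)
print(f"{100 * positive / len(respondents):.1f}% positive")  # prints "50.0% positive"
```

Classifying by the overall mean against the midpoint is just one convention; studies may instead report the positive and negative subscale means separately.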
References or Bibliography
Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., Bonnefon, J.-F., & Rahwan, I. (2018). The moral machine experiment. Nature, 563, 59–64. https://doi.org/10.1038/s41586-018-0637-6
Anderson, J., Rainie, L., & Luchsinger, A. (2018, December 10). Artificial intelligence and the future of humans. Pew Research Center. https://www.elon.edu/docs/e-web/imagining/surveys/2018_survey/AI_and_the_Future_of_Humans_12_10_18.pdf
Brauner, P., Hick, A., Philipsen, R., & Ziefle, M. (2023). What does the public think about artificial intelligence?—A criticality map to understand bias in the public perception of AI. Frontiers in Computer Science, 5. https://doi.org/10.3389/fcomp.2023.1113903
Brynjolfsson, E., and Mitchell, T. (2017). What can machine learning do? Workforce implications. Science 358, 1530–1534. doi: 10.1126/science.aap8062
Bochniarz, K., Czerwiński, S. K., Sawicki, A., & Atroszko, P. (2022). Attitudes to AI among high school students: Understanding distrust towards humans will not help us understand distrust towards AI. Personality and Individual Differences, 185, 111299. https://doi.org/10.1016/j.paid.2021.111299
Boden, M. A. (2016). AI: Its Nature and Future. Retrieved from https://books.google.com/books/about/AI.html?id=yDQTDAAAQBAJ
Carter, W. A., & Crumpler, W. (2022, November). Smart Money on Chinese Advances in AI. Retrieved from https://www.csis.org/analysis/smart-money-chinese-advances-ai
Flowers, J. C. (2019). Strong and Weak AI: Deweyan Considerations. National Conference on Artificial Intelligence. Retrieved from http://ceur-ws.org/Vol-2287/paper34.pdf
Gessl, A. S., Schlögl, S., & Mevenkamp, N. (2019). On the perceptions and acceptance of artificially intelligent robotics and the psychology of the future elderly. Behaviour & Information Technology. https://doi.org/10.1080/0144929X.2019.1566499
Grace, K., Salvatier, J., Dafoe, A., Zhang, B., and Evans, O. (2018). When will AI exceed human performance? Evidence from AI experts. J. Artif. Intell. Res. 62, 729–754. doi: 10.1613/jair.1.11222
Gunkel, D. J. (2012). The Machine Question: Critical Perspectives on AI, Robots, and Ethics. Cambridge, MA: MIT Press.
Hick, A., and Ziefle, M. (2022). “A qualitative approach to the public perception of AI,” in IJCI Conference Proceedings, eds D. C. Wyld et al., 01–17.
Ikkatai, Y., Hartwig, T., Takanashi, N., & Yokoyama, H. M. (2022). Segmentation of ethics, legal, and social issues (ELSI) related to AI in Japan, the United States, and Germany. AI and Ethics. https://doi.org/10.1007/s43681-022-00207-y
Kok, J. N., Boers, E. J., Kosters, W. A., Van der Putten, P., & Poel, M. (2009). Artificial intelligence: definition, trends, techniques, and cases. Artificial intelligence, 1, 270-299.
Lecun, Y., Bengio, Y., and Hinton, G. (2015). Deep Learning. Nature 521, 436–444. doi: 10.1038/nature14539
Lewis, P., & Marsh, S. (2021). What is it like to trust a rock? A functionalist perspective on trust and trustworthiness in artificial intelligence. Cognitive Systems Research, 72, 33–49. https://doi.org/10.1016/j.cogsys.2021.11.001
Liehner, G. L., Brauner, P., Schaar, A. K., & Ziefle, M. (2021). Delegation of moral tasks to automated agents: The impact of risk and context on trusting a machine to perform a task. IEEE Transactions on Technology and Society, 3, 46–57. https://doi.org/10.1109/TTS.2021.3118355
Lindenberg, G. (2020). Ludzkość poprawiona. Jak najbliższe lata zmienią świat, w którym żyjemy [Humanity improved: How the coming years will change the world we live in]. Kraków: Wydawnictwo Otwarte. https://books.google.com/books?hl=en&lr=&id=xq9yDwAAQBAJ&oi=fnd&pg=PT3&ots=pmuEAe3P7a&sig=ukMFJnggKLqP1GZMFhG_wh5UQdg#v=onepage&q&f=false
Makridakis, S. (2017). The forthcoming Artificial Intelligence (AI) revolution: Its impact on society and firms. Futures, 90, 46–60. https://doi.org/10.1016/j.futures.2017.03.006
McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (2006). A proposal for the Dartmouth summer research project on artificial intelligence (August 31, 1955). AI Magazine, 27(4), 12. https://doi.org/10.1609/aimag.v27i4.1904
Naderifar, M., Goli, H., & Ghaljaie, F. (2017). Snowball sampling: A purposeful method of sampling in qualitative research. Strides in Development of Medical Education, 14(3). https://doi.org/10.5812/sdme.67670
Olari, V., and Romeike, R. (2021). “Addressing AI and data literacy in teacher education: a review of existing educational frameworks,” in The 16th Workshop in Primary and Secondary Computing Education WiPSCE '21 (New York, NY: Association for Computing Machinery).
Olhede, S. C., & Wolfe, P. J. (2019). Artificial intelligence and the future of work: Will our jobs be taken by machines? Significance, 16(1), 6–7. https://doi.org/10.1111/j.1740-9713.2019.01224.x
Rainie, L., Anderson, J., Vogels, E. A., & Atske, S. (2021, June 21). Experts Doubt Ethical AI Design Will Be Broadly Adopted as the Norm Within the Next Decade. Pew Research Center: Internet, Science & Tech. Retrieved from https://www.pewresearch.org
Rossi, F. (2018a). Building Trust in Artificial Intelligence. Journal of International Affairs, 72(1), 127. Retrieved from https://www.questia.com/library/journal/1G1-583489792/building-trust-in-artificial-intelligence
Schepman, A., & Rodway, P. (2020b). Initial validation of the general attitudes towards Artificial Intelligence Scale. Computers in Human Behavior Reports, 1, 100014. https://doi.org/10.1016/j.chbr.2020.100014
Schepman, A., & Rodway, P. (2022). The General Attitudes towards Artificial Intelligence Scale (GAAIS): Confirmatory Validation and Associations with Personality, Corporate Distrust, and General Trust. International Journal of Human-computer Interaction, 1–18. https://doi.org/10.1080/10447318.2022.2085400
Shah, H., Shah, S., Tanwar, S., Gupta, R., & Kumar, N. (2021). Fusion of AI techniques to tackle COVID-19 pandemic: models, incidence rates, and future trends. Multimedia Systems, 28(4), 1189–1222. https://doi.org/10.1007/s00530-021-00818-1
Shinde, R. C. S., & Thorat, B. (2022). Applications of AI in Covid Detection, Prediction and Vaccine Development. International Journal for Research in Applied Science and Engineering Technology, 10(4), 2634–2637. https://doi.org/10.22214/ijraset.2022.41494
Siau, K., & Wang, W. (2018). Building Trust in Artificial Intelligence, Machine Learning, and Robotics. Cutter Business Technology Journal, 31(2), 47–53.
Silverman, D. (2020). Collecting qualitative data during a pandemic. Communication in Medicine, 17(1). https://doi.org/10.1558/cam.19256
Sindermann, C., Yang, H., Elhai, J. D., Yang, S., Quan, L., Li, M., & Montag, C. (2021). Acceptance and Fear of Artificial Intelligence: associations with personality in a German and a Chinese sample. Discover Psychology, 2(1). https://doi.org/10.1007/s44202-022-00020-y
SITNFlash. (2020, April 23). The History of Artificial Intelligence - Science in the News. Retrieved from https://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/
Smith, A., & Anderson, J. (2022, September 15). AI, robotics, and the future of jobs. Pew Research Center: Internet, Science & Tech. Retrieved from https://www.pewresearch.org
Thormundsson, B. (2022). Artificial intelligence (AI) worldwide [Topic overview]. Statista. https://www.statista.com/topics/3104/artificial-intelligence-ai-worldwide/#topicOverview
Tate, D. (2021). Trust, Trustworthiness, and Assurance of AI and Autonomy. ResearchGate. Retrieved from https://www.researchgate.net/publication/355479055_Trust_Trustworthiness_and_Assurance_of_AI_and_Autonomy
Vaishya, R., Javaid, M., Khan, I. H., and Haleem, A. (2020). Artificial intelligence (AI) applications for COVID-19 pandemic. Diabetes Metabolic Syndrome Clin. Res. Rev. 14, 337–339. doi: 10.1016/j.dsx.2020.04.012
West, D. M. (2018). The Future of Work: Robots, AI, and Automation. Washington, DC: Brookings Institution Press.
Copyright (c) 2023 Aasiya Arif
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Copyright holder(s) granted JSR a perpetual, non-exclusive license to distribute & display this article.