How the Government Can Mitigate the Emerging Risks of Artificial Intelligence

Authors

  • Atharv Joshi St. Francis High School
  • Michael Chechelnitsky St. Francis High School

DOI:

https://doi.org/10.47611/jsrhs.v13i1.6283

Keywords:

AI, Artificial Intelligence, Risks from AI, Artificial General Intelligence, Government role

Abstract

The astonishing pace of development in Artificial Intelligence (AI) has been welcomed for the benefits it brings to society and businesses. However, the malicious use of AI has raised alarms in academia and government. The unbridled growth of AI technology has been portrayed as a risk to humankind [7]. The initial impact of AI misuse is already visible in the form of deepfakes and sophisticated phishing attacks, and because the pace of AI development is unprecedented, the issues that will emerge next are unknown. In this paper, I assess how these “known unknown” risks of artificial intelligence should be handled and propose constructive ways for the government to get involved so that the risks of AI can be mitigated. Drawing lessons from emergency preparedness, I establish the need for a federal-level organization dedicated to mitigating the growing risks from AI. This organization would bring other government agencies, businesses, and citizens together to take steps against those risks. Countering deepfakes, facilitating the learning of AI models, and addressing biases in AI models are three important goals for the proposed government agency.

Author Biographies

Michael Chechelnitsky, St. Francis High School

Math teacher

References or Bibliography

Center for AI Safety. (n.d.). Statement on AI risk. https://www.safe.ai/statement-on-ai-risk

Gates, B. (2023, July 11). The risks of AI are real but manageable. GatesNotes. https://www.gatesnotes.com/The-risks-of-AI-are-real-but-manageable

The Washington Post. (2023, May 26). Opinion: Congress wants to regulate AI. Here’s where to start. https://www.washingtonpost.com/opinions/2023/05/26/ai-regulation-congress-risk/

Consumer Financial Protection Bureau. (2022, May 26). CFPB acts to protect the public from black-box credit models using complex algorithms. https://www.consumerfinance.gov/about-us/newsroom/cfpb-acts-to-protect-the-public-from-black-box-credit-models-using-complex-algorithms/

California Governor’s Office of Emergency Services. (n.d.). California’s emergency services leader. https://www.caloes.ca.gov/

Brown, N., & Sandholm, T. (2019). Superhuman AI for multiplayer poker. Science, 365(6456), 885–890. https://doi.org/10.1126/science.aay2400

Cellan-Jones, R. (2014, December 2). Stephen Hawking warns artificial intelligence could end mankind. BBC News. https://www.bbc.com/news/technology-30290540

Brooks, R. (2017, October 6). The seven deadly sins of AI predictions. MIT Technology Review. https://www.technologyreview.com/2017/10/06/241837/the-seven-deadly-sins-of-ai-predictions/

Gartner. (n.d.). Definition of artificial general intelligence (AGI). Gartner Information Technology Glossary. https://www.gartner.com/en/information-technology/glossary/artificial-general-intelligence-agi

Gartner. (2023, May 31). The future of AI: Reshaping society (Report No. G00792868).

Ramesh, R., & Saluja, S. (2023). How artificial intelligence can aid screening and detecting lung cancer in lung cancer patients. Journal of Student Research, 12(2). https://doi.org/10.47611/jsrhs.v12i2.4238

Chesney, R., & Citron, D. K. (2019). Deep fakes: A looming challenge for privacy, democracy, and national security. California Law Review, 107, 1753. Available at SSRN: https://ssrn.com/abstract=3213954 or https://doi.org/10.2139/ssrn.3213954

Schmelzer, R. (2019, July 24). Understanding Explainable AI. Forbes. https://www.forbes.com/sites/cognitiveworld/2019/07/23/understanding-explainable-ai/?sh=4937bd697c9e

United States Senate Committee on the Judiciary. (2023, May 16). Oversight of A.I.: Rules for artificial intelligence [Hearing]. https://www.judiciary.senate.gov/committee-activity/hearings/oversight-of-ai-rules-for-artificial-intelligence

Manyika, J., Silberg, J., & Presten, B. (2019, October 25). What do we do about the biases in AI? Harvard Business Review. https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai

U.S. Department of Homeland Security. (n.d.). Increasing threat of deepfake identities. https://www.dhs.gov/sites/default/files/publications/increasing_threats_of_deepfake_identities_0.pdf

Silverman, C. (2018, April 17). How To Spot A DeepFake Like The Barack Obama-Jordan Peele Video. BuzzFeed. https://www.buzzfeed.com/craigsilverman/obama-jordan-peele-deepfake-video-debunk-buzzfeed

Nestor Maslej, Loredana Fattorini, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Helen Ngo, Juan Carlos Niebles, Vanessa Parli, Yoav Shoham, Russell Wald, Jack Clark, and Raymond Perrault, “The AI Index 2023 Annual Report,” AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2023

Published

02-28-2024

How to Cite

Joshi, A., & Chechelnitsky, M. (2024). How the Government Can Mitigate the Emerging Risks of Artificial Intelligence. Journal of Student Research, 13(1). https://doi.org/10.47611/jsrhs.v13i1.6283

Issue

Vol. 13 No. 1 (2024)

Section

HS Review Articles