As the field of Artificial Intelligence has advanced in recent years, issues surrounding its use in a wide range of domains have come to the forefront of debate. Ethical concerns range from the use of Artificial Intelligence in warfare to self-driving cars and everything in between. This guide is intended as a jumping-off point for research into these issues.
(Video by TED)
Artificial Intelligence tools consume large amounts of energy, particularly during training. The data centers that house these tools can require immense amounts of cooling, which in turn can consume vast quantities of water. The rise in popularity of AI tools has also generated more e-waste and other byproducts of their production. Experts are concerned about these environmental impacts as well as the effect AI tools could have on the consumption of rare earth minerals. To help offset these impacts, scholars are also attempting to use Artificial Intelligence to diagnose various environmental problems.
Sources: AI has an environmental problem. Here’s what the world can do about that. UNEP. (2024, September 21). https://www.unep.org/news-and-stories/story/ai-has-environmental-problem-heres-what-world-can-do-about
Bishop, B. A., & Robbins, L. J. (2024). Using machine learning to identify indicators of rare earth element enrichment in sedimentary strata with applications for metal prospectivity. Journal of Geochemical Exploration, 258, 107388. https://doi.org/10.1016/j.gexplo.2024.107388
Chen, J., Huang, S., BalaMurugan, S., & Tamizharasi, G. S. (2021). Artificial intelligence based E-waste management for environmental planning. Environmental Impact Assessment Review, 87, 106498. https://doi.org/10.1016/j.eiar.2020.106498
Ren, S., & Wierman, A. (2024, July 15). The uneven distribution of AI’s environmental impacts. Harvard Business Review. https://hbr.org/2024/07/the-uneven-distribution-of-ais-environmental-impacts
Artificial Intelligence tools can amplify existing biases found in the data they are trained on, and the algorithms themselves can be biased. The question of why certain material is presented over other material can be hard to answer because the algorithms are often proprietary to each individual AI company. Documented cases of AI bias include facial recognition, hiring tools, and body language analyzers.
Sources: Lytton, C. (2024, February 16). AI hiring tools may be filtering out the best job applicants. BBC News. https://www.bbc.com/worklife/article/20240214-ai-recruiting-hiring-software-bias-discrimination
Nazer, L. H., Zatarah, R., Waldrip, S., Ke, J. X., Moukheiber, M., Khanna, A. K., Hicklen, R. S., Moukheiber, L., Moukheiber, D., Ma, H., & Mathur, P. (2023). Bias in artificial intelligence algorithms and recommendations for mitigation. PLOS Digital Health, 2(6). https://doi.org/10.1371/journal.pdig.0000278
Ntoutsi, E., Fafalios, P., Gadiraju, U., Iosifidis, V., Nejdl, W., Vidal, M., Ruggieri, S., Turini, F., Papadopoulos, S., Krasanakis, E., Kompatsiaris, I., Kinder‐Kurlanda, K., Wagner, C., Karimi, F., Fernandez, M., Alani, H., Berendt, B., Kruegel, T., Heinze, C., … Staab, S. (2020). Bias in data‐driven Artificial Intelligence Systems—an introductory survey. WIREs Data Mining and Knowledge Discovery, 10(3). https://doi.org/10.1002/widm.1356
Schwartz, R., Vassilev, A., Greene, K., Perine, L., Burt, A., & Hall, P. (2022). Towards a Standard for Identifying and Managing Bias in Artificial Intelligence. https://doi.org/10.6028/nist.sp.1270
Experts are concerned that Artificial Intelligence tools will widen the "digital divide." AI infrastructure requires substantial energy and specialized staff to run, and many scholars around the world already lack adequate access to knowledge. Critics argue that, if this issue is not addressed as the technology matures, AI tools will lead to further wealth disparity, potential job displacement, and the exacerbation of other existing inequalities.
Sources: Rosen, E. L., Gihleb, R., & Klinova, K. (2024, July 9). AI’s impact on income inequality in the US. Brookings. https://www.brookings.edu/articles/ais-impact-on-income-inequality-in-the-us/
Luttrell, R., Wallace, A., McCollough, C., & Lee, J. (2020). The digital divide: Addressing Artificial Intelligence in communication education. Journalism & Mass Communication Educator, 75(4), 470–482. https://doi.org/10.1177/1077695820925286
Perrigo, B. (2023, January 18). OpenAI used Kenyan workers on less than $2 per hour: Exclusive. Time. https://time.com/6247678/openai-chatgpt-kenya-workers/
Skare, M., Gavurova, B., & Blažević Burić, S. (2024). Artificial Intelligence and wealth inequality: A comprehensive empirical exploration of socioeconomic implications. Technology in Society, 79, 102719. https://doi.org/10.1016/j.techsoc.2024.102719
Academics have raised several privacy-related issues regarding large-scale data collection, the lack of transparency around how that data is used, and how it is shared with third parties. Privacy laws have yet to keep up with the widespread availability of AI tools. Artificial Intelligence can use a person's likeness to create convincing deepfakes, which can fuel misinformation. Many institutions are starting to restrict what types of information employees can enter into systems such as ChatGPT. Experts are also concerned that Artificial Intelligence tools will weaken intellectual property protections.
Sources: Burgess, M. (2023, April 4). ChatGPT has a big privacy problem. Wired. https://www.wired.com/story/italy-ban-chatgpt-privacy-gdpr/
Kop, M. (2020). AI & Intellectual Property: Towards an Articulated Public Domain. Texas Intellectual Property Law Journal, 28(3), 297–342. https://research.ebsco.com/linkprocessor/plink?id=a618826d-f648-3787-ae7b-4288c8464d92
Murdoch, B. (2021). Privacy and artificial intelligence: Challenges for protecting health information in a new era. BMC Medical Ethics, 22(1). https://doi.org/10.1186/s12910-021-00687-3
Zhang, Y., Wu, M., Tian, G. Y., Zhang, G., & Lu, J. (2021). Ethics and privacy of Artificial Intelligence: Understandings from Bibliometrics. Knowledge-Based Systems, 222, 106994. https://doi.org/10.1016/j.knosys.2021.106994
Artificial Intelligence has been used in warfare in recent conflicts. AI can generate target lists, fly semi-autonomous drones, help make battlefield decisions, and assist in the analysis of satellite imagery. Artificial Intelligence is also taking on a more important role in electronic warfare. Experts are exploring the ethics of its use and what constitutes acceptable use in situations involving potential civilian casualties. Looking further ahead, ethicists are concerned that these systems may lead to the development of fully autonomous weapon systems.
Sources: Bendett, S. (2023, July 20). Roles and implications of AI in the Russian-Ukrainian conflict. Center for a New American Security. https://www.cnas.org/publications/commentary/roles-and-implications-of-ai-in-the-russian-ukrainian-conflict
Brumfiel, G. (2023, December 14). Israel is using an AI system to find targets in Gaza. Experts say it’s just the start. NPR. https://www.npr.org/2023/12/14/1218643254/israel-is-using-an-ai-system-to-find-targets-in-gaza-experts-say-its-just-the-st
Hunder, M. (2024, October 31). Ukraine rolls out dozens of AI systems to help its drones hit targets. Reuters. https://www.reuters.com/world/europe/ukraine-rolls-out-dozens-ai-systems-help-its-drones-hit-targets-2024-10-31/
Sharma, P., Sarma, K. K., & Mastorakis, N. E. (2020). Artificial Intelligence aided electronic warfare systems: Recent trends and evolving applications. IEEE Access, 8, 224761–224780. https://doi.org/10.1109/ACCESS.2020.3044453