Artificial Intelligence brings immense potential for innovation and progress, but it also raises significant ethical concerns that must be carefully considered. AI ethics is the study and application of principles that guide the responsible development and use of artificial intelligence. It addresses the potential social, legal, and moral implications of AI technologies to prevent harm and promote equitable outcomes.
AI ethics is a crucial component of AI literacy. As AI systems become more integrated into our daily lives and decision-making processes, addressing these ethical challenges is essential to ensuring responsible development and use. Below are just a few of the key ethical considerations in AI.
Image sourced from Canva.com
A significant limitation of AI is the bias that can be embedded in the products it generates. Trained on immense amounts of data and text available on the internet, large language models simply predict the most likely sequence of words in response to a given prompt, and will therefore reflect and perpetuate the biases present in that internet training data. An additional source of bias is that some generative AI tools use reinforcement learning from human feedback (RLHF), and the human testers who provide this feedback are themselves non-neutral. Accordingly, generative AI like ChatGPT has been documented producing output that is socio-politically biased, occasionally even containing sexist, racist, or otherwise offensive information.
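The point about prediction mirroring training data can be illustrated with a toy sketch. This is not how a real large language model works internally (actual systems use neural networks, not frequency tables), and the tiny "corpus" below is invented for illustration; but it shows the core idea that a model which only predicts the most frequent continuation will reproduce whatever skew its training text contains.

```python
from collections import Counter, defaultdict

# A toy "training corpus" standing in for internet text. Any skew in it
# (here, which pronoun follows which job title) gets baked into the model.
corpus = ("the nurse said she was tired . the nurse said she was busy . "
          "the engineer said he was late .").split()

# Count which word follows each two-word context (a simple trigram model).
follows = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    follows[(a, b)][c] += 1

def predict_next(context):
    """Return the continuation seen most often in training."""
    return follows[context].most_common(1)[0][0]

# The model simply echoes the skew in its data:
print(predict_next(("nurse", "said")))     # -> she
print(predict_next(("engineer", "said")))  # -> he
```

Because "nurse" is followed by "she" in this corpus and "engineer" by "he", the predictions repeat that association: the model has no notion of fairness, only of frequency.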
The Social Dilemma – Bonus Clip: The Discrimination Dilemma by Exposure Labs
How I'm fighting bias in algorithms | Joy Buolamwini
Attribution: Georgetown University Library, University of Texas Libraries; Open.AI.
Currently, copyright protection is not granted to works created by artificial intelligence. The U.S. Copyright Office has issued guidance that explains the requirement for human authorship to be granted copyright protection and provides information to creators working in tandem with AI tools on how to effectively and correctly register their works.
US Copyright Office and Artificial Intelligence – "The Copyright Office has launched an initiative to examine the copyright law and policy issues raised by artificial intelligence (AI) technology, including the scope of copyright in works generated using AI tools and the use of copyrighted materials in AI training."
Copyright Registration Guidance – Guidance for registering Works Containing Material Generated by Artificial Intelligence by the U.S. Copyright Office.
ChatGPT and Generative AI Are Hits! Can Copyright Law Stop Them? by Bloomberg Law
Argument A. No, it's copyright violation
This will affect not only OpenAI, but Google, Microsoft, and Meta, since they all use similar methods to train their models.
Argument B. Yes, it's fair use
“Done right, copyright law is supposed to encourage new creativity. Stretching it to outlaw tools like AI image generators—or to effectively put them in the exclusive hands of powerful economic actors who already use that economic muscle to squeeze creators—would have the opposite effect.”
Other countries
The Israel Ministry of Justice has issued an opinion: the use of copyrighted materials in the machine learning context is permitted under existing Israeli copyright law.
Several corporations have offered to pay the legal bills of users of their tools
Adobe, Google, Microsoft, and Anthropic (for Claude) have offered to pay any legal bills from lawsuits against users of their tools.
Can you copyright something you made with AI?
OpenAI says:
"... you own the output you create with ChatGPT, including the right to reprint, sell, and merchandise – regardless of whether output was generated through a free or paid plan."
The U.S. Copyright Office says:
The term “author” ... excludes non-humans.
But, if you select or arrange AI-generated material in a sufficiently creative way... In these cases, copyright will only protect the human-authored aspects of the work. For an example, see this story about a comic book: the U.S. Copyright Office determined that the selection and arrangement of the images is copyrightable, but not the images themselves (made with generative AI).
Different rulings may apply in other countries; see:
Chinese Court’s Landmark Ruling: AI Images Can be Copyrighted
Attribution:
Generative AI tools require a significant amount of computational processing power to function, which is provided by high-performance servers housed in physical data centers located across the country. These centers require massive amounts of electricity to keep tools operational, as well as water to keep the servers cool. Many AI companies have not revealed just how much electricity and water are used by their tools, or how much will be needed in the future. As such, there are significant unanswered questions about the environmental costs of keeping generative AI tools functional.
How Can Scientists Use Artificial Intelligence (AI) to Improve Predictions of River Water Quality?
The Staggering Ecological Impacts of Computation and the Cloud
AI for Earth: How NASA’s Artificial Intelligence and Open Science Efforts Combat Climate Change
Image sourced on Canva.com
Attribution: Olympic College; University of Texas Libraries
"As we move into a detailed analysis of AI’s role in modern society, the focus shifts to how this technology, while heralded as a tool of efficiency and progress, actually reproduces and exacerbates inequalities. This is evident in the labor practices within the tech industry, where AI development often relies on underpaid and undervalued workers from marginalized communities, perpetuating a cycle of exploitation and exclusion."
AI still needs human intervention to function properly, but this necessary labor is often hidden. For example, ChatGPT uses prompts entered by users to train its models; because those prompts also help train the paid subscription model, many consider this unpaid labor.
Taylor & Francis recently signed a $10 million deal to provide Microsoft with access to data from approximately 3,000 scholarly journals. Authors in those journals were not consulted or compensated for the use of their articles. Some argue that using scholarly research to train generative AI will result in better AI tools, but authors have expressed concern about how their information will be used, including whether use by AI tools will negatively affect their citation counts.
In a more extreme case, investigative journalists discovered that OpenAI paid workers in Kenya, Uganda, and India only $1–$2 per hour to review data for disturbing, graphic, and violent images. In improving its product, the company exposed its underpaid workers to psychologically scarring content. One worker referred to the work as “torture”.
Attribution:
There are ongoing privacy concerns and uncertainties about how AI systems harvest personal data from users. Users may not realize that the system is also harvesting information like the user’s IP address and their activity while using the service. This is an important consideration when using AI in an educational context, as some students may not feel comfortable having their personal information tracked and saved.
Additionally, OpenAI may share aggregated personal information with third parties in order to analyze usage of ChatGPT. While this information is only shared in aggregate after being de-identified (i.e. stripped of data that could identify users), users should be aware that they no longer have control of their personal information after it is provided to a system like ChatGPT.
Attribution: University of Texas Libraries
The increasingly common presence of AI in day-to-day life has heightened the need for transparency in its use: people should be aware of when they are interacting with artificial intelligence, who created the AI they're using, and for what purpose.
Advances in generative AI have made transparency a particular concern. Recent versions of software like ChatGPT can create text in response to a prompt that is indistinguishable from human-produced writing. In academia, this creates concerns over academic integrity in assignments, and is leading to a reevaluation of the types of writing assigned to students. In journalism, some online outlets have already begun publishing articles generated by AI. Given the issues with accuracy in generative AI, a lack of transparency in its use in journalism leads to lower confidence that what we're reading is correct.
World Wide Web Consortium (W3C) standards for ethical machine learning.
UN principles and policies related to AI.
IBM article.
Oxford institute researching ethics and governance in AI.
A search engine that determines whether your images have been used in an AI data set.
Attribution: Duquesne University – Gumberg Library; Willamette University Libraries. Image sourced from Wikimedia Commons.