
AI Literacy

Navigating AI for Students and Faculty

Overview of AI Ethics

Artificial Intelligence brings immense potential for innovation and progress, but it also raises significant ethical concerns that must be carefully considered. AI ethics is the study and application of principles that guide the responsible development and use of artificial intelligence. It addresses the potential social, legal, and moral implications of AI technologies to prevent harm and promote equitable outcomes.

AI ethics is a crucial component of AI literacy. As AI systems become more integrated into our daily lives and decision-making processes, we must address these ethical challenges to ensure responsible development and use. Below are a few of the key ethical considerations in AI.

Image sourced from Canva.com

Ethical Issues

Algorithmic Bias

A significant limitation of AI is the bias that can be embedded in the content it generates. Trained on immense amounts of text available on the internet, large language models simply predict the most likely sequence of words in response to a given prompt, and they therefore reflect and perpetuate the biases inherent in that training data. An additional source of bias is that some generative AI tools use reinforcement learning with human feedback (RLHF), and the human testers who provide this feedback are themselves non-neutral. Accordingly, generative AI like ChatGPT has been documented producing output that is socio-politically biased, occasionally even containing sexist, racist, or otherwise offensive information.
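The "predict the most likely next word" mechanism described above can be illustrated with a toy sketch. This is a deliberate simplification (a bigram word-count model, not how production LLMs are built), and the tiny "corpus" is invented for illustration, but it shows the core point: whatever skew exists in the training text is echoed directly in the model's predictions.

```python
from collections import Counter, defaultdict

# A tiny, deliberately skewed "training corpus" -- like much internet text,
# it mentions "he" more often than "she" after "said".
corpus = (
    "the doctor said he would help . "
    "the doctor said he was busy . "
    "the doctor said she would help ."
).split()

# Count which word follows each word (a simple bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("doctor"))  # "said"
print(predict_next("said"))    # "he" -- the corpus skew, echoed back
```

The model never "decides" to be biased; it simply reproduces the majority pattern in its data, which is why biased training text yields biased output.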

Related Recommendations  

  • Meticulously fact-check all of the information produced by generative AI, including verifying the source of all citations the AI uses to support its claims.
  • Critically evaluate all AI output for any possible biases that can skew the presented information. 
  • Avoid asking the AI tools to produce a list of sources on a specific topic as such prompts may result in the tools fabricating false citations. 
  • When available, consult the AI developers' notes to determine if the tool's information is up-to-date.
  • Always remember that generative AI tools are not search engines; they simply use large amounts of data to generate responses constructed to "make sense" according to common cognitive paradigms.

The Social Dilemma – Bonus Clip: The Discrimination Dilemma by Exposure Labs

How I'm fighting bias in algorithms | Joy Buolamwini


Attribution: Georgetown University Library; University of Texas Libraries; OpenAI.

 

Is Content Created by Generative AI Tool Copyrightable?

Currently, copyright protection is not granted to works created by artificial intelligence. The U.S. Copyright Office has issued guidance explaining the requirement of human authorship for copyright protection and providing information to creators working in tandem with AI tools on how to effectively and correctly register their works.

US Copyright Office and Artificial Intelligence – "The Copyright Office has launched an initiative to examine the copyright law and policy issues raised by artificial intelligence (AI) technology, including the scope of copyright in works generated using AI tools and the use of copyrighted materials in AI training."

Copyright Registration Guidance – Guidance for registering Works Containing Material Generated by Artificial Intelligence by the U.S. Copyright Office.


Generative AI Copyright Lawsuits – select links

ChatGPT and Generative AI Are Hits! Can Copyright Law Stop Them? by Bloomberg Law


Copyright Issues

The input to generative AI

  • Should it be considered fair use? This is widely debated.

Argument A. No, it's a copyright violation

  • Copyright law is AI's 2024 battlefield - "Copyright owners have been lining up to take whacks at generative AI like a giant piñata woven out of their works. 2024 is likely to be the year we find out whether there is money inside," James Grimmelmann, professor of digital and information law at Cornell, tells Axios. "Every time a new technology comes out that makes copying or creation easier, there's a struggle over how to apply copyright law to it." 

This will affect not only OpenAI, but Google, Microsoft, and Meta, since they all use similar methods to train their models.
Argument B. Yes, it's fair use

“Done right, copyright law is supposed to encourage new creativity. Stretching it to outlaw tools like AI image generators—or to effectively put them in the exclusive hands of powerful economic actors who already use that economic muscle to squeeze creators—would have the opposite effect.”

Other countries

Several corporations, including Adobe, Google, Microsoft, and Anthropic (for Claude), have offered to pay legal bills from lawsuits brought against users of their tools.

The output of generative AI

Can you copyright something you made with AI?
OpenAI says:
"... you own the output you create with ChatGPT, including the right to reprint, sell, and merchandise – regardless of whether output was generated through a free or paid plan."

The U.S. Copyright Office says:
The term "author" ... excludes non-humans.

But if you select or arrange AI-generated material in a sufficiently creative way, the resulting work may qualify for protection. In these cases, copyright will only protect the human-authored aspects of the work. For an example, see this story of a comic book: the U.S. Copyright Office determined that the selection and arrangement of the images IS copyrightable, but not the images themselves (made with generative AI).

In other countries, different rulings may apply, see:
Chinese Court’s Landmark Ruling: AI Images Can be Copyrighted



"Training a single AI model can emit as much carbon as five cars in their lifetimes." Karen Hao

Generative AI tools require a significant amount of computational processing power to function, which is provided by high-performance servers housed in physical data centers located across the country. These centers require massive amounts of electricity to keep tools operational, as well as water to keep the servers cool. Many AI companies have not revealed just how much electricity and water are used by their tools, or how much will be needed in the future. As such, there are significant unanswered questions about the environmental costs of keeping generative AI tools functional. 

Select Resources on AI and the Environment

Image sourced on Canva.com

How AI and data centers impact climate change by CBS Mornings.


Attribution: Olympic College; University of Texas Libraries

"As we move into a detailed analysis of AI’s role in modern society, the focus shifts to how this technology, while heralded as a tool of efficiency and progress, actually reproduces and exacerbates inequalities. This is evident in the labor practices within the tech industry, where AI development often relies on underpaid and undervalued workers from marginalized communities, perpetuating a cycle of exploitation and exclusion."

Nelson Colón Vargas 

 

AI At What Cost?

AI still needs human intervention to function properly, but this necessary labor is often hidden. For example, ChatGPT uses prompts entered by users to train its models. Since these prompts are also used to train its subscription model, many consider this unpaid labor.

Taylor & Francis recently signed a $10 million deal to provide Microsoft with access to data from approximately 3,000 scholarly journals. Authors in those journals were not consulted or compensated for the use of their articles. Some argue that using scholarly research to train generative AI will result in better AI tools, but authors have expressed concern about how their information will be used, including whether use by AI tools will negatively impact their citation numbers.

In a more extreme case, investigative journalists discovered that OpenAI paid workers in Kenya, Uganda, and India only $1–$2 per hour to review data for disturbing, graphic, and violent images. To improve its product, the company exposed these underpaid workers to psychologically scarring content. One worker described the work as “torture.”


Attribution:

  • University of Texas Libraries; Image sourced on Canva.com
  • Vargas, N. C. (2024). Exploiting the Margin: How Capitalism Fuels AI at the Expense of Minoritized Groups. https://doi.org/10.48550/arxiv.2403.06332

 

Privacy in the Age of AI

There are ongoing privacy concerns and uncertainties about how AI systems harvest personal data from users. Users may not realize that a system is also collecting information like their IP address and their activity while using the service. This is an important consideration when using AI in an educational context, as some students may not feel comfortable having their personal information tracked and saved.

Additionally, OpenAI may share aggregated personal information with third parties in order to analyze usage of ChatGPT. While this information is only shared in aggregate after being de-identified (i.e. stripped of data that could identify users), users should be aware that they no longer have control of their personal information after it is provided to a system like ChatGPT.

Select Resources on AI and Privacy


Attribution: University of Texas Libraries

""

The increasingly common presence of AI in day-to-day life has heightened the need for transparency in its use: people should be aware of when they are interacting with artificial intelligence, who created the AI they're using, and for what purpose.

Advances in generative AI have made transparency a particular concern. Recent versions of software like ChatGPT can create text in response to a prompt that is indistinguishable from human-produced writing. In academia, this creates concerns over academic integrity in assignments, and is leading to a reevaluation of the types of writing assigned to students. In journalism, some online outlets have already begun publishing articles generated by AI. Given the issues with accuracy in generative AI, a lack of transparency in its use in journalism leads to lower confidence that what we're reading is correct.

Resources


Attribution: Duquesne University – Gumberg Library; Willamette University Libraries; Image sourced from Wikimedia Commons.

Reference 909-384-8289 • Circulation 909-384-4448