Artificial Intelligence Now: ChatGPT + AI Literacy Toolbox: AI Concerns

Find resources on artificial intelligence, ChatGPT, writing with AI assistance, AI academic productivity tools, plagiarism, prompt engineering, GPT misinformation and hallucinations, AI image tools, AI literacy, and discussions related to AI ethics.

Rebecca Sweetman: Some Harm Considerations of LLMs

Ethical Concerns & ChatGPT

Artificial intelligence can be a helpful addition to the information literacy toolbox for brainstorming topics, refining writing, and exploring cross-disciplinary perspectives. Generative AI may also help first-generation college students, who are often resourceful and creative in their approach to academics and campus life, compete on more equal footing in culturally biased environments.

However, ChatGPT has known problems with accuracy and reliability. It can present itself to researchers as an authority on any subject while providing citations to sources that do not exist or are not relevant to the topic at hand. This can make it harder to find accurate information and can lead researchers to draw incorrect conclusions.

ChatGPT, like many AI systems, also has well-documented issues with bias and misinformation. AI systems are trained on data that is often biased, and that bias can be reflected in their output. For example, a system trained on news articles drawn mostly from one political perspective may be more likely to generate text reflecting that perspective. AI systems can also be manipulated to produce false or misleading information: an attacker could create a fake news article and then use an AI system to generate supporting text, which could spread online and make it difficult to distinguish real news from fake. In some cases, overcorrection intended to combat bias has left the very communities it was meant to protect out of the conversation entirely.

It is important to be aware of the potential for bias and misinformation when using AI systems and to take steps to mitigate these risks. Librarians at FIU hope to develop tools that promote AI literacy and the evaluation of generative AI.