Evaluate GenAI output

When AI gets it wrong

AI can be wrong in multiple ways: it can give the wrong answer, omit information by mistake, mix truth and fiction, and even make up completely fake people, events, and articles. Below are two examples.

Wrong or misleading answer

Sometimes an AI will confidently return an incorrect answer. This could be a factual error or, as in the example below, inadvertently omitted information. Vanuatu and Vatican City are both real countries, but they are not the only countries that start with the letter V.

Screenshot of a ChatGPT conversation, as follows: "User: Name all the countries that start with V. ChatGPT: There are two countries whose names start with the letter V: Vanuatu and Vatican City. Please note that this list is accurate as of my knowledge cutoff in September 2021, and there may have been changes or additions since then. User: What about Venezuela and Vietnam? ChatGPT: Apologies for the oversight. You are absolutely correct. Here is the corrected list of countries that start with the letter "V": Vanuatu, Vatican City, Venezuela, Vietnam. Thank you for pointing out the missing entries, and I apologize for any confusion caused."

False information

Sometimes, rather than simply being wrong, an AI will invent information that does not exist. Some people call this a “hallucination,” or, when the invented information is a citation, a “ghost citation.”

Screenshot of a citation list generated by ChatGPT, as follows: "User: What scholarly sources were used to generate that description? ChatGPT: As an AI language model, I do not have the ability to conduct research or cite sources independently. However, I can provide a list of scholarly sources related to the symbolism of the green light in The Great Gatsby: 

1. Bruccoli, Matthew J. "F. Scott Fitzgerald's American Dream." In Critical Essays on F. Scott Fitzgerald's The Great Gatsby, edited by Scott Donaldson, 83-95. G.K. Hall & Co., 1984. 
2. Johnson, Claudia Durst. "Seeing and Believing: The Great Gatsby." In Understanding The Great Gatsby: A Student Casebook to Issues, Sources, and Historical Documents, edited by Dalton Gross and MaryJean Gross, 71-87. Greenwood Press, 1998.
3. Lee, A. Robert. "Gatsby's Green Light: The Emanation of a Dream." In Twentieth Century Literature 1, no. 1 (1955): 1-6. 
4. Lydenberg, Robin. "The Symbolic Landscape of The Great Gatsby." In F. Scott Fitzgerald: New Perspectives, edited by Jackson R. Bryer, 153-70. University of Georgia Press, 1982.

These sources and others like them offer detailed analysis and interpretation of the green light as a central symbol in The Great Gatsby."

Hallucinations are trickier to catch because they often contain a mix of real and fake information. In the screenshot above, the authors are all real people and the collections are all real books, but none of the listed articles on The Great Gatsby actually exist.

When ChatGPT gives a URL for a source, it often invents a fake URL or uses a real URL that leads to something completely different. It is essential to double-check the answers AI gives you against a human-created source. You can learn how to fact-check AI text in the sections and video at the bottom of this page.

How to fact-check AI output

Treat AI output the way you would any text that provides no sources, such as some online articles or social media posts. You determine its credibility by consulting outside, human-created sources, using the lateral reading strategy. You can think of this strategy as "tabbed reading": moving laterally away from the AI output to sources in other tabs, rather than proceeding "vertically" down the page based on the AI response alone.


Watch: How to read laterally

Click the plus signs on the image below to learn how to fact-check something you got from ChatGPT or a similar tool.

🚀 Now that you know how to assess the accuracy of AI-generated work,
➡️ Select "Next" below or click here to test your knowledge!