
This guide is a companion to the workshop, GenAI Primer for Students, delivered by OSU librarian Laurie Bridges.
GenAI is a type of machine learning. It refers to machine-learning models that use neural networks to train on vast amounts of data. After training (i.e., learning), the models produce novel output based on their training data, much like advanced auto-complete.
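To make the "advanced auto-complete" idea concrete, here is a minimal Python sketch. It is only a toy: the tiny corpus and the predict_next helper are invented for illustration, and real models use neural networks trained on billions of examples rather than a simple word-count table.

    # Toy "auto-complete": count which word follows which in some training
    # text, then always predict the most common follower.
    from collections import Counter, defaultdict

    corpus = ("the cat sat on the mat . the dog sat on the rug . "
              "the cat chased the dog .").split()

    # For each word, tally the words that follow it in the training text.
    followers = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        followers[current][nxt] += 1

    def predict_next(word):
        # Pick the follower seen most often during "training".
        return followers[word].most_common(1)[0][0]

    # Produce "novel" output by repeatedly predicting the next word.
    word = "the"
    output = [word]
    for _ in range(6):
        word = predict_next(word)
        output.append(word)
    print(" ".join(output))  # prints: the cat sat on the cat sat

Notice that the output just repeats a likely pattern ("the cat sat") instead of finishing a sensible sentence: the predictor follows statistics, not meaning.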

Generative AI is a powerful but fundamentally mechanical and limited technology that requires thoughtful and ethical use.
For a longer explanation, check out this video created by OSU students.
Image credit: Beckett LeClair / Sparkles / Licensed under CC BY 4.0
Generative AI hallucinations happen when GenAI confidently makes up facts, quotes, images, or answers that sound real but are actually fake or completely wrong. Imagine asking a chatbot for a source, and it invents a convincing book or article that doesn’t exist, or an image generator draws an animal with extra legs because it doesn’t actually know what’s real.
Why do they happen?
Gaps in training data: The AI only “learns” from what people feed it. If that info is incomplete, biased, or missing, the model fills the gap, like a friend who answers every question, even if they have no clue.
Pattern guessing: Generative AI doesn’t “understand” the world. It predicts the most likely next word, image pixel, or sound chunk based on patterns, not reality. If it’s missing facts or context, it just invents something statistically likely but factually wrong (see the sketch after this list).
Coded to engage users: Because AI aims to produce something for every prompt, it generates answers even when uncertain. If it’s been “rewarded” (by users or developers) for creativity, it may lean into making up stuff that fits your request, not the truth.
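The sketch below is again a toy, with invented training counts and a hypothetical who_wrote helper, but it shows how these three causes combine into a hallucination: the model never checks the actual title, it just returns the statistically likely answer, and it never says "I don't know."

    # Toy "hallucination": the model has mostly seen Shakespeare authorship
    # facts, so it confidently answers with the most likely pattern for ANY
    # book, including ones missing from its training data.
    from collections import Counter

    author_counts = Counter({
        "Shakespeare": 3,  # e.g. Hamlet, Macbeth, Othello seen in training
        "Austen": 1,       # e.g. Emma
    })

    def who_wrote(title):
        # No lookup of the actual title happens: a gap in the training data
        # is papered over with the most common pattern, and the function is
        # "coded to engage" -- it always produces an answer.
        return author_counts.most_common(1)[0][0]

    print(who_wrote("Hamlet"))     # Shakespeare -- happens to be right
    print(who_wrote("Moby-Dick"))  # Shakespeare -- confidently wrong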
Why should you care?
If you use GenAI for research, homework, or legal advice, repeating its fabricated information can get you into trouble. Always double-check anything GenAI creates, especially when it’s presented as a fact, source, or citation.
Note: In September 2025, OpenAI, the company behind ChatGPT, acknowledged in a research paper that hallucinations are mathematically inevitable (so don't expect them to go away).
Defining generative AI can be challenging because there is no single, universally accepted definition. While tools such as ChatGPT, Google Gemini, Claude, DALL-E, and Copilot are widely recognized as generative AI, many everyday applications also use it in less obvious ways. Programs like Grammarly, Microsoft Word, and Google Docs incorporate generative AI features to suggest spelling, grammar, style, or phrasing improvements. As a result, generative AI is often embedded in digital environments without users even realizing it.