
GenAI Primer for Students - WORKSHOP

Welcome to the GenAI Primer for Students Workshop!

In this workshop we will:

  • Learn what generative AI is and what it isn't.
  • Discuss university and instructor policies regarding the use of GenAI.
  • Reflect on making informed, intentional choices about using GenAI in school and in life.

What is GenAI?

GenAI is a type of machine learning. It refers to machine-learning models that use neural networks to train on vast amounts of data. After training (i.e., learning), the models produce novel output based on patterns in their training sets, much like an advanced auto-complete.
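To make the "advanced auto-complete" idea concrete, here is a deliberately tiny sketch. It is not how real GenAI works internally (real models use neural networks trained on vast data); it only illustrates the core mechanic of predicting the most likely next word from patterns seen in training. All the words and counts here are made up for illustration.

```python
from collections import Counter, defaultdict

# Toy "auto-complete": count which word tends to follow which in a
# tiny "training set", then predict the most likely next word.
training_text = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat chased the dog"
).split()

# Bigram counts: for each word, how often does each other word follow it?
following = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("sat"))  # "on" — the most frequent follower of "sat"
```

Notice that the program has no idea what a cat or a mat *is*; it only knows which words tend to appear together. That gap between pattern and understanding is exactly the limitation the list below describes.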
[Image: simplistic illustration of a server rack with wires trailing out of it; a yellow sticky note with a drawing of cartoon sparkles is taped to the rack. Image credit: Beckett LeClair]

GenAI is not:

  • Magic.
  • Sentient, self-aware, or capable of independent thought or judgment.
  • A conscious being with intentions or goals.
  • Perfect or infallible. 
  • Capable of understanding context like a human.

Generative AI is a powerful but fundamentally mechanical and limited technology that requires thoughtful, ethical use.

For a longer explanation, check out this video created by OSU students. 

Image credit: Beckett LeClair / Sparkles / Licensed under CC BY 4.0

Hallucinations (Fabricated Misinformation)

Generative AI hallucinations happen when GenAI confidently makes up facts, quotes, images, or answers that sound real but are actually fake or completely wrong. Imagine asking a chatbot for a source, and it invents a convincing book or article that doesn’t exist, or an image generator draws an animal with extra legs because it doesn’t actually know what’s real.

Why do they happen?

  • Gaps in training data: The AI only “learns” from what people feed it. If that info is incomplete, biased, or missing, the model fills the gap, like a friend who answers every question, even if they have no clue.

  • Pattern guessing: Generative AI doesn’t “understand” the world. It predicts the most likely next word, image pixel, or sound chunk based on patterns, not reality. If it’s missing facts or context, it just invents something statistically likely, but factually wrong.

  • Coded to engage users: Because AI aims to produce something for every prompt, it generates answers even when uncertain. If it’s been “rewarded” (by users or developers) for creativity, it may lean into making up stuff that fits your request, not the truth.
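The three causes above can be shown together in a toy sketch. This is an analogy, not an actual model: the point is that a system built to *always* answer will fill gaps in its "knowledge" with something plausible-sounding rather than saying "I don't know." The facts and answer list here are invented for illustration.

```python
import random

# A tiny "knowledge base" with gaps (cause 1: incomplete training data).
known_facts = {
    "capital of France": "Paris",
    "capital of Japan": "Tokyo",
}

# Answers that merely *look* plausible (cause 2: pattern, not reality).
plausible_answers = ["Paris", "Tokyo", "Berlin", "Springfield"]

def confident_answer(question):
    """Always return an answer, whether or not one is actually known
    (cause 3: built to produce something for every prompt)."""
    if question in known_facts:
        return known_facts[question]
    # Gap in the data: invent something statistically plausible.
    return random.choice(plausible_answers)

print(confident_answer("capital of France"))   # "Paris" (correct)
print(confident_answer("capital of Wakanda"))  # a confident fabrication
```

The second call never signals uncertainty; the fabricated answer comes back in exactly the same confident form as the correct one, which is what makes hallucinations easy to miss.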

Why should you care?

If you use GenAI for research, homework, or legal advice, repeating fake information can get you into trouble. Always double-check anything GenAI creates, especially if it’s presented as a fact, source, or citation.

Note: In September 2025, OpenAI (the company behind ChatGPT) published research concluding that hallucinations are mathematically inevitable, so don't expect them to go away.

Generative AI Tools

Defining generative AI can be challenging because there is no single, universally accepted definition. While tools such as ChatGPT, Google Gemini, Claude, DALL-E, and Copilot are widely recognized as generative AI, many everyday applications also use it in less obvious ways. Programs like Grammarly, Microsoft Word, and Google Docs incorporate generative AI features to suggest spelling, grammar, style, or phrasing improvements. As a result, generative AI is embedded in digital environments in ways that are not always obvious to users.