From 3a4afbb433acd7a7d80877bed955215082484a69 Mon Sep 17 00:00:00 2001
From: JimWentworth <94124230+JimWentworth@users.noreply.github.com>
Date: Wed, 7 Jan 2026 16:14:37 -0600
Subject: [PATCH 1/2] Refine AI essentials navigation

---
 .../activity-hallucinations-rag.html       | 149 ++++
 .../activity-prompts-temperature.html      | 183 +++++
 ai-essentials/activity-tokens-context.html | 156 +++++
 ai-essentials/index.html                   | 108 +++
 ai-essentials/shared.js                    | 201 ++++++
 ai-essentials/styles.css                   | 646 ++++++++++++++++++
 ai-essentials/terminology.html             | 549 +++++++++++++++
 7 files changed, 1992 insertions(+)
 create mode 100644 ai-essentials/activity-hallucinations-rag.html
 create mode 100644 ai-essentials/activity-prompts-temperature.html
 create mode 100644 ai-essentials/activity-tokens-context.html
 create mode 100644 ai-essentials/index.html
 create mode 100644 ai-essentials/shared.js
 create mode 100644 ai-essentials/styles.css
 create mode 100644 ai-essentials/terminology.html

diff --git a/ai-essentials/activity-hallucinations-rag.html b/ai-essentials/activity-hallucinations-rag.html
new file mode 100644
index 0000000..12256a1
--- /dev/null
+++ b/ai-essentials/activity-hallucinations-rag.html
@@ -0,0 +1,149 @@
+AI Essentials for Faculty
+Hallucinations: When AI models confidently generate false or fabricated information. This happens because LLMs predict the next word based on patterns in their training data—they don't access a truth database or verify facts.
+ +RAG (Retrieval-Augmented Generation): A two-step process that reduces hallucinations by combining search with generation. First, the system retrieves relevant information from external sources. Second, it provides this retrieved information to the LLM as context when generating responses.
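The two-step loop above can be sketched in a few lines of Python. Everything here is hypothetical scaffolding: `search_index` stands in for a real retriever (a search engine or vector store), and `llm_complete` stands in for a real model API call.

```python
def search_index(query, k=3):
    # Hypothetical retriever: a real system would query a search
    # engine or vector store; this one scans a toy corpus.
    corpus = {
        "office hours": "Office hours are Tuesdays 2-4 pm in Room 112.",
        "late policy": "Late submissions lose 10% per day.",
        "final exam": "The final exam counts for 30% of the grade.",
    }
    hits = [text for topic, text in corpus.items()
            if any(word in query.lower() for word in topic.split())]
    return hits[:k]

def llm_complete(prompt):
    # Hypothetical LLM call; a real system would send `prompt` to an API.
    return f"[answer grounded in context: {prompt}]"

def rag_answer(question):
    # Step 1: retrieve relevant passages from an external source.
    passages = search_index(question)
    # Step 2: hand the retrieved text to the model as context.
    context = "\n".join(passages) or "(nothing found)"
    prompt = (f"Answer using ONLY this context:\n{context}\n\n"
              f"Question: {question}")
    return llm_complete(prompt)

print(rag_answer("When are your office hours?"))
```

Because the model is instructed to answer only from retrieved text, a missing passage surfaces as "nothing found" rather than a fluent fabrication; that grounding step is what the RAG-based tools below rely on.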
+ +Key Difference: ChatGPT relies primarily on training patterns from its knowledge cutoff date, making it prone to hallucinations about recent events or obscure topics. Perplexity searches the web first and provides cited answers, reducing hallucinations. NotebookLM only uses your uploaded documents, grounding responses in your sources.
+Test ChatGPT's tendency to hallucinate by asking about something specific and recent:
+Observe: ChatGPT may confidently provide details about a study that doesn't exist, or conflate multiple studies.
+Now try the identical query in Perplexity, which uses RAG by searching the web first.
+Observe: Perplexity searches current sources and provides citations. If it can't find the study, it will say so.
+Upload 3-5 research papers or course materials to NotebookLM, then ask questions about them.
+Click each card to see how RAG changes AI behavior:
+ +ChatGPT (Standard Mode)
+Click to see details →
+Perplexity / NotebookLM
+Click to see details →
+Click the cards above to flip them and see more details
+Have you finished this activity?
+ +AI Essentials for Faculty
+Prompt: The input text or instructions given to an AI model to guide its response or behavior. The quality and specificity of your prompt directly affects the quality of the AI's output.
+ +Temperature: A parameter that controls randomness in text generation. Lower values (0.0-0.3) make output focused and deterministic—the model will choose the most probable next words, resulting in consistent, predictable responses. Higher values (0.7-1.0) increase creativity and variability—the model considers less probable options, producing more diverse and surprising outputs.
+Important Note: In most AI chat interfaces like ChatGPT and Copilot, you don't directly control temperature—it's preset by the system. However, your prompt still shapes how varied the output is: a tightly specified request concentrates probability on a few likely continuations, while an open-ended request spreads it across many.
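Where temperature is exposed (typically in API access rather than chat UIs), it rescales the model's next-token scores before sampling. A minimal sketch with made-up logit values:

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    # `logits` maps candidate tokens to raw scores (illustrative values,
    # not from a real model). Dividing by a temperature below 1 sharpens
    # the distribution (deterministic); above 1 flattens it (surprising).
    scaled = {tok: score / temperature for tok, score in logits.items()}
    top = max(scaled.values())
    exps = {tok: math.exp(s - top) for tok, s in scaled.items()}  # stable softmax
    total = sum(exps.values())
    r = random.random()
    cumulative = 0.0
    for tok, e in exps.items():
        cumulative += e / total
        if r < cumulative:
            return tok
    return tok  # guard against floating-point rounding

random.seed(0)
print(sample_next_token({"the": 4.0, "a": 2.0, "zebra": 0.5}, temperature=0.2))
# At temperature 0.2 the top-scoring token wins ~99.99% of the time;
# at temperature 1.5 all three candidates get sampled regularly.
```

This is why low temperature gives the consistent, predictable behavior described above, while high temperature produces varied openings from the same prompt.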
+Start with a prompt that requests factual, straightforward information:
+Click "regenerate" 3 times and observe: The core facts stay the same, but phrasing varies slightly. This is low-temperature behavior—consistent information with minor stylistic changes.
+Regenerate this 3 times and observe: You get very different opening lines—some formal, some dramatic, completely different framing.
+Try these variations and notice how outputs differ:
+vs.
+The first prompt triggers lower temperature behavior. The second triggers higher temperature behavior.
+Regenerate several times. Because this prompt is ambiguous, you'll see high variation.
+Best for: Technical documentation, grading rubrics, factual summaries, instructional content
+Best for: Brainstorming, generating multiple versions, creative writing, exploring possibilities
+Click each card to compare:
+ +"Tell me about AI"
+Click to see details →
+"Explain transformers in NLP for grad students"
+Click to see details →
+Click the cards above to flip them and see more details
+Have you finished this activity?
+ +AI Essentials for Faculty
+Tokens: The basic units of text that AI models process. A typical English word is 1-2 tokens; numbers and punctuation consume tokens as well. This matters because AI models can only process a limited number of tokens at once.
+ +Context Window: The maximum amount of text (measured in tokens) that an AI model can consider at one time, including both your input and the model's output. Once you exceed the context window, the model starts "forgetting" earlier parts of the conversation.
+ +Key Insight: Different AI tools have dramatically different context windows. ChatGPT handles ~128,000 tokens (~96,000 words or ~300 pages). Claude handles ~200,000 tokens (~150,000 words or ~500 pages).
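A rough way to reason about these limits is the common ~4 characters-per-token heuristic for English prose (real tokenizers, such as OpenAI's tiktoken, give exact counts). The sketch below uses that heuristic; `reserve_for_output` is an arbitrary illustrative value, not a tool default.

```python
def estimate_tokens(text):
    # Rough heuristic: ~4 characters per token for English prose.
    # Real tokenizers (e.g. OpenAI's tiktoken) give exact counts.
    return max(1, len(text) // 4)

def fits_in_window(text, window_tokens, reserve_for_output=4_000):
    # The window holds input AND output, so reserve room for the reply.
    return estimate_tokens(text) + reserve_for_output <= window_tokens

paper = "word " * 8_000                 # ~8,000 words of filler text
print(estimate_tokens(paper))           # 10000 under this heuristic
print(fits_in_window(paper, 128_000))   # True: fits a ChatGPT-sized window
print(fits_in_window(paper, 8_000))     # False: too big for a small window
```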
+Upload or paste a short research paper, article, or document into ChatGPT or Claude.
+Observe: The AI should handle this easily and reference specific sections accurately.
+Upload a longer paper, book chapter, or report.
+Observe: Does the AI maintain accuracy across different sections? Can it connect ideas from the beginning and end?
+Upload a dissertation chapter, book, or comprehensive report to Claude.
+Observe: Even with a large context window, very long documents may lead to the AI emphasizing later sections over earlier ones.
+Have an extended conversation with multiple back-and-forth exchanges (10+ turns). Then ask the AI to reference something from your first message.
+Example: Claude's 200K token window can handle an entire PhD dissertation (~500 pages) in one conversation, while ChatGPT's 128K window might require breaking it into chunks.
+Even with large context windows, breaking complex analysis into focused questions often yields better results than asking the AI to process everything at once. Instead of "Analyze this 300-page book," try:
+This approach works better because it directs the AI's attention to specific sections rather than trying to "see" the entire document at once.
+128,000 tokens
+~96,000 words
+~300 pages
+Best for: Most research papers, book chapters, typical academic documents
+200,000 tokens
+~150,000 words
+~500 pages
+Best for: Entire books, dissertations, large codebases, comprehensive literature reviews
++ ⚠️ Important: Context window includes BOTH input and output. If you upload a 100,000 token document, the AI has only 28,000 tokens left (in ChatGPT) or 100,000 tokens left (in Claude) for its response and any follow-up conversation. +
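The arithmetic in the warning above is simply the window size minus the input size; a one-line helper makes the budget explicit:

```python
def tokens_left_for_reply(window_tokens, input_tokens):
    # The context window covers input AND output together, so whatever
    # the input consumes is unavailable for the model's reply.
    return max(0, window_tokens - input_tokens)

print(tokens_left_for_reply(128_000, 100_000))   # 28000  (ChatGPT-sized window)
print(tokens_left_for_reply(200_000, 100_000))   # 100000 (Claude-sized window)
```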
+Have you finished this activity?
+ +Center for Innovation in Teaching & Learning
++ Explore the activities below to learn core AI concepts through practical application with real tools. Each activity is + designed for self-paced learning and saves your progress as you go. +
+Understand how prompt phrasing influences creativity and consistency.
+ Not completed +Compare AI answers with and without retrieval to spot hallucinations.
+ Not completed +Learn how context limits impact long documents and conversations.
+ Not completed +Flip through quick definitions for core concepts, tools, and models.
++ Contact citl-info@illinois.edu for additional support and guidance. +
+University of Illinois
+Click any card to reveal its definition
+