WHAT YOU'LL LEARN
- Why quick AI answers can reduce learning even when they are correct
- What the Wharton chess study found, and why it matters for your students
- What Slow AI looks like in a real classroom
- Why teacher involvement is the single biggest factor in AI learning outcomes
The question nobody is asking
Most conversations about AI in schools focus on whether students are using it. The more important question is how.
Fast AI means using AI to get quick answers, skip the struggle, and produce polished outputs rapidly. Paste the question, read the answer, move on. It feels efficient. It produces results that look like learning.
Slow AI means using AI to think harder, not less. To have a question answered with another question. To have your reasoning challenged. To receive a hint rather than a solution. It feels harder. It produces actual learning.
The OECD's Andreas Schleicher puts it plainly: "GenAI is clearly not a magic wand. It is capable of magnifying good pedagogy and bad." The same tool, used differently, produces completely different outcomes.
INTERACTIVE
Fast AI or Slow AI?
Read each classroom scenario. Classify how the student is using AI.
Why struggle matters
Productive struggle is the effort of working through something that is genuinely difficult. Not impossible: that is just frustration. Not easy: that is not learning. The zone in between, where a student is challenged but not overwhelmed, is where the most durable learning happens.
Derek Muller, who runs the Veritasium science channel and holds a PhD in physics education, draws the connection directly: "There's a danger in using AI or any technology in education if it causes us to skip the effort." Learning, he argues, requires converting slow deliberate thinking into automatic fast knowledge through practice. Skipping the practice skips the learning.
The generation effect, a well-established finding in cognitive science, confirms this. Information that learners produce themselves is remembered significantly better than information they passively receive. A meta-analysis of 86 studies found retention improvements of 20 to 40% when students generated answers rather than reading them. When AI provides complete answers, students skip the generative process entirely.
This is not an argument against AI in classrooms. It is an argument for understanding what happens cognitively when students use it.
KEY RESEARCH
The Wharton Chess Experiment
Researchers at the Wharton School studied more than 200 chess students over three months, comparing two groups: one with structured, system-regulated AI guidance, and one with on-demand, self-regulated AI access.
Students with structured guidance improved by approximately 64%. Students with unrestricted access improved by only 30%, less than half the gains. The unrestricted group also completed 24% fewer training games, showing that easy AI access reduced motivation to practice.
Bastani, Poulidis, and Bastani. Wharton School Research Paper, October 2025
More problems, less understanding
A field experiment with Turkish high school math students, published in PNAS in 2025, found a result that every teacher should know.
Students using ChatGPT without guardrails completed 48% more problems correctly during practice sessions. On paper, that looks like improvement. But when those same students were tested on conceptual understanding, they scored 17% lower than students who had learned without AI assistance.
They got more problems right during practice by leaning on AI. They understood the material less as a result.
The critical caveat: the same study found that a well-designed "GPT Tutor" with appropriate guardrails (one that used Socratic questioning rather than direct answers) largely eliminated the negative effects. The tool is the same. The design is different. The outcome is different.
Cognitive offloading: what it looks like
Cognitive offloading is what happens when a person outsources mental effort to an external tool. A calculator offloads arithmetic. GPS offloads navigation. AI, used as an answer machine, offloads thinking.
A 2025 study published in Societies found a significant negative correlation between frequent AI tool usage and critical thinking abilities across 666 participants. A randomized controlled trial measuring 45-day retention found students who learned with ChatGPT retained substantially less than those using traditional methods, with an effect size of 0.68, approximately an 11 percentage-point gap.
Anthropic's own research found that students asked Claude for direct answers almost half the time, offloading higher-order cognitive functions at exactly the level where learning is supposed to happen.
Harvard's Christopher Dede captures the classroom implication: "If a student uses AI to do the work for them, rather than to do the work with them, there's not going to be much learning."
The most important number in this research
A 2025 meta-analysis by Gu and Yan, published in SAGE Journals and covering 19 studies, found an overall positive effect of generative AI on academic performance, with an effect size of 0.683. That is a meaningful result.
The breakdown is what matters. With teacher support: effect size 1.426. Without teacher support: effect size 0.077.
AI without teachers achieves almost nothing. AI with skilled teacher involvement is transformative. The teacher is not a barrier to effective AI use in classrooms. The teacher is the mechanism by which AI becomes effective.
This is the single most important finding for anyone who worries about being replaced by AI. The research says the opposite is true: teacher judgment, presence, and instructional design are what make AI valuable in learning contexts. Without them, AI in classrooms is essentially noise.
What Slow AI actually looks like in practice
The distinction is not about which tool you use. It is about how the interaction is structured. Here is the same student task handled two ways.
FAST AI
"Write me a paragraph explaining photosynthesis for my biology assignment."
AI writes the paragraph. Student pastes it in. Assignment complete. Nothing learned.
SLOW AI
"I have to explain photosynthesis. I think it involves sunlight and carbon dioxide but I'm not clear on what glucose is doing. Ask me questions to check if I understand it correctly."
AI probes the understanding. Student has to explain, correct, and refine. Something is learned.
The Bellwether report from June 2025 frames it well: "AI's role in accelerating or weakening learning largely rests on how well it can turn the dial of productive struggle up or down." Not all friction is beneficial, and not all ease is harmful. The teacher's job is to set the dial deliberately.
Section 3 covers the specific tools and prompts that make Slow AI work in practice.
SOURCES
- Wharton — When Does AI Assistance Undermine Learning?
- Bastani et al. — Generative AI Without Guardrails Can Harm Learning (PNAS, 2025)
- Gu & Yan — Effects of GenAI Interventions: A Meta-Analysis (SAGE, 2025)
- Bellwether — Productive Struggle: AI and Learning (2025)
- OECD — How to effectively use Generative AI in education
- HEPI — Slowing Down AI in Higher Education (2025)
- ScienceDirect — ChatGPT as a cognitive crutch: Randomized controlled trial