Why Teachers Are Being Left Out of AI Policy Decisions
67% of schools report offering AI training to staff. 68% of teachers report they have never received any such training. SchoolAI, 2026
Both figures can only be true if the training reported at the administrative level is not reaching the classroom. That gap is a symptom of a larger structural problem: the people executing AI policy had no hand in writing it.
- District AI policies are frequently written by consultants, vendors, or administrators who have not managed a classroom in years. The resulting policies focus on liability, not instruction.
- Administrators treat AI as a tame problem with a fixed solution. Teachers experience it as a wicked problem requiring continuous judgment. Tame solutions applied to wicked problems fail.
- The 67/68 training paradox: schools report training that teachers never receive, which reflects a top-down policy culture that treats teachers as implementers rather than authors.
- Teacher-led policy looks different: it names specific assignment-level expectations, accepts that guidance will need updating, and treats professional development as a prerequisite rather than an afterthought.
Who Is Writing These Policies
The structural problem in school AI policy is not primarily about technology. It is about who is in the room when decisions are made.
District AI guidelines are frequently authored by consultants, technology vendors, or administrators whose last classroom experience was a decade or more ago. Medium, 2026 These are people with real expertise in risk management, legal compliance, and institutional governance. They are not people with current expertise in what it is actually like to manage a class of thirty students with varying abilities, varying home access to technology, varying relationships with academic work, and now varying access to AI tools that change capability every three months.
The policies that result focus heavily on what the institution needs to be protected from: lawsuits, media exposure, academic integrity scandals. They focus very little on what teachers need to actually teach in this environment: clear expectations, supported professional development, assessment frameworks that function in AI conditions, and enough institutional backing to enforce the rules consistently.
Teachers are handed the finished policy document and asked to implement it. When it fails in practice, the failure is attributed to individual teacher inconsistency rather than to a policy design process that never involved the people responsible for execution. One high school teacher described being given an AI-generated rubric by district leadership and asked to have a genuine discussion with colleagues about whether students were meeting its criteria: "I find it insulting to always be given assignments made with little to no effort." r/Teachers, 2026
Tame vs Wicked Problems
Research in educational assessment theory describes a distinction that explains why most AI policies fail before they reach the classroom. Taylor & Francis, 2025
A tame problem has a definable solution. It can be solved by the right combination of rules, tools, and enforcement mechanisms. Once the solution is implemented, the problem is managed. Administrators consistently treat AI in schools as a tame problem: identify the threat, write a ban, deploy detection software, monitor compliance.
A wicked problem is defined by permanent uncertainty, shifting variables, and the need for continuous professional judgment rather than one-time systemic fixes. There is no point at which the problem is solved. The best available response today will need revision in six months because the conditions will have changed. Wicked problems require expertise from the people closest to them, sustained over time, not mandates handed down from a distance.
Teachers experience AI in schools as a wicked problem. The specific tool available to students changes every few months. Whether any particular use of AI in any particular assignment constitutes academic misconduct depends on factors that cannot be captured in a district-wide document: the student's history, the assignment's stated objectives, what the teacher told the class on the day the assignment was given, what the student actually understands of what they submitted.
Applying a tame solution to a wicked problem produces a predictable outcome: a policy that looks adequate on paper and collapses under the first contact with a real classroom. That is what is happening in most schools right now. The policy was written for a simpler version of the problem than the one teachers are actually facing.
The Training Paradox
The 67/68 gap (67% of schools reporting AI training offered, 68% of teachers reporting no training received) is not a statistical anomaly. It is a reliable indicator of how top-down policy cultures operate.
When a district administration needs to report on AI readiness, it counts what it can count: training sessions scheduled, resources posted to an intranet, professional development days allocated. Whether those sessions were attended, whether the content was relevant to classroom practice, whether teachers felt equipped to apply anything from them: none of that is easily counted, so it is not counted.
Teachers who have not received meaningful AI professional development are being asked to enforce AI policies, make judgment calls about AI use on a case-by-case basis, and redesign assessments for an AI era. Without training and without involvement in policy design, they are working without the institutional support the policy assumes they have.
Stanford's Victor Lee makes the developmental scope explicit: "AI literacy is not one thing. We need user, critic, and developer competencies, and these look different for elementary versus secondary classrooms." Stanford University, September 2025 A district-wide policy document written by people who have not been in a secondary classroom recently cannot accommodate that differentiation. Only teachers can.
Does Your School's AI Policy Have Teacher Voice?
Five questions indicate how your school's AI policy was developed and what support teachers received. The more of them you can answer yes to, the stronger the teacher inclusion:
- Were classroom teachers involved in drafting the AI policy?
- Has the policy been reviewed or updated based on teacher feedback?
- Have teachers received AI professional development they found genuinely useful?
- Does the policy include specific guidance at the assignment level, not just general principles?
- Do teachers have a formal channel to flag when the policy is unworkable in practice?
What Teacher-Led Policy Looks Like
Teacher-led AI policy is not the absence of structure. It is structure built by the people who have to operate within it.
In practice it means collaborative drafting sessions where classroom teachers are not consulted after the fact but are authoring the document from the beginning. It means the policy is written at the assignment level, not just the principle level: not "AI use must support learning goals" but "for this assignment type, these AI uses are permitted and these are not." It means the document includes an explicit acknowledgment that it will need updating as the tools change, with a named process for how teachers can flag problems and propose revisions.
Boston Public Schools gets this structure right. Their AI policy treats every AI use decision as a case-by-case judgment made by the teacher based on specific learning goals rather than a blanket mandate. Playlab, 2026 That is not absence of policy. It is policy that trusts the professional judgment of the people closest to the learning.
Dr. Robin Harwick from Moreland University frames the professional development piece clearly: "The idea that you have to be a tech whiz to use AI in the classroom couldn't be more wrong. Many AI tools are designed with educators in mind. If you can use a search engine, you can use AI." Moreland University, October 2025 The barrier is not technical competence. The barrier is institutions that hand teachers a policy without equipping them to execute it.
For the full overview of what good AI policy contains and how the districts getting it right built theirs, see What Schools Are Getting Wrong About AI Policy and School AI Policy Examples That Actually Work.
FAQ
Who writes district AI policies?
District AI policies are frequently drafted by consultants, technology vendors, or administrators who have not managed a classroom in over a decade. Structural barriers in education policy chronically exclude authentic teacher representation. The result is mandates that focus on institutional risk management rather than instructional design.
Why do administrators and teachers see the problem so differently?
A tame problem has a definable solution: write a rule, enforce it. Administrators treat AI as a tame problem. Teachers experience AI as a wicked problem: permanent uncertainty, shifting variables, continuous professional judgment required. Tame solutions applied to wicked problems produce policies that look adequate on paper and collapse in the classroom.
What is the training paradox?
67% of schools report offering AI training to staff while 68% of teachers report receiving no such training. The gap reflects a top-down policy culture that counts what is scheduled rather than what is received. Teachers without meaningful professional development are being asked to enforce policies and make judgment calls without institutional support.
What does teacher-led AI policy look like?
Classroom teachers author the document from the beginning, not as post-hoc consultants. The policy specifies what AI use is permitted at the assignment level, not just the principle level. It includes an explicit acknowledgment that guidance will need updating, with a named process for teacher feedback. Boston Public Schools and Peninsula School District are working examples of this approach.
Sources
- SchoolAI. Overcoming Teacher Resistance to AI in the Classroom. 2026. schoolai.com
- Medium. Education Policy Has a Mic Problem. 2026. medium.com
- Taylor & Francis. The Wicked Problem of AI and Assessment. 2025. tandfonline.com
- r/Teachers. Tired of AI in an Educational Setting. 2026. reddit.com
- Stanford University. What Parents Need to Know About AI in the Classroom. September 2025. news.stanford.edu
- Playlab. District and School AI Policies. 2026. learn.playlab.ai
- Moreland University. Debunking the Biggest Myths About Using AI in Education. October 2025. moreland.edu