The Confident Student Analogy: The Core Problem
ChatGPT and other general-purpose AI tools mix truth with fiction, and you can't tell which is which. That's the biggest problem.
Think of it like this: These AI tools are like a really confident student who sometimes gives perfect answers and sometimes makes stuff up. The scary part? Both answers sound equally convincing. You have no way to know if you're getting facts or fabrications.
The Real Problem Under SB 13
Texas law requires schools to have accurate evidence about what's actually in books, whether they're deciding to order them, keep them, move them to different grade levels, or remove them entirely. SB 13 doesn't just ask for quick answers; it requires schools to present clear, documented evidence about a book's content before making a decision.
But general AI tools don’t work that way:
- Sometimes they give you accurate information.
- Sometimes they invent things.
- Sometimes they tell you what you want to hear instead of what's true.
One Texas school district tried this approach, and the AI flagged 57 books as potential violations. Some might've been legitimate concerns. Others could've been completely wrong. Imagine sending that flawed evidence to your school board or parent committees for approval decisions. You'd be basing major policy choices on information you can't verify.
Even Legal Experts Get Burned
In one widely reported case, lawyers used ChatGPT for legal research. The tool invented fake court cases that sounded completely real. A federal judge fined those lawyers $5,000 and publicly sanctioned them. The AI mixed real legal language with total fabrications, and the lawyers couldn't tell the difference.
If trained legal professionals can be fooled, busy school administrators face the same risk.
The Core Issue: Why General AI Can't Be Trusted for Book Review
General-purpose AI tools are designed to please you, not to inform you accurately. For decisions about ordering, relocating, or removing books under SB 13, you need verifiable evidence about actual content. Consumer AI tools aren't purpose-built for this work and can't guarantee accuracy, which puts your entire review process at risk.
What Districts and Staff Should Do Instead
- For compliance, rely only on official publisher summaries, digital previews, excerpted documentation, or trusted vendor platforms.
- Never use ChatGPT (or similar tools) as the primary evidence source for SB 13 decisions.
- Adopt this language directly in district policy: “No book review decisions will be made using ChatGPT or similar AI tools as primary evidence.”
Conclusion
Technology can help, but only transparent, evidence-backed review processes truly protect students, educators, and schools under SB 13. AI is like the “confident student”: sometimes right, sometimes totally wrong, always convincing.
Sources
- Texas schools are using AI to screen library books under new state law (Houston Chronicle, 2025)
- Pearland ISD board reviews several newly passed public-education bills, staff to map local policy changes (Citizen Portal AI, 2025)
- SB 13: Deep Dive into New School Library Restrictions (Texas AFT, 2025)
- Senate Bill 13 Requirements Related to School Library Materials (Texas Education Agency, 2025)
- 3 Things To Know About SB 13 and New Library Book Requirements (Texas Association of School Boards, 2025)
- Update on the ChatGPT Case: Counsel Who Submitted Fake Cases Are Sanctioned (Seyfarth Shaw LLP, 2023)
- Fake Cases, Real Consequences: Misuse of ChatGPT Leads to Sanctions (Goldberg Segalla, 2023, PDF)
- Johnson v. Dunn: Fabricated Legal Authority AI (Natural and Artificial Law, 2025)