ChatGPT and other general-purpose AI tools mix truth with fiction, and you can't tell which is which. That's the biggest problem.
Think of it like this: These AI tools are like a really confident student who sometimes gives perfect answers and sometimes makes stuff up. The scary part? Both answers sound equally convincing. You have no way to know if you're getting facts or fabrications.
Texas law requires schools to have accurate evidence about what's actually in books, whether they're deciding to order them, keep them, move them to different grade levels, or remove them entirely. SB 13 doesn't ask for quick answers; it requires schools to present clear, documented evidence about book content before making a decision.
But general-purpose AI tools don't work that way.
One Texas school district tried this, and the AI flagged 57 books as potential violations. Some might've been legitimate concerns. Others could've been completely wrong. Imagine sending that flawed evidence to your school board or parent committees for approval decisions. You'd be basing major policy choices on information you can't verify.
Real lawyers used ChatGPT for legal research. It invented fake court cases that sounded completely real. A federal judge fined those lawyers $5,000 and publicly sanctioned them. The AI mixed real legal language with total fabrications, and the lawyers couldn't tell the difference.
If trained legal professionals get fooled, busy school administrators face the same risk.
General-purpose AI tools are designed to please you, not to inform you accurately. For decisions about ordering, relocating, or removing books under SB 13, you need verifiable evidence about actual content. Consumer AI tools aren't purpose-built for this work and can't guarantee accuracy, which puts your entire review process at risk.
Technology can help, but only transparent, evidence-backed review processes truly protect students, educators, and schools under SB 13. AI is like the “confident student”: sometimes right, sometimes totally wrong, always convincing.