A state education department recently released and circulated an official policy document riddled with false citations because administrators used generative AI to help create it, highlighting concerns over the use of GenAI in educational administration. The incident in Alaska illustrates a critical challenge facing school administrators nationwide: how to balance harnessing AI’s power against the risks of “hallucinations” – seemingly credible but false information – and other pitfalls. What happened in this cautionary tale, and what five steps can you take to ensure you don’t fall into the same trap?
State Education Department Creates Cell Phone Policy
The Alaska Beacon reported on October 28 that Education Commissioner Deena Bishop created a proposed policy related to cellphone use in state schools. That policy was the subject of a resolution the department posted on its website before its next state board meeting. It cited studies supporting the department’s position, including:
- A study from the journal Computers in Human Behavior titled, “Banning mobile phones improves student performance: Evidence from a quasi-experiment;”
- A 2019 study from the American Psychological Association; and
- Two other studies on similar subjects from the Journal of Educational Psychology.
There was just one problem: several of these studies never actually existed.
AI Use Leads to False Studies Being Cited
According to the Alaska Beacon, four of the six citations to studies published in scientific journals were false. Not only were the studies never printed in the issues listed, but their titles could not be found in broad online searches. The hyperlinks attached to the citations sent readers to unrelated work, such as a study titled “Sexualized Behaviors on Facebook,” which has nothing to do with cellphone use in schools.
When the news outlet asked the department about the false studies, state education officials updated the online document in an attempt to correct the mistakes. A spokesperson said the citations were simply filler “placeholders” created during the drafting process, meant to stand in until correct information could be inserted.
Bishop told the media that she had entered a draft resolution into a generative AI platform (think ChatGPT, Claude, or a similar system) to see if it could help identify additional sources. She said she spotted the mistakes before the board meeting and corrected the citations.
However, according to the Alaska Beacon, mistaken references and other remnants of AI hallucinations remained in the second version. After further inquiries from the news outlet, the department once again updated the online version of the resolution. Bishop told reporters there was “nothing nefarious” about the mistakes and that no harm was done.
What Are AI Hallucinations?
AI hallucinations are common problems with GenAI platforms, occurring when systems generate seemingly credible but false information. These outputs often mix accurate information with plausible but incorrect data.
Hallucinations occur due to the way generative AI models are designed. These systems are trained on vast datasets and use probabilistic methods to generate responses based on learned patterns (giving you answers that are “probably” – but not definitely – accurate). However, when faced with gaps in their training data or when prompted for specific information, they may produce content that sounds reasonable but has no basis in reality. This happens because the AI is optimizing for coherence and fluency rather than factual accuracy. The result is a response that appears well-founded but includes details that can be entirely fabricated.
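For readers who want to see that mechanism up close, here is a deliberately simplified Python sketch of probability-based text generation. The toy “model,” its phrases, and its probabilities are all invented for illustration – no real system is this small – but the key point holds: the sampler scores only how likely a continuation sounds, and the question of whether a cited study actually exists never enters the calculation.

```python
import random

# A toy stand-in for a language model: a table mapping a context to
# possible continuations with learned probabilities. Everything here is
# hypothetical and invented purely for illustration.
TOY_MODEL = {
    "A 2019 study published in": [
        ("the Journal of Educational Psychology found...", 0.5),
        ("Computers in Human Behavior found...", 0.4),
        ("a journal that does not exist found...", 0.1),
    ],
}

def sample_continuation(context: str) -> str:
    """Pick a continuation weighted only by how plausible it sounds.
    Note there is no fact-checking step anywhere in this process."""
    phrases, weights = zip(*TOY_MODEL[context])
    return random.choices(phrases, weights=weights, k=1)[0]

prompt = "A 2019 study published in"
print(prompt, sample_continuation(prompt))
# Every output reads smoothly, yet roughly 1 run in 10 "cites" a
# journal that was never real.
```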
An infamous example of the trouble GenAI hallucinations can cause came last year, when two New York lawyers were formally sanctioned by a court after they submitted a legal brief containing at least six fake legal citations that ChatGPT had created out of thin air. While the judge in that case said there is nothing “inherently improper” about lawyers using AI “for assistance,” the lawyers crossed the line because they have “a gatekeeping role…to ensure the accuracy of their filings.”
Schools Should Guard Against AI Falsehoods
Schools rely on the integrity of data to support decisions, establish policies, and maintain trust with educators, parents, and the public. When school leaders use AI tools to draft policies or carry out other administrative functions, they risk incorporating false information into official documents. The Alaska incident underscores that, without clear protocols and robust oversight, AI-generated content can become a source of misinformation.
5 Best Practices for Schools
To avoid similar pitfalls, school administrators should consider adopting the following practices:
- Develop an AI Usage Policy: Ensure you have a clear, formal policy outlining when and how AI tools can be used for administrative tasks. This policy should require your employees to disclose when they use AI to create any new document.
- Require Human Verification: Implement mandatory review procedures so that any document created with the help of AI is subjected to thorough fact-checking by qualified personnel before dissemination. Involving multiple reviewers who double-check all links and other statements of fact helps ensure that fabricated content is caught and corrected; a simple link-checking script, sketched after this list, can support but never replace that review. There is no replacement for sound human judgment.
- Train Staff on AI Literacy: Increase awareness and training for your school staff on how generative AI works, its benefits, and its limitations. Emphasize that AI is not a replacement for search engines – even as its search features become more robust – and cannot be fully trusted for sourcing accurate information.
- Incorporate Trusted Sources: Reinforce a preference for human-curated, peer-reviewed sources to support claims made in your official communications.
- Monitor Technological Developments: Stay updated on advancements in AI and related best practices to adapt as tools evolve. This includes awareness of new checks or tools that help verify AI-generated citations.
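To make the human-verification step above concrete, here is a minimal Python sketch that flags cited hyperlinks that do not resolve. The URL shown is a placeholder, and a live link is only a first filter: a human reviewer must still confirm that the page actually matches the citation, since a hallucinated reference can point to a real but unrelated page – exactly what happened in Alaska.

```python
import urllib.request
import urllib.error

# Placeholder list of cited hyperlinks; in practice a reviewer would
# paste in every link from the draft document.
CITED_URLS = [
    "https://example.com/study-on-phone-bans",
]

def link_resolves(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL answers with an HTTP success status.
    A True result means only that the page exists, not that it
    supports the claim being cited."""
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return 200 <= response.status < 300
    except (urllib.error.URLError, ValueError):
        return False

for url in CITED_URLS:
    status = "resolves" if link_resolves(url) else "FLAG FOR REVIEW"
    print(f"{url}: {status}")
```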
Conclusion
The Alaska incident is a cautionary tale about the use of generative AI in school administration. While AI can streamline certain tasks, it carries risks that must be actively managed. By implementing clear policies, promoting AI literacy, and reinforcing verification processes, you can harness the benefits of AI while safeguarding the accuracy and credibility of your work.