AI In Education Excerpt 4: Safe AI Usage
This article is the final installment in a series on AI and education by Andrew Rosston. See the previous articles on AI and Plagiarism and AI in Tutoring.
Key concerns about AI have been purposefully left for this final installment of the series. Teachers may have reservations about AI, both in general and in the context of education. Several key issues remain with large language models (LLMs), including algorithmic bias, hallucination of incorrect information, and limitations in available training data. Because so many students already use AI, existing discussion of these issues has partly focused on education.
The use of AI for instruction, tutoring, and research is undermined when the technology struggles to provide accurate information. While comparisons can be made to search engines, digital encyclopedias, and even improperly verified research, each of those sources contains only information written by human hands. An LLM, by contrast, can blend two unrelated sources into a single incorrect answer, producing errors that would rarely occur in human work.
Teachers and students must therefore understand the potential flaws in AI-generated information and always verify the sources and claims behind a response. Some students may adapt to AI technology quickly, but teachers will need to take the lead in correcting improper use of these programs, including plagiarism, citation of unverified AI outputs, and research limited to whatever sources an LLM happens to have available. Despite the complexity of the underlying programs, such pitfalls can often be understood and addressed quite simply.
Specific to education, issues may also arise from specialized uses of AI, such as personalized tutoring and AI-supported instruction. Data privacy is particularly important for students who are minors. Several jurisdictions, including the EU, have passed laws that limit the use of personal data without consent or require that users be informed of how their data is used, but legislation remains inconsistent across the US.
The California Consumer Privacy Act (CCPA), for example, made certain uses of personal information “opt-out” while also requiring that users be informed of many uses of their data. (Notably, the requirements do not apply to nonprofits and government agencies as of March 2025, which may leave some holes.) California has also created additional protections for workers and minors, but these and other states’ policies are still changing, may have gaps in their protections, and do not reflect extended experience with this relatively new field. The CCPA specifically requires consumers under 16 to opt in to the sale of their personal data, with a parent or guardian opting in on behalf of children under 13, though this makes the requirements inconsistent across education levels. Opt-out remains the standard for those 16 and older, including adults, a fact that may not be widely known in California and in jurisdictions with similar policies.
While these policies remain in their early stages, educators need to ensure that their students’ privacy rights are respected and that other forms of misuse, intended or not, are not allowed to slip through.
Andrew Rosston is a Business Analyst at OnlyMoso USA. He holds a B.A. in Business and Managerial Economics from Oregon State University.