
Developing Ethical and Practical AI Guidelines for Schools: Insights from an Educator and a Student

Apr 22, 2025


Artificial Intelligence is rapidly reshaping the educational landscape, offering opportunities for personalized learning, efficiency, and accessibility. However, alongside these benefits come challenges, particularly in defining ethical and practical guidelines for AI use in schools. To understand how schools are navigating this transformation, we spoke with Lauren Ludwig, a technology integrationist leading AI policy development at her school, and Soham Bafana, a student at Dwight-Englewood School actively involved in AI education initiatives. Their experiences highlight the importance of collaboration, adaptability, and clear communication in shaping AI guidelines that serve educators, students and schools.

The Development Process: A Collaborative Approach

Lauren, who spearheaded AI guideline development at her school, took the initiative as part of her role in technology integration. Recognizing the growing influence of generative AI, she began by hosting sessions to introduce her colleagues to AI concepts. These discussions led to the formation of a working group consisting of educators from various disciplines, as well as students.

"We wanted to ensure that our guidelines were rooted in what we were already teaching," Lauren explained. Drawing inspiration from AI guidelines developed in North Carolina, she and her colleagues conducted extensive research before engaging stakeholders in structured discussions. Breakout sessions allowed participants to examine AI from different angles, ultimately leading to tailored guidelines for their school community. They developed separate guidelines; one for students which outlined responsible use, and another which addressed broader ethical considerations.

Soham, on the other hand, approached AI guidelines from a student’s perspective. He became interested in AI during his freshman year at Dwight-Englewood School, when he began learning machine learning for neuroscience. When ChatGPT launched during his sophomore year, he quickly saw both its potential and its challenges in the classroom. He conducted AI workshops for students and teachers, eventually contributing to the development of AI guidelines at his school.

Rather than enforcing a rigid policy, Soham advocated for flexibility. "You shouldn’t have a one-size-fits-all solution," he said. Instead, he believes educators should be empowered to determine AI usage on an assessment-by-assessment basis, leaving room for adaptation across different subjects and assignments.

Key Components and Ethical Considerations

Both Lauren and Soham emphasized the importance of ethical AI use, particularly in ensuring equity and fairness.

Lauren highlighted a key issue: "If we're going to say no AI, then are we also saying no use of tutors, no use of parents, no use of older siblings? Because in some ways, if you're using AI appropriately, that's exactly how it can be used." For her, AI has the potential to level the playing field by making educational support more accessible. However, concerns about data privacy and algorithmic bias also need to be addressed, particularly given the diverse student population at her school.

Similarly, Soham pointed out another critical challenge: the ambiguity surrounding AI use in assignments. "If a teacher says AI isn’t allowed, what does that actually mean?" He stressed the need for clear communication between teachers and students to avoid confusion and ensure fairness in academic integrity policies.

Implementing AI Guidelines: From Theory to Practice

Once the guidelines at Lauren’s school were developed, the next challenge was implementation. Lauren and her team initially created a comprehensive 22-page document; however, early feedback made it clear that something more concise was needed. They condensed the information into one-page summaries, one for teachers and another for students, to make the guidelines more digestible and accessible.

To introduce the guidelines, Lauren and her colleagues hosted a three-hour workshop featuring a 45-minute overview followed by hands-on activities. "Writing a good AI prompt takes skill and it should take longer than 60 seconds," Lauren noted, emphasizing the need for professional development in AI literacy. By focusing on practical applications, they ensured that educators felt confident integrating AI into their teaching while maintaining academic integrity.

Additionally, as a senior at Dwight-Englewood School, Soham took a student-led approach to AI policy implementation. Recognizing the importance of student voices in shaping AI guidelines, he delivered presentations to the entire English department, then expanded his efforts to include other faculty and staff. His goal was to provide insight into how students were actually using AI: what real-world applications looked like, how widespread the technology was, and what challenges needed to be addressed in the classroom. By sharing a student perspective, he helped educators make more informed decisions about AI policy.

Beyond his own school, Soham is the co-founder of Students for Innovation, an organization that empowers students to advocate for education and AI innovation. Its mission is to take the resources and lessons they have created and share them with other students, empowering them to drive similar changes in their schools. "My original goal was not to advocate for or against AI," Soham explained, "but simply to provide knowledge of what the technology is actually doing and dispel common misconceptions that many teachers have." His work, particularly with English teachers, underscored the need for clear and well-informed guidelines that support educators and students in navigating AI’s role in the classroom.

Challenges and the Path Forward

Despite progress, both Lauren and Soham acknowledged challenges. One major hurdle was a lack of familiarity with, and fear of, AI among some educators; the growing equity gap in AI education remains another. Many schools lack the resources or training needed to help teachers integrate AI into their classrooms.

As Soham suggested, "A teacher not wanting to learn about AI may not be an excuse, but a teacher not having access to professional development is an equity problem." Ensuring that all educators have free or affordable professional development opportunities is crucial to preventing AI from widening existing educational disparities. To support this effort, Soham is organizing a free virtual summit this May where students will present to teachers, advocating for AI in education and innovation. For registration details, visit Students for Innovation’s Summit page and check out their video.

Another pressing concern is the reliability of AI detection tools. "A 2-5% error rate on an AI checker might not seem like much, but if that means 50 students aren’t getting their diploma because of a false positive, that’s a huge issue," Soham pointed out. Misuse of AI detection tools can have real and damaging consequences, which underscores the need for balanced and well-informed guidelines.
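To put that concern in concrete terms, here is a minimal back-of-the-envelope sketch of the math behind Soham's point. Only the 2-5% error rates come from his quote; the submission volumes are assumptions chosen purely for illustration. At a 2% false-positive rate, roughly 2,500 checked submissions would be enough to wrongly flag about 50 students.

```python
# Back-of-the-envelope sketch: how a small false-positive rate scales with
# the number of submissions checked. Submission counts are hypothetical;
# the 2% and 5% rates reflect the range mentioned in the interview.

def expected_false_positives(num_submissions: int, false_positive_rate: float) -> float:
    """Expected number of honest submissions wrongly flagged as AI-generated."""
    return num_submissions * false_positive_rate

if __name__ == "__main__":
    for rate in (0.02, 0.05):                 # 2-5% error-rate range
        for submissions in (1_000, 2_500):    # assumed school-wide volumes
            flagged = expected_false_positives(submissions, rate)
            print(f"{submissions} submissions at a {rate:.0%} false-positive rate "
                  f"-> ~{flagged:.0f} students wrongly flagged")
```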

Looking Ahead: Adapting to an Evolving AI Landscape

Both Lauren and Soham agree that AI guidelines must remain dynamic. "This is ever-evolving, and no document can be fully comprehensive," Lauren acknowledged. By keeping the guidelines broad enough to be adaptable, they hope to ensure their longevity and relevance.

For schools looking to develop their own AI guidelines, their advice is clear: involve a diverse group of stakeholders, including skeptics. "You need to have the ‘anti-AI’ voices in the conversation," Lauren stressed. Giving teachers time to experiment with AI tools and fostering open dialogue between students and educators are also crucial. "Students should feel comfortable asking, ‘Is this considered cheating?’ without fear," she added.

As AI continues to reshape education, the goal isn’t just to regulate its use; it’s to empower educators and students to navigate this new landscape thoughtfully, insightfully, and ethically. As Soham puts it, "Rather than simply giving teachers a list of AI rules, we should empower them to take ownership of AI in their classrooms. By helping educators understand the technology and its applications, we can give them the agency to make informed decisions about how to use it in their teaching."

Key Recommendations for Educators Developing AI Guidelines

  1. Adopt a Collaborative Approach – Involve educators, students, and key stakeholders in discussions to ensure guidelines reflect diverse perspectives.

  2. Prioritize Ethical Considerations – Address issues like data privacy, algorithmic bias, and equitable access to AI tools.

  3. Encourage Flexibility – Avoid rigid, one-size-fits-all guidelines; instead, allow teachers to determine AI use based on context.

  4. Communicate Clearly – Ensure that students and teachers have a shared understanding of AI guidelines to prevent confusion.

  5. Offer Professional Development – Provide accessible and affordable training to help educators integrate AI effectively, seamlessly and responsibly.

  6. Simplify Implementation – Condense guidelines into digestible formats and provide practical examples.

  7. Stay Adaptable – Continuously revisit and update AI guidelines to reflect new developments and challenges.

By following these principles, schools can develop thoughtful AI guidelines that support students and educators in navigating this evolving technology.

 
