Navigating AI Policies: Impact on Student Mental Health at EDUCAUSE ’25


NASHVILLE, Tenn. — The rise of artificial intelligence (AI) in education is causing a stir. Many schools have banned AI use and turned to detection tools, leaving students anxious about being wrongly accused of relying on AI.

At the recent EDUCAUSE conference, Ashley Dockens from Lamar University and Cindy Blackwell from Texas A&M discussed this issue. They highlighted how unreasonable expectations placed on students can lead to misunderstandings about AI use. “We expect them to always make the right choice, but we all stumble sometimes,” Dockens pointed out. “If they can’t learn from their mistakes in a safe environment, where can they?”

Students often struggle to determine when AI help is acceptable. Many are confused by inconsistent policies across different classes and assignments. Dockens shared her own experience: her dissertation was flagged as 98% AI-generated, even though it was written before AI tools existed.

While some institutions have banned AI detection software, many still rely on tools such as Turnitin, which reportedly has more than 16,000 users worldwide. With concerns about AI cheating running high, students face serious consequences for misunderstandings: some have lost their visas or landed in national headlines over such accusations.

Dockens noted that traditional-age college students may not fully grasp the implications of using AI because their judgment and impulse control are still developing. Pressure from school and work makes quick help tempting, and students often rationalize their choices, thinking things like "everyone is doing it" or "I'm too busy." That insight can shape better guidelines around AI use, favoring supportive approaches over punitive ones.

So, what can be done? Clear policies are essential. Dockens and Blackwell suggest a tiered approach, where broader institutional policies set the stage, and more specific rules address different disciplines. Involving students in these discussions can add valuable perspectives. Transparency is also key. If teachers use AI but restrict students from doing so, students may feel betrayed.

Instructors can approach AI with an open mind, adjusting their teaching to encourage active participation. For instance, Blackwell recommends having students reflect on their work before they see their grades. This practice helps foster self-assessment and deeper engagement.

Additionally, students could keep logs of their research processes, which would help clarify their thinking and protect against accusations of improper AI use. By prioritizing a safe learning environment, schools can shift the focus from punishing students to offering opportunities for growth, such as workshops on AI ethics or chances to redo assignments for partial credit.

Ultimately, it’s vital to remember the human aspect of education. As Dockens puts it, “Empathy is needed.” Understanding students’ struggles with AI can create a more supportive learning atmosphere.

As technology continues to evolve, schools will face new challenges and opportunities. Staying adaptable and empathetic will be crucial for helping students navigate this landscape.

