Cal State Partners with OpenAI: Why Some Students and Faculty Are Choosing Not to Use It

In 2025, California State University invested $17 million in OpenAI to provide its campuses with unlimited access to a specialized version of ChatGPT. The aim was to help students learn about and integrate artificial intelligence (AI) into their education. However, this announcement surprised many faculty and students, who felt unprepared to navigate the ethical implications of using such technology.

Concerns about academic integrity quickly arose. To combat potential cheating, some professors reverted to traditional testing methods while others tried to incorporate ChatGPT into their curricula. This shift has left students feeling confused about the appropriate use of AI in their studies.

A recent survey found that over half of the faculty believed AI negatively affected teaching, while a significant number of students felt they were not adequately taught how to use AI responsibly. Interestingly, 64% of respondents acknowledged that AI had positively impacted their learning experiences.

As the contract with OpenAI nears its end in July, discussions are intensifying. A petition at San Francisco State University has called for an end to the partnership, raising concerns about the direction of AI use in education. Assemblymember Mike Fong has introduced legislation aimed at ensuring that colleges provide necessary training on AI technologies. Fong has pointed out a pressing issue: without consistent training, concerns around data privacy and academic fairness remain unaddressed.

Students like Katie Karroum, who is involved with the California State Student Association, expressed frustration over the lack of communication regarding the AI deal. Many students feel left out of discussions that directly affect their education. Karroum stated that more inclusion in decision-making is crucial, adding that students have valuable insights into using AI effectively.

Meanwhile, faculty face the challenge of creating clear policies. Some professors have integrated AI into lessons, guiding students on how to use tools like ChatGPT responsibly. Others, however, feel that AI usage should be carefully monitored to prevent misuse that could hinder learning. Ryan Jenkins, a philosophy professor, allows AI as a resource but emphasizes that critical thinking should remain central to students’ academic experiences.

Student feedback indicates a divide in how professors view AI. Some embrace it, while others caution against its use, particularly in assignments. For instance, Emily Callahan, dean of students at Cal State Bakersfield, has seen a rise in reports of improper AI use, reinforcing the need for consistent guidelines across the system.

Support systems are also evolving. Initiatives like the AI Writer Toolbox at San Jose State University help students learn to work with AI responsibly. This toolkit offers guidance on ethical AI use and helps students disclose their AI interactions in assignments.

As AI technology continues to advance, educational institutions must find ways to integrate it into learning without compromising academic integrity. The conversation around AI is ongoing, and with student engagement at the forefront, a balanced approach may lead to more effective learning environments in the digital age.

For more information on California’s AI initiatives, visit the AI Commons website.
