Annual Symposium Fuels Dialogue on Artificial Intelligence at CSU, Beyond
CSU’s recent AI Symposium at the Wolstein Center brought together educators, researchers, students and industry professionals for a one-day event featuring keynotes, panels and hands-on sessions.
The program focused on moving beyond curiosity to the intentional integration of artificial intelligence in higher education, including AI literacy, academic integrity, workforce readiness and responsible adoption.
Insights into the Evolving Role of Artificial Intelligence in Academia
Dr. Liza Long, the symposium’s keynote speaker and director of digital learning and AI for the Idaho State Board of Education, framed artificial intelligence as a pivotal moment for higher education — comparable to the printing press — while emphasizing the need to use the technology thoughtfully to expand human potential.
Long delivered an address titled “The AI-Powered Renaissance: Reclaiming the Universal Scholar,” urging educators and students to rethink the purpose of higher education in an era shaped by artificial intelligence.
“It’s a very transformative technology,” said Long. “It will change the way the world works.”
Drawing on her experience as a teacher, student and policymaker, she also noted tensions in classrooms.
“When I think about what our students are navigating, I’m living it,” she said. “I go from one class where a teacher is super AI-enthusiast … to a class where a professor says, no use of AI ever, ever, for any reason.”
Central to her message was the concept of the “universal scholar” — a return to a more holistic model of education that prioritizes human development over job preparation alone.
“The purpose of education is to create a whole human,” said Long. “It’s to foster human flourishing.”
Long also addressed the uncertainty many educators feel, pointing to a growing gap between usage and confidence, and noted the need for better training and shared standards. Looking ahead, she emphasized that AI literacy is becoming essential in the workforce but warned against overreliance.
“I do worry that anyone who tries to use these tools as an easy button is using them wrong.”
She also described AI as a thought partner.
“That ability to ask interesting questions… this is what I personally love about working with AI,” said Long.
Symposium Engages Attendees with Immersive Sandbox Sessions
During a sandbox session, presenters showcased AI tools as attendees rotated through stations for hands-on exploration. Rachel Rickel, a visiting assistant college lecturer in first-year writing, led “Using, Assessing, and Citing,” focusing on ethical AI use in academic writing and emphasizing transparency.
“Usually the easiest one is just a disclosure statement,” said Rickel. “[Students] should say, ‘hey, I’ve used AI in some ways.’” She suggested students explain their use in a cover letter or cite it in MLA or APA style.
Rickel also raised concerns about authorship, originality and data privacy.
“When you’re putting your work into this, you are training the algorithms on your work too,” she noted.
She acknowledged uncertainty around balancing AI use with authentic learning and the limits of detection tools.
“We don’t sadly… know if they learned anything,” she said. “That’s emotionally weighty on us for sure, especially in situations that would be with the health care field.”
She encouraged a balanced approach, treating AI as a tool rather than a substitute for learning.
The symposium featured breakout sessions, including “AI Under the Hood” by Professor Stefan Andrei of the CSU Center for Computing Education and Instruction, who examined how generative AI tools are designed, how they differ and the ethical considerations, power and risks associated with the technology.
“This is the danger. It’s so vast,” said Andrei. “There are millions of people's contributions addressed to whichever Gen-AI tool you are using.”
He noted the vast amount of data and contributions behind each AI-generated response. While discussing its usefulness, he reminded the audience that AI is not human, explaining that it operates without emotion and relies on probability rather than true understanding.
Student and Instructor Uses and Beliefs About GenAI in Higher Education
A breakout session examined how students and instructors use and perceive generative artificial intelligence in education. The session was led by Professors Karla Hamlen Mansour, Selma Koc, Anup Kumar and Richard Perloff, along with Allyson Lindsley, an ABD doctoral student in urban education.
Funded by the Wolfgang Family Fund for Innovation, the study was a collaboration between the School of Communication and the School of Education and Counseling.
“The purpose of the study was to lay the groundwork to better understand the dynamics of students and instructors’ use of and perceptions of generative artificial intelligence in educational settings,” said Hamlen Mansour, a professor of research and assessment in the School of Education and Counseling at CSU.
Using focus groups and survey data from 385 students and 305 instructors, researchers examined how often AI is used and how ethical various uses are perceived to be, finding that students and instructors generally agree on which uses are ethical.
“Instructors find most of them to be significantly less ethical than students do, however,” said Lindsley, a CISP international education advisor at CSU.
The study identified two key ways students view AI: as a “helper” that supports tasks such as research and editing, and as a “replacement” that completes assignments or other work on their behalf.
Sustainability and AI: Impacts on the Environment
For CSU Sustainability Collective co-founder Mandi Goodsett, the AI Symposium highlighted the environmental costs of artificial intelligence. She noted that while generative AI tools may seem harmless individually, their collective impact is significant.
“People are concerned about water and energy use that are necessary to create anything with generative AI,” she noted, pointing to the growing demand for data centers.
“These data centers get really hot,” she said. “They need water to cool down. They also have to use municipal water that’s already been treated. Some data centers don’t use water; they use air conditioning, creating a tradeoff between water and energy consumption.”
The scale of use makes the issue urgent.
“It seems like an everyday person just doing a chat probably isn’t using that much water or energy. But when you multiply that by many interactions with millions of people, it starts to become kind of a collective action problem,” said Goodsett.
She also raised concerns about equity and transparency, noting that many AI companies are not fully open about their environmental footprint and that communities already facing water insecurity often bear the burden of increased resource use and infrastructure. Still, she said she does not advocate abandoning AI, instead stressing awareness and accountability.
“It doesn’t mean we should never use generative AI. It just means that we should probably be aware of its environmental impact,” said Goodsett. “When appropriate, we should push back or advocate to our government to make sure that these new tools align with our values.”
Reflecting on the symposium, she underscored uncertainty about AI’s future, noting that conversations about sustainability must keep pace with innovation.
“Nobody has all the answers,” said Goodsett. “Everybody’s trying to find their way through with this very new technology. We’re not totally sure about all the impacts.”