Despite artificial intelligence's (AI) impact across campus, uncertainty still surrounds its use in preparing research communications. With a 2024-25 Miller Faculty Fellowship awarded through the Center for Excellence in Learning and Teaching, the Graduate College's Center for Communication Excellence (CCE) went looking for more concrete answers.
With its $48,000 award, center staff developed instructional materials and a two-session workshop to help graduate students complete literature reviews. They helped novice researchers develop core AI competencies to evaluate outputs, determine responsible use and be better prepared to use AI tools in the future. From summer 2024 to spring 2025, the workshops demonstrated the use of Microsoft Copilot and Elicit, an AI tool designed specifically for the literature review process.
"The research our graduate students produce, once it is out there, it doesn't matter if it has been one year or 20, it is becoming easier to check the ethicality of what they have done," said CCE assistant director Lily Compton. "The record is made public forever, and any questions about the ethicality of the work can negatively impact the student's reputation as well as Iowa State's."
Guiding workshop participants in the use of AI tools, CCE staff emphasized three key areas:
Algorithmic awareness: Understanding that computer programs that make decisions or recommendations influence what we see, do and experience online.
Procedural competence: Knowing how to follow the correct steps to complete a task successfully.
Evaluative judgment: Being able to make a well-informed decision about the quality or value of something, based on clear criteria and reasoning.
Testing AI
Knowing how to write successful prompts and where AI looks for information is important, so workshop participants were first introduced to fundamental AI terminology. Compton said AI literacy is the foundation of using the technology effectively.
The team created a framework to help determine whether AI is appropriate for a given piece of scholarly work. It weighs the effectiveness, efficiency and ethicality of AI use to determine whether a tool is viable for the task. With it, users can consider factors such as a tool's ability to perform the task, its biases and the possibility that it produces false or fabricated information.
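Purely as an illustration (the CCE's actual framework materials are not reproduced in this article), the three E's could be written down as a simple checklist a researcher works through before using a tool. The criterion names below come from the framework described above; the specific questions are hypothetical examples, not the CCE's wording.

```python
# Hypothetical sketch of a "three E's" checklist for deciding whether an AI
# tool is appropriate for a scholarly task. Criteria mirror the framework
# described above; the questions are illustrative only.

CHECKLIST = {
    "effectiveness": [
        "Can the tool actually perform this task (e.g., find relevant, real sources)?",
        "How often does it fabricate information or citations for this kind of query?",
    ],
    "efficiency": [
        "Does it save time compared with doing the task manually?",
        "How much effort is needed to verify and correct its output?",
    ],
    "ethicality": [
        "Is this use allowed by my advisor, instructor and publisher policies?",
        "Can I transparently disclose how the tool was used?",
        "Could the tool's known biases distort my literature review?",
    ],
}

def review(task: str) -> None:
    """Print the checklist for a given task so a researcher can answer each question."""
    print(f"Assessing AI use for: {task}")
    for criterion, questions in CHECKLIST.items():
        print(f"\n{criterion.upper()}")
        for question in questions:
            print(f"  - {question}")

if __name__ == "__main__":
    review("screening abstracts for a literature review")
```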
"By the end, the grad students were able to talk more in depth and confidently about their AI use, which is key, because we want them to have transparency with their advisors and instructors," said Kristin Terrill, senior program specialist for graduate scholarly activities in the college.
Questions for faculty
Graduate College associate dean and CCE director Elena Cotos said the project raised several questions faculty who use AI in their courses should ask, including:
What do I expect from the students using or not using the tool?
Do I want them to be aware of how the algorithm works before they engage?
Do I want them to learn the how-tos so I don't have to reteach them?
Do I want them to be able to make judgments as they see the outputs?
"When using AI, faculty should be thinking about setting expectations for students and how we will evaluate them," she said.
Micro-credential
The Graduate College now offers the Mapping Scholarly Literature with AI micro-credential, designed for graduate students and working professionals. It consists of two in-person workshops and asynchronous self-guided content that introduces learners to basic generative artificial intelligence (GenAI) concepts and demonstrates an application of GenAI for mapping and classifying scholarly research on a topic.
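The article does not detail the micro-credential's exercises, so the following is only a rough sketch of the general idea of using a generative model to map and classify papers by theme. Here `ask_llm` is a hypothetical placeholder for whatever GenAI tool a learner connects to (Copilot, Elicit or another), not a real library call.

```python
# Hypothetical sketch of "mapping" a set of papers by theme with a generative
# AI model. `ask_llm` stands in for any chat-style GenAI interface.

from collections import defaultdict

def ask_llm(prompt: str) -> str:
    """Placeholder for a call to a generative AI tool (Copilot, Elicit, etc.)."""
    raise NotImplementedError("Connect this to the GenAI tool you are using.")

def classify_abstract(abstract: str, themes: list[str]) -> str:
    """Ask the model to assign one theme label to a paper abstract."""
    prompt = (
        "Classify the following abstract into exactly one of these themes: "
        f"{', '.join(themes)}.\n\nAbstract: {abstract}\n\n"
        "Reply with only the theme name."
    )
    return ask_llm(prompt).strip()

def map_literature(abstracts: dict[str, str], themes: list[str]) -> dict[str, list[str]]:
    """Group paper titles by the theme the model assigns to each abstract."""
    themed = defaultdict(list)
    for title, abstract in abstracts.items():
        themed[classify_abstract(abstract, themes)].append(title)
    return dict(themed)
```

In keeping with the competencies emphasized in the workshops, any labels a model produces would still need a researcher's evaluative judgment before they inform a literature review.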
"This also would be a good offering for undergraduates interested in research and those who have to write research papers," Cotos said. "Aligning with ISU's strategic aspiration to foster lifelong learning, the micro-credential provides professional development opportunities for the broader ISU community."