This article is part 3 of a 3-part series on the CLT Symposium session Exploring Dimensions of Faculty Use of AI in a Liberal Arts Context. Part 1 focused on using AI to design syllabi (link), and Part 2 on giving feedback (link). This part focuses on using AI to create assessments. The article reports on student-faculty-staff discussions of a fictional scenario. We share the scenario and a summary of the reflections, then provide some recommended resources for faculty. We also wrap up the series with some general reflections and ways forward.
Scenario 3: Designing Assessments
Description
A professor is teaching a course for the first time and has very little experience creating quizzes. They enter the course topic into an AI tool, ask it to create 20 quiz questions at varying levels of difficulty, and then take the output as is, print it, and distribute it in class.
Findings
Consequences
- Positive: It can save time, and the resulting quizzes can be easy to grade. It can help a professor get started on a daunting task, and it could produce good questions if the relevant references are available to the AI tool. The quiz should still be relevant and related to the course content. Student self-assessment becomes possible, and it is a good option for building a question bank for multi-section courses where a large number of questions is needed.
- Negative: AI-generated assessments may be generic. Relying on AI bypasses the instructor's own critical thinking, so the instructor ends up not developing professionally. AI tools take time to train before they can save time, and students can easily get the answers from the same tools. The quiz results may not accurately reflect what was covered in class, and AI output may not capture the idea of different difficulty levels: difficulty levels need to be defined, and it is unclear whether AI-generated levels match cognitive skills or content rigor. AI use might also create trust issues between professors and students, and it could harm a teacher's reputation if they become known for relying on AI tools.
Rights
- Teachers’ right to learn how to get the most accurate results from AI tools, and to learn the appropriate tool for each task
- Students’ rights to know which AI tools are being used and how
- Students’ right to be challenged by assessments; one student said that AI-generated test/quiz questions are often not very challenging because they mirror the course content exactly, just with different language and structure
- Students with disabilities or learning difficulties may not have their needs met [this may happen with or without AI use, though]
Duties
- Duty of the professor to learn about AI before using it. Many AI tools are unreliable, and teachers need to know these issues, e.g. ChatGPT fabricating references; most tools hallucinate, and some are very repetitive
- Professor’s duty to be transparent about AI use
Equity considerations
- Students with access to a Pro (paid) version of an AI tool could get better answers to the quiz than those with a free version
Possible guidelines
- Teachers should expand the reference list to avoid repetition in quiz generation.
- Teachers could start with co-design for the course, and add the AI use and guidelines in the syllabus.
- Faculty members could focus their assessment on self-reflection, challenging students with the “Who you are” discussion, i.e., challenging students and opening space for them to become critical thinkers.
- Teachers tend to over-assess their students. The focus should not be on assessment but on learning and engaging in authentic discussions and topics. The mindset that “I need to assess everything” should change, so we need a regulatory tool for assessment.
- Scores/grading might put pressure on instructors (and students). The institution/department can consider “ungrading” or other forms of grading.
- Instructors should not be forced to use AI in classes if not necessary.
Recommended Resources
CLT recommends that faculty members read:
- Laila ElSerty and Rania Jabr’s article Building Teacher Confidence by Making Responsible Choices to Teach Using AI, which has a section on using AI for creating quizzes.
- the University of Pennsylvania’s resource Using Generative AI to Create Assessments, which provides guidance on using AI to design effective quizzes, exams, and evaluative rubrics.
- this “prompt library” for educators with a section for prompts related to creating assessments with help from AI.
Conclusion from All Three Parts and Ways Forward
We have now published all three parts of this series, covering the use of AI for syllabus design, giving feedback, and creating assessments.
At universities worldwide, we have spent the last few years developing guidelines for student use of AI, but there has not been enough discussion of what constitutes appropriate and ethical AI use by educators. The conversations at the symposium created a safe space for students and faculty to share their views and concerns about AI use by faculty members, and showed us at CLT that levels of critical AI literacy campus-wide were quite high: generally speaking, each table had participants who were aware of AI hallucinations and the need to always revise AI outputs, as well as concerns about data privacy and bias when using AI tools. Participants seemed aware of the limitations of AI and its potential harms, while acknowledging ways it might be helpful in some contexts.

Participants also noticed that although AI might help faculty members save time, and guardrails might be placed to protect rights, there are structural issues that may lead someone to use AI in the first place. Those issues (in the fictional scenarios: large class sizes, last-minute assignment of courses, and lack of support for new/junior faculty) can be addressed in ways that may remove the temptation or need to use AI altogether. Some tables emphasized the human relationship between educators and students and recommended alternatives such as faculty partnering with students to co-create, ungrading, or using approaches like peer review rather than purely resorting to AI. Across all groups, proposed guidelines centered on transparency: not only declaring AI use, but also explaining why a faculty member would use AI, having open conversations with students, and requesting consent when students’ own data might be input into an AI tool.
Moreover, several groups discussed the importance of learning to use the appropriate AI tool in the appropriate place, with careful prompting, training of the tool, and critical evaluation of the output for accuracy and bias.

