This article is Part 2 of 3 on the CLT Symposium session Exploring Dimensions of Faculty Use of AI in a Liberal Arts Context. Part 1 focused on using AI to design syllabi (link). This is Part 2: Giving Feedback. The article reports on student-faculty-staff interactions around a fictional scenario. We share the scenario and a summary of the reflections, then provide some recommended resources for faculty.
Scenario 2: Giving Feedback
Description
A faculty member is used to teaching a class of 15 students, but this semester a decision was made to increase the class size to 30. The class has many writing-based assignments, and the professor does not have time to grade them all, so they upload their rubric to an AI tool, then upload each student's assignment and prompt the AI to provide feedback based on the rubric. They do this only for the first draft of each assignment; the professor reads and grades the second draft personally.
Findings
Consequences
- Positive: saves the professor time, provided the feedback turns out to be good
- Negative: the AI tool might misunderstand or misinterpret the rubric, or make mistakes and hallucinate, which is unfair to students; grading may carry severe biases; the AI can easily miss important information and points if not prompted correctly; it may be unfair to students since AI can give widely varying feedback on identical work; AI can produce many errors, and how do we know the faculty member is using a good prompt?
- Negative: the professor may not be able to grade the second draft well or fairly, having handed all first-draft feedback to the AI; the second draft may be graded using different criteria, which would be problematic; how do students know the AI feedback will help them earn a good grade? It is not the student's fault if the AI feedback was not comprehensive
Rights
- Students have a right to an expert human reviewing their work; they can already run their own work through AI for feedback themselves
- Students have rights over their own data and how it is used: some students' personal data or reflections are now forever in the AI platform's database, which can use them for its own training. For some assignments the ethics of this are more complex (e.g. when students' reflections are very personal)
- AI feedback means the feedback students receive is not personalized. Students have a right to some consideration of their personal situations.
Duties
- Duty of the professor, as an expert, to give students personal feedback
- Importance of instructors establishing rapport and personal relationships with their students. Automated feedback defeats this purpose.
- Faculty’s duty is to grade student work and provide guidance and feedback
- Faculty duty to thoroughly review and revise the feedback the AI gives (if they insist on using it in this scenario)
- Duty of the professor to be transparent about the use of AI in the syllabus; the instructor also needs to share the prompt with students and confirm that it matches the grading rubric.
- Duty of professor to get permission from students before putting their work through AI
- There is an invisible contract between students and their professors: students expect to receive personalized feedback from professors. Otherwise, why go to university in the first place?
- Instructors need to inform administrators first that this is how they will be giving feedback. Students can then go to the administration if they have an issue with it.
Equity considerations
- Students with accommodations are assessed the same as those without, which is very unfair
- Students with experience using AI for feedback reported that the same AI tool gives different feedback when the same assignment is run again, and that different AI tools give different results.
Possible guidelines
Solutions that avoid AI use:
- Address the underlying administrative issue (why are we doubling the class size in the first place?)
- Have the first draft be peer reviewed, so that faculty can get a quick sense of each paper before grading, which could save some time
- Request two TAs if one is not enough to grade all quantitative work and handle logistical questions, so that faculty have time to do the real grading and feedback
- Consult experts (e.g. CLT members?) about how faculty members can change the assignment from a paper to a presentation or another format that fulfills the same learning outcomes while requiring less grading time from faculty
- It is important that the first draft specifically be considered and graded by the professor themselves, so that progress can be assessed later.
- Feedback needs to explain not only how the assignment can be improved but also why a given change to the student's work matters.
- The professor-student relationship, especially around feedback, needs human interaction.
- The purpose of the writing assignment matters; maybe the instructor needs to change the grading criteria altogether.
Guidelines if AI is going to be used:
- Primarily: the professor is responsible for giving personal feedback rather than relying on AI feedback; if they do use AI, they should not use its output as is without adding their own input
- The professor should be transparent in the syllabus if using AI: share the prompt, explain how grading will take place, and clarify why AI is being used
- The professor must get permission from students before using AI for feedback
- Maybe AI feedback is more suitable for later drafts, rather than first.
- Maybe use AI for proofreading and to give feedback on structure, but not on content and ideas: acceptable if used for feedback on mechanics, punctuation, or grammar
- Instructors need to learn to write AI prompts effectively. Could workshops and sessions be offered?
- Any AI tool used needs to be fully briefed on the syllabus, course content, and the instructor's grading approach
- Combine AI feedback with peer feedback, as in the PAIRR approach. (AUC instructors Rania Jabr and Laila ElSerty have tried this)
Recommended Resources
CLT recommends that faculty members read:
- Marc Watkins’ article outlining the dangers of grading with AI, which we republished recently.
- Laila ElSerty's and Rania Jabr's article Building Teacher Confidence by Making Responsible Choices to Teach Using AI, which has a section describing the PAIRR approach of combining peer review with AI feedback.
- The UC Davis University Writing Program's page on the Peer & AI Review + Reflection (PAIRR) approach
Stay tuned for Part 3 on using AI to create assessments.

