Examinations: Are We Really Assessing Our Learning Outcomes?

I always ask myself this question: am I really assessing my students' learning outcomes fairly and effectively through examinations? The answer never fails to put me ill at ease!

We place our students under a tremendous amount of stress while pretending to have a successful way of judging who deserves an A or an F based on such a method of assessment: exams. In my personal opinion, one of the most important criteria of any assessment is the feedback the student receives after submitting it. This feedback allows the student to correct his/her path and do better in his/her learning track. Indeed, what form of feedback other than grades does the student receive after his/her final exam? And how can he/she correct his/her path; isn't it too late?

But first, let us look at how we currently conduct examinations; I will focus on finals. The extreme case is the final examinations offered to large groups of students in many of our universities
(note: the following bullets do not apply to AUC). A successful system is adopted for

  • printing the exams and submitting them to what is called the “control-room”,
  • delivering the exam to the students in large examination halls at a specific date/time,
  • collecting the answer sheets and hiding the students’ IDs from the instructor,
  • grading the exams and returning them to the control-room,
  • summing the grades with other year-work activities, and
  • presenting the final grades to the students.

A successful system which has no objective whatsoever and merely indicates mistrust in the ethics and integrity of instructors. No feedback, other than grades, is given to the students, and everyone still thinks that this is a very successful method of measurement and assessment; but what does it really measure and/or assess?

Moving to a higher level, let us talk about what is currently followed at AUC and called final examinations. A specific date/time is pre-set for the instructor to deliver the final examination to the students. The worst-case scenario is a closed-book examination; on the other side, we can find what is called a cheat sheet, or even an open-book examination, which may reach what I would call an open-resources (including computer and internet) examination; but the latter are very rare. The instructor marks these finals knowing the identity of each student, so he/she may have a small margin to account for the student’s semester work while marking the final; however, most marking of the final examination is done independently, and its grade is simply added, as a percentage ranging between 20% and 50%, to the other course activity grades.

Once again, the same question still stands: what are we assessing, and what do we/our students gain? Do we assess 16 weeks of classes, studying, activities, etc. in the 2 to 3 hours which is the time slot of the exam? Do we consider the stress we place on our students for an objective which is actually questionable? Can we recall when we were undergraduate students and had to go through 2 weeks of successive examinations to pass a year’s work; wasn’t that a nightmare? No matter what the answer to this last question is, I will use a quote which I heard once and have always adopted: “students are of today, professors are of yesterday”. And I will even re-emphasize my point using John Dewey’s quote: “If we teach today’s students as we taught yesterday’s, we rob them of tomorrow”.

On the path to my proposal, which will reveal itself by the end of this article, I have experienced a new paradigm of examining my students which has proven to be very rewarding. Recently, while attending CLT workshops, I came to believe that what we have learned and excelled in for some time is now (or will soon be) surpassed by something that works better. Thus, an epiphany I encountered during one of these workshops led to what I call the “student-generated examination”. I have tried it over two mid-terms during two successive offerings (Fall 2015 and Fall 2016) of a construction engineering course called Structural Mechanics, and here is the story.

I agreed with my students that each one of them would submit a mid-term examination, with its model answer and marking scheme, by a certain date. We specified the topics that should be covered by this examination. We also set the general specifications for the exam paper, and I asked the students to clearly indicate the distribution of grades for each question based on their model answer and marking scheme. I specified that 50% of the student’s mid-term grade would be based on designing the exam and its model answer.

Then, on the specified date, the examinations (without the model answers, of course) were randomized and distributed to the students. So, every student was asked to solve an examination designed by one of his/her colleagues and submit his/her answer sheet to me. The exam took the open-book/open-resource format, and the remaining 50% of the mid-term grade went to the student’s solution of his/her colleague’s exam.
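(For the technically curious, the assignment step can be sketched in a few lines of code. I used no particular tool for this, so the student names and the simple re-shuffling approach below are purely illustrative; the only rule enforced is that no student receives the exam he/she authored.)

    import random

    def assign_exams(authors):
        """Map each student to a colleague's exam, never his/her own."""
        assignment = list(authors)
        # Re-shuffle until nobody is paired with the exam he/she wrote.
        while any(a == b for a, b in zip(authors, assignment)):
            random.shuffle(assignment)
        return dict(zip(authors, assignment))

    # Illustrative names only:
    print(assign_exams(["Salma", "Omar", "Nour", "Youssef"]))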

It was really fun: no stress on the students, a relaxed atmosphere, and I gained a lot from this experience. For example, I learned from the range of items covered by the students’ questions which aspects of my course are the most important from the students’ point of view, and which topics were overlooked or over-represented. I even learned a lot from the students’ answers to their own questions via the model answer and the marking scheme. This was the strength of such a technique; on the other hand, a minor weakness arose which can easily be dealt with in my next application: simply, the unequal difficulty levels of the examinations.

This was just one trial among others leading to my ultimate goal: cancelling examinations from my courses and replacing them with many other forms of assessment. But that is another story to tell.
