How Does AI Impact Learning? University Education as Mindbuilding 


Imagine a bodybuilder in training, let’s call him Ronnie, who is about to attempt an 800 lb squat. This time, however, his trainer has brought a new tool: the Weightlifter. “With this Weightlifter, your 800 lb squat will be a piece of cake, Ronnie!” Indeed, Ronnie lifts the 800 lbs without feeling the annoying aches and pains in his joints, and his lungs feel full and strong… thanks to the Weightlifter. “You can lift heavier weights than you could ever imagine!” Suddenly, breaking the squat world record doesn’t require years of continuous dedication and sacrifice. The Weightlifter is a wonderful tool! After briefly basking in his success, Ronnie wonders, “Does this tool really make me a better bodybuilder?” 

We face a similar situation in education with the introduction of Artificial Intelligence (AI) software, like Chat Generative Pre-Trained Transformer (ChatGPT). With such tools, students are able to generate (parts of) essays or answers to test questions within a fraction of a second. In a way, this kind of “help” is not new. One could ask a friend or a family member for assistance, or even pay some expert found on the internet. The difference is that the bar to getting external support has been dramatically lowered. Where one previously would have had to find the right expert, agree on a (not insignificant) fee, and wait for the assignment to be delivered, one can now get all the answers immediately and relatively cheaply, or even for free. 

In the case of our bodybuilder, Ronnie, we understand that the Weightlifter does not make him an accomplished bodybuilder. He needs to lift the weights himself. His goal is not to lift the weight because it needs to be displaced (because it blocks a passage, for example); he lifts because he needs to build his muscles. Similarly, students do not write an essay because the professor has a hobby of collecting essays; students write an essay because they need to become better writers and engage with the content of the lessons. A university is not a place to find the easiest route to submitting an assignment; rather, a university is a place for mindbuilders. 

Building a body requires time, dedication, effort, perseverance, and challenges. If Ronnie isn’t sweating at the gym, something is wrong. What would this look like in the practice of building minds? The Swedish psychologist Anders Ericsson found a general principle of learning by investigating what top performers in chess, music, and sports have done to become so successful. He found that they all apply the same form of practice, something Ericsson calls deliberate practice, which is, according to him, “the most effective and powerful form of practice that we know of” (Ericsson & Pool 2016, p. 53). Ericsson formulated several conditions that yield deliberate practice, but I want to single out two features that particularly affect students’ subjective learning experience: 

  1. “Deliberate practice takes place outside one’s comfort zone and requires a student to constantly try things that are just beyond his or her current abilities. Thus it demands near-maximal effort, which is generally not enjoyable” (Ericsson & Pool 2016, p. 228).
  2. “Deliberate practice is deliberate, that is, it requires a person’s full attention and conscious actions” (Ericsson & Pool 2016, p. 229). 

Of course, these principles for learning will be implemented differently depending on whether one studies math, biology, economics, or literature. But for students to become better mindbuilders, according to the best research we have, they need to struggle and be challenged, which is “generally not enjoyable”. 

So if students at home can choose between an activity that is generally not enjoyable and a means to easily circumvent this activity with the same result, namely, to produce answers to an assignment, it seems obvious which choice students will make, especially when misuse is nearly impossible to detect with the tools currently available to teachers. Any appeal to one’s conscience or ethical principles becomes practically futile as long as one can cheat invisibly. Some students may resist this temptation because they are intrinsically motivated or because they adhere to strong ethical principles, but I think it would not be too far-fetched to expect an overall increase in such misuse.

Chess engines are now ubiquitous in professional chess. But chess players do not compete with their own chess engines against one another. Instead, chess engines are used in training situations: for example, one uses the engine to analyze a specific position. In order to gain any benefit, the player first needs to complete the not-so-enjoyable part of trying to find the best move themselves. When the engine spits out the best move, the player can then compare their own solution with the engine’s. But if the chess player has not put in any effort beforehand, any solution generated by the engine is useless to the player’s self-development. 

Chess players are not evaluated by submitting solutions to their chess exercises to a chess university, but by actually playing chess against someone else. University students, on the other hand, are often evaluated by the same activity they need to do to become better: the training becomes the tournament. Maybe by disentangling the training from the tournament, like in chess and bodybuilding, we can find a way that the rise of readily available AI tools will not impair learning. We may disagree on how this can be accomplished, but we should agree that a university aims to develop and cultivate the minds of students, and not produce consumers of pre-trained chat bots. 


Ericsson, A., & Pool, R. (2016). Peak: Secrets from the New Science of Expertise. Boston, MA: Houghton Mifflin Harcourt.

Header Image by Gerd Altmann from Pixabay

Mario Hubert
Assistant Professor, Department of Philosophy


  1. I agree completely, but you left out the most important/interesting part: how do you suggest we use the rising AI tools so that they benefit rather than impair learning? The sooner we come up with ideas and start discussions about this part of the problem, the better. You seem to be suggesting a strategy when you discuss how chess players use AI to better their skills, but I think this needs to be spelled out more clearly.

    1. That’s an interesting question: how can AI benefit rather than impair learning? There is in general a tension between learning and technology: technology (digital or not) is made to take over work from us, either completely or partially; learning, on the other hand, requires that we do the work ourselves. The question is then: how can we use technology so that we learn something better? Or how can we use technology to focus better on a particular learning outcome?

      I often hear the following argument: “We have allowed students to use calculators, so students should also be allowed to use AI.” There is a problem with this analogy. Calculators take the task of calculating away from us; therefore, we do not learn how to calculate by using calculators, and therefore, we do not allow calculators in primary schools, when young kids learn how to calculate (at least, when I was young, we didn’t use them). But later, when students know how to calculate, they are allowed to use calculators for a particular problem, where their main goal is not to learn how to calculate but to solve a different mathematical, physical, or economic problem. The calculator makes it possible for students to focus on this learning outcome, instead of struggling with how to add and multiply big numbers. Admittedly, we could train students to become lightning-fast calculators before they take a class that requires them to apply their calculation skills. But this would take quite a lot of time, and we have judged that this time can be saved by using a calculator, and that the resulting “lack of skill” in calculating large numbers is something students can live with, unless they want to win a championship in Mental Calculation. Calculators are fine for a physics class, but inappropriate for a mental calculation championship.

      The problem with upcoming AI programs is that they are able to write essays and do assignments for students, like human beings do (I assume it is just a matter of time until these programs become really good). So these programs can take over the entire task that students are supposed to do themselves. The barrier to using them for at-home assignments is very low, and it is practically impossible to prove their use in a submitted assignment.

      That’s different in chess. A chess player can’t use a chess engine in a tournament. There are basically only two activities that a chess player needs to do consistently to become good: play chess and do chess exercises. A chess engine can help the chess student analyze games and chess exercises, and it can replace a human opponent in a game. For analyzing games, the chess player needs to understand the situation on the board in order to understand any suggestion from the engine, which requires practice. So a chess engine is more of a teacher-replacement tool, but it can’t fully replace a proper teacher because it can’t explain to the student what is going on on the board. In particular, inexperienced chess players won’t get much from a chess engine.

      The only thing that I can say about the use of AI in teaching is the following: if AI can be of any use to a student, I think its abilities need to be restricted so that it does not complete the whole assignment for the student.

      Please let me know whether this makes sense, and I would be very happy to have your opinion on that.

  2. Very nice perspective, Dr. Mario. I like the balance you bring to the topic. I wonder what a “chess tournament” in writing would look like?

    1. The chess tournament should illustrate the real-world situation where a skill is displayed. In writing, it would be the process of completing a scientific article, a newspaper article, or any other kind of article for a particular purpose. It’s a long process, as very few articles are written in one session. The problem I see with writing assignments for students is that these assignments are both the training tool and the tournament situation.

  3. Thank you very much for this insightful and intriguing piece. It raises questions for me about teaching and learning about literature. If literature courses need to follow the general ILOs in relation to Bloom’s taxonomy, then the question that AI raises in my mind concerns its abilities to affect the processes of reviewing, analysing, and basic researching. But to what extent can it evaluate and design?! This in turn raises another question: is AI actually affecting the assessment processes more than the learning itself? And in that case, how should we go about “disentangling the training from the tournament” in ILO-oriented structures?

    1. That’s an interesting point. I think from chess we can learn that AI can be useful as an assessment tool in teaching. In what way, however, I’m not sure yet. We can also learn from chess that chess engines have not made real teachers superfluous. I think “disentangling the training from the tournament” is a key issue in university teaching. We need to find proper ways for students to practice and then, after enough practice, to apply these skills in some form of test. But at the moment, a lot of practice in university classes coincides with testing, which results in grades, which are recorded in the students’ GPA. I also think that this collapse of practice and grading contributes to the students’ stress, as students may not feel “ready” to be graded.

  4. Thank you for this short but pithy presentation of this as-yet misapprehended digital curve-ball… it sounds like Covid: no one really knows yet what its full implications are going to be.

    I am all for progress, adaptability, and new horizons, but I would like to suggest that we are still looking at the phenomenon while in full control of our thinking faculties, being, as we are, still products of a mental, academic, knowledge-based system of understanding. But what of the younger generations who have not had the chance to reach this stage of critical maturity and comprehension? How will they be able to develop the necessary wisdom to control, rather than endure, such a tool without allowing it to abuse them? How will they know any better?
    First we must ask the question: what is success? If we answer that it is failure, then learning, then failure, then learning, and so on, then AI and the like will be depriving our students of the very opportunity for success by eliminating failure to a great extent. In this case, the ability to succeed is the ability to think for oneself, by going through all the stages of critical analysis. If we let AI control the mental development of our students in even the tiniest way, through the inevitable butterfly effect, wouldn’t we be condemning them to a total manipulation of their minds by an amorphous and unpredictable monster? Somehow those sci-fi horror movies seem so plausible now…

    1. You raise very important points, and I’m fully on board. 1. We may not yet know the full implications of AI. I tried to illustrate some principles of learning that might give some reference for possible implications and applications, but how AI will ultimately be used is, of course, unknown. 2. Failure is crucial to learning, as is success. I think there is a general aversion to failure in university learning experiences. One big factor is the focus on, and the importance of, grades. AI has the potential to further contribute to this aversion, because it can take away crucial work that students need to do themselves.

      1. I agree that grades get in the way! I think grades feed extrinsic motivation. The more intrinsically motivated person is way less likely to take a shortcut, because they’re enjoying the task for its own sake. Grades make failure count, too.

  5. Why would a bodybuilder care about squatting 800 lbs? Do you mean to be talking about powerlifting?

    1. Good point! At least, Ronnie Coleman did.

  6. I strongly agree with the idea that if we can relegate AI to this teaching role for students, in the same way that a chess engine provides a practice role for chess players and bodybuilding trackers automate the collection of important metrics for bodybuilders in order to maximise outcomes (rather than create the outcomes for them), we can make AI a catalyst for students to leverage in the learning environment.

    1. Yes, this would also be one way to incorporate AI in teaching that I can envision. AI could, for example, generate test questions or multiple-choice questions that students can practice with. I see a real advantage for students in doing many small exercises in addition to the larger assignments posed by the professor. Another way would be to use AI in advanced classes, so that the AI does some specific work for students (like a regression analysis, etc.) as part of a task with a different learning outcome than the one fulfilled by the AI (similar to using a calculator or a word processor). In this way, students would be able to focus on complex problems without being sucked too much into tedious work that is only an intermediate step toward the goal. In other classes, however, this intermediate step could be the learning outcome, and there using AI for this task would not be appropriate.
