
In this newsletter, we continue from part 1, sharing the approaches of various AUC faculty to the presence of generative AI in our lives. We have divided the newsletter by school to make it easier for readers to find someone from a similar discipline. Still, you will notice great diversity in how people within similar disciplines have been approaching AI: some encourage use, some set restrictions, some experiment alongside students and promote ethical use, some find ways to restrict uses that can interrupt learning, and many combine all of these. Today’s newsletter is part 2 (you can read part 1 here), covering some faculty from ALA, the School of Business, HUSS, and SSE. We will publish part 3 soon, covering other schools and more faculty members from all schools.
Academy of Liberal Arts
Heba Fathelbab, RHET and CORE
My approach to using AI in teaching and learning focuses on equipping students with the skills to use AI tools effectively and ethically, recognizing that without this knowledge they may be at a significant disadvantage when they graduate and enter the job market. AI-generated content is used as a starting point for research and writing, but students are required to critically assess this content for gaps or misinterpretations to ensure their own academic voice remains dominant. The emphasis is on fostering critical thinking by encouraging multiple revisions and personalizing arguments, supported by clear rubrics and instructor feedback adapted specifically for AI integration.
In my classroom, I position AI as a support tool that helps students generate ideas without replacing their own thinking or voice. I require students to show all their process work, keeping the focus on critical thinking and student voice. I do this through a variety of tasks, such as asking students to manually revise AI-generated drafts and clearly mark their changes to ensure they maintain ownership of their work. I also embed brief reflective writing activities in which students explain their revision decisions and how AI influenced their thinking, which deepens their engagement with the AI experience.
Assessment in my classes now focuses more on the critical thinking process throughout the writing journey rather than just the final product. I use rubrics that value both the process and the product with a clear focus on responsible AI use, originality and thoughtful engagement with ideas. I also think it is important to continue having discussions with the students about ethical AI use, including honesty, citation, and transparency. I believe this approach helps students integrate AI responsibly while developing authentic, critical learning.
Onsi Sawiris School of Business
Hakim Meshreki, MGMT/MKTG
Artificial Intelligence (AI) has emerged as an inevitable and transformative reality that academic institutions must proactively address. Faculty members, depending on their respective disciplines, are generally faced with two prevailing approaches. The first is to prohibit the use of AI altogether—a stance that, in my view, resembles the resistance to the internet during its early adoption phase. Given the proliferation of AI tools and humanization software, detecting AI-generated content has become increasingly complex. More importantly, such a restrictive approach risks forfeiting a valuable opportunity to guide students in the responsible and constructive use of AI technologies.
The second, and more progressive, approach is to embrace AI and actively mentor students on its ethical and productive application. This includes leveraging AI to enhance efficiency without compromising academic integrity or the learning process. I have personally adopted this approach across several of my courses.
In my Marketing Research course (MKTG 3201), students are encouraged to utilize AI tools at various stages of their projects. For instance:
- During market sizing exercises, students use platforms such as Copilot or ChatGPT to gather preliminary data on Total Addressable Market (TAM), Serviceable Available Market (SAM), and Serviceable Obtainable Market (SOM). They are then required to triangulate these findings using multiple credible sources.
- For secondary data analysis, particularly when reviewing existing literature, I recommend the use of Elicit.com to identify recent and relevant academic publications.
- In the data analysis phase, students first conduct statistical analysis using SPSS (a non-AI tool), and subsequently employ AI to derive deeper insights and interpretations.
- When teaching data-driven segmentation, I have used AI to assist students in generating segment names and descriptions based on their analytical outputs.
However, I also impose boundaries to ensure skill development. For assignments involving hands-on data analysis or mystery shopping (an observational technique used in marketing research to assess service quality, which students must experience themselves rather than delegate to AI), AI use is explicitly prohibited to foster proficiency in both quantitative and qualitative methodologies. Conversely, in the final project report, students are encouraged to use AI to refine their writing and synthesize findings, thereby enhancing the professionalism of their submissions.
In my Business Strategy course (BADM 4001), I introduced students to Jeda.ai, a graphical modeling tool that facilitates environmental scanning and strategic analysis (e.g., PESTEL, Resource-Based View). I designed an assignment where one group utilized Jeda.ai while another completed the task manually. A subsequent debate session allowed students to critically compare the quality and depth of insights generated through both approaches.
I remain committed to exploring and integrating emerging AI tools that can enrich student learning and academic development. The goal is not merely to adopt technology, but to cultivate a pedagogical framework that empowers students to use AI responsibly, creatively, and effectively.
School of Humanities and Social Sciences
Noha Abou-Khatwa, ARIC
I am not yet allowing students free use of AI, but I am integrating its output into assignments for them to critique and fix.
I started from a place of ignorance and fear of an unknown that presented itself as a nuisance more than anything else. My first encounters were centred on reflection papers submitted by students that were nothing more than paragraphs of generic nonsense about Islamic art. It was frustrating and annoying to say the least. My initial reaction was to ban the usage of AI. Semester after semester it only got worse since the bots became better, and it was very difficult to prove that they were used.
I had a shift in mindset last September as I started to converse with the students about AI and with the help of the CLT it became more and more evident that we can’t bury our heads in the sand!
Last year, I decided to incorporate assignments and activities that help students develop their critical thinking faculties and understand that they are the authors of their own submissions, and as such are responsible and accountable for the originality and integrity of their work. In an introductory course on Islamic architecture, I devised an assignment in which students go through and correct an AI-generated PowerPoint presentation discussing monumentality in Islamic architecture. Alongside the presentation, I sent students an article on the topic and asked them to read it and use it to amend the information and images in the presentation. The students enjoyed the exercise and learnt that they can’t take AI’s work for granted; they need to be able to corroborate and critique it using the proper sources. I gave them the option of keeping the design the AI bot suggested, and they were also free to keep the outline the same. We also run mid-semester in-class activities in which students compare the speed and efficiency of AI against their own visual memories in identifying buildings and objects in images. This exercise was devised to help them understand why, as art historians, we need to train our visual memory.
School of Sciences and Engineering
Alia el Bolock, CSCE
I believe that AI should be treated as an inevitable partner in education—not a threat—and that what matters most is how we help students learn with it, understand through it, and judge beyond it. In my classes, I design assignments knowing that students will use AI; rather than banning or resisting this, I adjust learning outcomes, rubrics, and assessment tasks so that using AI doesn’t sidestep learning but serves it. This way, we can encourage students to use AI smartly to improve themselves.
In practice, this means teaching students to approach AI not only as users, but as builders, critics, and innovators. I allow AI use openly, but make sure that assignments demand more than simple generation: students must analyze, adapt, or challenge AI outputs, ensuring they understand the underlying concepts. I also highlight where AI fails, asking students to test its limits, identify errors, and reflect on why the technology breaks down. This cultivates critical judgment and prevents blind trust in these “magical” AI tools that promise everything.
Equally important is to embed ethical considerations into every discussion, like bias, fairness, privacy, and the consequences of overreliance. By weaving AI into both technical content and reflective practice, students learn not only how AI works, but also when it can help, when it can mislead, and when it risks drowning them. This way, students leave not just technically capable, but responsible, independent thinkers who know how to harness AI while still learning how to learn.
Ibrahim Abotaleb, CENG
I allow students to use AI as long as they critically analyze the output. I add a layer of assessment, an oral discussion, to ensure that they truly master the assessed topic instead of just relying on the AI output. As expected, not all students are the same; there is a spectrum of dependency on AI. Some students do not use AI at all and rely completely on themselves. This is good for fundamental engineering concepts, but makes them slow in tasks that require writing code. The other end of the spectrum is a disaster: relying completely on AI and submitting the output without even looking at it. I have found it extremely difficult to effect any change in those students, and they receive terrible grades. Their performance in internships is also terrible. The healthy part of the spectrum is the middle, where AI helps students by re-explaining concepts, writing code, and verifying solutions, without being the sole source of knowledge generation or skill enhancement.
Mohamed Darwish, CENG
Unfortunately, not all students are sufficiently aware of when to use AI, in which contexts it should be used, and how to use it in different circumstances to produce results that are accurate enough. I gave students seven different assignments in a 400-level course titled “Construction Methods and Equipment II”. Each of the seven assignments covered a different topic related to construction technology. Within each assignment, students were asked to redo, using an online AI tool, one of the problems they had solved manually, and to compare the AI’s answers to their own. The AI’s level of success in reaching the correct results was assessed as the percentage of correct answers. These percentages varied according to several factors, including each student’s level of experience with AI, the sophistication of the problem assigned, and the AI tool used.
Nabil Mohareb, ARCH
My approach leans toward permitting students to use AI in a responsible and thoughtful manner. I encourage them to leverage AI tools to save time and enhance creativity, particularly in architectural design and academic research writing. However, I emphasize the importance of ethical use, critical thinking, and ensuring originality in their work.
In every course, I teach a session on using new AI tools relevant to that course, and I always tell my students, “garbage in, garbage out.” This means that if you give an AI a vague or general prompt, you’ll get a vague, general, and often irrelevant response.
The key to getting a useful outcome is to be specific with your prompts. The more detailed and clear your question is, the more accurate and helpful the AI’s answer will be. You, the user, are responsible for the outcome, not the AI.
Stay tuned for part 3, which will include more faculty members across the disciplines.
How are you addressing AI in your courses? Tell us in the comments!