An Observation from Teaching Philosophy
Students are increasingly losing the ability to share their thoughts in a convincing, clear, and detailed way. Before the days of Generative AI, Philosophy students typically developed, step by step, the character virtues, abilities, and skills needed to focus for hours on specific questions, wrestle with very complex and challenging texts, and eventually take their own position on quite abstract questions, defending it over the course of a fifteen-page seminar paper as clearly and concisely as possible. This was what made them stand out in the labor market and hence typically helped them secure a position in fields as diverse as consulting and complex government work.
A little more than three years after the release of ChatGPT, Generative Language AI has permeated all parts of education in Switzerland. In addition to the effects of frequent interaction with the technology on adolescents’ mental health (spoiler: it does not look pretty; see Ahmed 2025 for an exploratory study), detrimental cognitive effects are also beginning to emerge (e.g., Kosmyna et al. 2025).
However, what I encounter in my Philosophy classes is a more down-to-earth effect: Talented young people, bright and motivated, are increasingly struggling to express their thoughts in a convincing, clear, and detailed way. Where they use Generative AI, I get somewhat boring, rather generic, but grammatically correct and acceptable essays and seminar papers. Where they do not, I struggle ever more often to understand what they wish to convey, owing to a lack of cohesion, inconsistent use of concepts, and basic grammatical and orthographic errors. Hence: Students are losing the ability to argue their theses convincingly, to convey their thoughts as clearly and as simply as possible, and to develop the conceptual sophistication characteristic of Philosophy graduates.
Why should that be a problem?
This means that they are losing the very ability they need to distinguish themselves in a labor market where AI is omnipresent. The problem with this development is not that students will fail their studies: their AI-generated essays are uninspired and quite boring to read, much more so than the not-so-good first attempts of first-year students who had no access to Generative AI, but the texts are acceptable and pass. The problem is rather that they are making themselves superfluous by acquiring the skill of smartly prompting a chatbot instead of undergoing the deep character formation necessary to think deeply, sharply, and dangerously innovatively. Any team leader at a consultancy, a government agency, or an NGO can watch a couple of YouTube videos and learn how to prompt an LLM so that it delivers quite impressive results. By reducing their educational experience to becoming smart prompters, students may have chosen a very effective path to rendering themselves superfluous in the labor market. As a teacher, this worries me.
The Solution: Learn to Think Before You Start to Prompt
The solution is simple: Go cold turkey on GenAI during your studies. Seriously. It does not even matter whether you study Philosophy, Business Administration, or Computer Science; the basic principle is the same: To have a competitive advantage over GenAI in the labor market, you need to learn the basics of your discipline the slow, hard, and stupid way (incidentally, this is probably why my very first Computer Science exam at ETH Zurich was taken with pen and paper). At a university, efficiency is not the goal; the formation of habits and virtues and the acquisition of deep skills are. You acquire those by doing the nitty-gritty work of a BA student yourself. Not because you will do these tasks without GenAI on the job (there, it is all about efficiency), but because it is the only way to acquire the grit, depth, precision, and flexibility of thought that will allow you to succeed in a labor market shot through with AI.