Dagmar Monett’s Post

With a few exceptions, the essence of my experience reviewing or grading journal and conference papers, students' essays, Bachelor's theses, Master's theses, Doctoral theses, research proposals, etc., over the last three years can be summarized in three words: exhausting, disappointing, depleting.

I'm definitely tired of reading and having to grade or review inauthentic writing, polished texts that promise much but say nothing substantial, entire paragraphs that circle around uninteresting voids without any depth. I'm unquestionably tired of having to waste my time giving feedback on machine-produced content. It's very damaging to me because I truly need that time to read, write, and do research myself. I'm undoubtedly tired of thinking about how the content before my eyes might be a poor combination of words other authors wrote before, now used and abused (and often misused and seldom cited) in a decontextualized way. I'm emphatically tired of reading about pitiful and incoherent band-aid "solutions" (e.g. degenerative "AI"-assisted peer review, which poisons the scientific and academic systems; hype-friendly guidelines for the use of "AI", artefacts of critical washing; or more degenerative "AI" tools to fabricate even more slop) that obscure and encourage the use, abuse, and misuse of plagiarizing technology. Plagiarism, whether as cologne or perfume, always smells the same and triggers the same reactions.

In 2025, my aversion to reviewing and grading increased exponentially. The joy I once had reading others' work plummeted. A few exceptions (e.g. books) have been like oases where I find solace, but I know that won't last long: it doesn't seem 2026 will reverse the trend.

To alleviate the toll, I...
- decline the review of papers (I now accept a nanoparticle of what I accepted before; my default answer is now No).
- don't share or cite works where degenerative "AI" was used (if the disclosure is made at the end of the work and I didn't read it earlier).
- don't even read them if the authors declare upfront the use of degenerative "AI" (there is much more to read out there that deserves my time and attention).
- am explicit, especially when giving feedback, about what it means to have to review something that lacks authenticity, honesty, and effort on the other side; above all, about what is missing when those capabilities are offloaded to "AI."
- reduce drastically the amount of homework, in-class exercises, and examinations where degenerative "AI" might be the way to go, and substitute them with adequate forms of knowing where students' development is at.
- reject, oppose, defy, repel, resist, counter, and ban, if it is in my hands (e.g. in examinations), the use of degenerative "AI."
- write about these topics, when time and mood allow it, for myself but also for others, in case they find something useful in them. That is one of the reasons why I still write on the ever-deteriorating LinkedIn.

And I'm not alone!


Thank you, Dagmar. I feel your exhaustion, disappointment, and disillusionment. Just out of curiosity: do you differentiate between using AI for the ideation and writing process (the core of intellectual work) vs. using AI to express oneself more accurately and correctly in a foreign language, where being a second-language speaker may be a disadvantage and actually hamper the dissemination of one's thoughts? I don't use AI at all for "content production" (everything I write is my own intellectual work), but I do use it occasionally for polishing my English (yet always on the basis of something I have already written, which contains all my thoughts, and always working through the suggestions word by word afterwards, discarding everything that does not feel "like me"). Would you still refuse to read the text if AI were used like this? P.S. This is written entirely without AI.

Glad I got out of academia in 2016. Just in time ;-) P.S.: I am sorry for you.

You're right, and you've actually hit on one of the issues: many of those promoting education strategy have not themselves been teaching directly to any large extent in the past couple of years and are unaware of the real-world impact. Similarly, if you haven't been an editor or reviewer of journal papers, then the tide of ocean-borne microplastics hasn't hit you yet!

Dagmar Monett, the same hollowing out of cognition and knowledge is happening in business too, where it will impact the person on the street more quickly. Business is of course just the tip of the iceberg of human philosophy and knowledge, most of it ultimately a product of people pursuing knowledge for its own sake, but since that tip is what sticks out, that's where the Titanic hits.

Back to hand-written answers in 2-hour pencil-and-paper exams. Back to grading on the ability to express answers in a way that is understandable to the reader and is based on the individual's ability to integrate and think. I would rather spend 24 hours grading hand-written responses than garbage generated without thought or effort.

Thanks, Dagmar, for sharing this important issue! Many of my colleagues and I face this issue with our students who are non-native speakers of English. I believe that AI use in writing should not be treated as a blanket "not allowed" practice. There is a clear difference between asking AI to generate ideas or arguments and using it to improve language clarity and accuracy, particularly for students whose first language is not English and who are already at a disadvantage compared to native English-speaking peers. When the ideas and thinking are entirely one's own, and AI is used only as a language support tool, this seems to me both reasonable and fair. This also depends on the purpose of the writing. If the aim is to teach students how to write in English, then AI should not be used, even for improving clarity. But when writing itself is not the learning objective (e.g. writing a reflection on the use of a certain CAT tool in translation), using AI to enhance language clarity can be acceptable, as long as the ideas are one's own and the student remains fully responsible for every word and meaning expressed. Yet I agree that alternative methods of assessment need to be explored (e.g. interviews, oral defence, in-class writing, etc.).


My article detailing the complementary relationship between AI and human cognition has just been published online: https://doi.org/10.1177/23294906251399540

From my experience, this aligns with what Prof. Dagmar Monett observes: industry is no different. The challenge goes beyond AI-generated text. Too often, code is produced with AI assistance without a real understanding of logic, assumptions, or failure modes. When core skills such as debugging, error analysis, and reasoning about control flow are missing, the result is software that may run—but is fragile, hard to maintain, and risky at scale.


Perhaps it's time to start using this "Degenerative AI" to learn how to teach. If teachers set their curriculum from brainless guide books that lack any personalization, then it's no surprise that students are using AI to give back what they are given. Instead of dealing with the new transformative reality and having to discover new ways to educate, teachers are complaining that their outdated ways are not being respected by the incoming classes.
