
Why most PhDs misuse AI writing tools

8 min read · Apr 20, 2025

(And 6 strategies to use them smarter)

This is what DALL•E “thinks” PhDs’ faces look like when they use AI writing tools the wrong way.

When ChatGPT first emerged, I spent three weeks in my university office, door locked for most of the day, testing what it could actually do for academic writing. As a PhD deeply curious about how experts integrate AI into their workflows, I needed to know: Was this the academic equivalent of discovering fire, or just another shiny distraction? And I went at it hard. Until then, I was mostly known for my games and gamification research in HCI, but over the last year, I’ve been obsessed with understanding generative AI writing.

So, what’s my current verdict? Most AI-generated academic text feels like fast food (sorry, my intellectual mavens): convenient, predictable, and utterly forgettable. You could call it a faster way to mediocrity. It averages out whatever intellectual distinctiveness you possess.

That’s fine if you’re just drafting routine emails. Not so great if you’re trying to contribute meaningful research to your field. But, hey, it’s become hugely popular nonetheless.

Here’s what most academics miss, though: AI doesn’t have to dilute your writing. Used strategically, it can actually elevate it. I’ve been analyzing hundreds of research papers, and I keep discovering fascinating patterns in how successful scholars use these tools…

Written by Lennart Nacke

🧠 Tenured brain, weekly drops. Maximum citations but sanity questionable. The prof your prof follows for research & AI ideas. University Research Chair.
