Overview
I recently listened to a Diary Of A CEO (DOAC) podcast episode that asked a provocative question: is AI rotting our brains? The hosts explored whether our growing reliance on AI is not just making us lazy, but actually degrading our mental muscle.
The discussion stuck with me. I use tools like ChatGPT almost daily, and I have caught myself wondering whether that reliance is affecting my ability to think independently.
The “brain rot” argument
The podcast highlighted a growing concern: AI tools like ChatGPT, Gemini and Grok are so good at automating tasks that we risk skipping the struggle that builds skill. If a model can draft a report in seconds, why wrestle with a blank page by yourself? Cognitive scientists call this cognitive offloading: handing mental effort over to an external tool. It's not new (we did it with calculators and spellcheck), but the scale and speed of AI make it more powerful than ever.
The worry itself isn't new: whenever we invent tools that automate thinking, critics warn about skill loss. A recent Axios report on an MIT study suggested that generative AI might reduce creativity and problem-solving over time. In a similar vein, BigThink highlighted cases where recruiters who leaned heavily on AI for candidate screening became less sharp in their judgement.
This echoes Nicholas Carr's famous warning in The Shallows, where he argued that digital technologies reshape our attention spans and cognitive depth. It is easy to see the parallel with AI: the temptation to let it "think for us" is real, and it could blunt our critical reasoning if we are not careful.
The "efficiency" counterpoint
On the flip side, AI isn't just about cutting corners; it's also about boosting productivity. The DOAC hosts pointed out that calling AI "brain rotting" overlooks its role as a powerful assistant. Instead of drafting the same repetitive email 20 times, I can let a model handle the boilerplate and focus on tailoring the message.
The Chronicle of Higher Education recently argued that fears of "brain rot" oversimplify things: students aren't becoming zombies, they are adapting to new tools much as we did with calculators and Google search. There is also a compelling economic case: McKinsey estimates that generative AI could add up to $4.4 trillion annually to the global economy through productivity gains. That's not laziness; that's leverage.
History offers some perspective. Calculators didn't erase our ability to do maths; they freed us to tackle more complex problems. Spellcheck didn't ruin literacy; it let us write faster with fewer errors. The real danger isn't the tool itself, it's using the tool passively, without engaging your own judgement.
My personal take
For me, the difference lies in intent. If I let ChatGPT write entire essays and never reflect on the content, then that’s laziness. But if I use it as a sparring partner, to challenge my ideas, spark new angles or improve my writing, then it actually makes me more engaged, not less.
Three tips to make sure you’re using AI effectively
- Don’t copy anything directly
If you paste AI output without touching it, you're not really engaging with it. My rule is to type it out myself rather than paste it. By rewriting as you go, you process the information, understand it properly and engage your brain.
- Ask yourself: how would I do this without AI?
If the answer is something repetitive, like searching for data, copying values or formatting references, then great: you are using AI as a productivity tool. But if the answer is something creative, such as brainstorming, writing or coding, think twice before outsourcing all of it. Find a balance: either use AI to fill the blank page and then rework every word yourself, or let it help you refine something you have already created.
- Always add your own voice
AI drafts are generic by design, so the easiest way to stay engaged is to layer in your own perspective: personal stories, examples from work, or simply rewriting passages in your natural tone. This extra effort makes the final product yours and keeps it authentic.
So, is AI rotting our brains?
I think the better question is: "what does our use of AI say about us?" These tools don't come with built-in ethics, discipline or motivation. I know that when I read something that was clearly produced by AI, I disengage: I don't trust it, and I assume the author hasn't put any effort into what I am reading. The DOAC podcast didn't settle the debate, and I don't think it can be settled. But reflecting on it reminded me that I want to use AI deliberately: not as a shortcut to avoid effort, but as a tool that pushes me into deeper thinking. Used that way, instead of rotting my brain, it keeps me curious and helps me deepen my skills and knowledge. That's the challenge, and the opportunity, of living with AI.