Thinking first, AI second
An approach to using AI that ensures you benefit from it, without damaging your thinking skills.
One of my biggest concerns around AI is the possibility it will undermine my ability to learn and problem-solve. This fear has come up repeatedly for me: some months ago I quipped that "I'm not a good enough programmer to be ready to stop learning" as the reason I didn't want to use AI to code.
I'm not the only one worried about this. As with everything in AI, it's early days, but studies suggesting that AI usage reduces critical thinking skills are already being published.
From a study done by SBS Swiss Business School:
The findings revealed a significant negative correlation between frequent AI tool usage and critical thinking abilities, mediated by increased cognitive offloading. Younger participants exhibited higher dependence on AI tools and lower critical thinking scores compared to older participants. Furthermore, higher educational attainment was associated with better critical thinking skills, regardless of AI usage. These results highlight the potential cognitive costs of AI tool reliance (AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking by Michael Gerlich)
And Microsoft did a survey focused on knowledge workers:
higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking (The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers)
These results are not surprising. There is value in making your brain put in effort. For example, writing a difficult essay forces you to:
- Thoroughly think through and reflect on a topic
- Practice articulating ideas
- Learn to tolerate challenges
Just as physical muscles weaken with disuse, our ability to think critically, to reason, and even to learn declines if we start outsourcing those tasks.
But: AI is here. It's powerful. It's being added everywhere. Who wants to be the one human who isn't AI-powered?
I've stumbled into an approach that works for me, one that makes AI genuinely useful without feeling like it's weakening my intelligence: reviews and feedback.
The process looks like this:
- I do the thing.
- I iterate on the thing, taking time to reflect, letting it percolate, and so on.
- Then, when I feel I'm done, I ask AI for feedback.
I've used this a few times now, most recently on my website homepage. Visual design is a pretty major weakness of mine, so I spent quite a while pushing myself to be a bit less minimal, playing around with whitespace, trying to get it to look "right". I was ready to set it live when it occurred to me that it would be worth asking AI.
And sure enough, the AI (Claude 3.7 Sonnet) was helpful. Claude is the reason my homepage has calls to action at the top, not just the bottom. And some colour on the services grid. I didn't take all the AI suggestions (Claude would have gone a lot less minimal) but it was definitely helpful.
Note what this isn't:
- I'm not giving AI an idea and asking it to draft from scratch for me.
- I'm not even asking AI to review a first draft.
AI is the last step before I publish. This means that I get value from AI without switching off my brain.
There's no reason you can't involve AI at multiple points, following the same principle: for example, when planning, come up with your plan, then get AI to check the plan. But don't create the plan, then ask AI to implement it.
```mermaid
sequenceDiagram
    autonumber
    loop Have an idea
        Human->>Human: Reflect, explore, imagine.
    end
    Human->>AI: Here's my idea. Do you have any suggestions to add or any resources to recommend?
    AI-->>Human: Suggestions and learning resources
    loop Make a plan
        Human->>Human: Plan the work. Define your project.
    end
    Human->>AI: Please review my plan. I'm trying to achieve [goals].
    AI-->>Human: Suggestions
    loop Do the thing
        Human->>Human: Create! Do multiple rounds of edits.
    end
    Human->>AI: Here's my [article | design | code | etc.]. I'm trying to achieve [goals]. Please give feedback.
    AI-->>Human: Suggestions (possibly ego-crushing)
    loop Consider the feedback
        Human->>Human: Reflect, implement feedback if useful.
    end
```
As well as protecting your brain, this approach reduces the risk that your writing ends up seeming generic and AI-produced (because it isn't: your work is your work, just a bit better thanks to an additional reviewer).
This technique of course works best for things the AI knows a lot about. There are probably millions of articles out there on how to create a good homepage for a consultancy website, so the AI had plenty of good ideas when reviewing my design. Your mileage may vary on more niche tasks. And don't switch off your brain after getting the AI feedback: your final piece of critical thinking is to reflect on the AI's comments and decide which to act on.
It won't always be possible to use AI this way. If your job insists you use an AI assistant to write or code, you can't avoid it entirely. But as much as possible: do your thinking first, involve the AI second.