Does using AI make you think less critically? A study of 319 knowledge workers found that higher confidence in AI is associated with less critical thinking — while higher confidence in your own skills is associated with more. Here's what the research shows, and what you can do about it.
📌 TL;DR
- A survey of 319 knowledge workers (936 real-world examples of AI use at work) found that higher confidence in AI is associated with less critical thinking — even when people are aware of the risk.
- AI doesn't eliminate critical thinking. It shifts it: from gathering information to verifying it, from solving problems to integrating AI output, from executing tasks to stewarding them.
- The strongest protective factor against over-reliance: confidence in your own domain expertise.
- The practical implication: how you prompt AI matters as much as whether you use it. Asking AI to challenge you rather than serve you changes the dynamic entirely.
Is the frustration with AI-generated content justified?
Yes — and research now gives that frustration an empirical basis. Everyone seems to be reacting to the proliferation of AI-generated content, like the formulaic structure of so many LinkedIn posts. It feels fake, and it may also make us think less over time. As if outsourcing the thinking behind what you put into the world gradually erodes your ability to do it yourself.
Research from Microsoft Research and Carnegie Mellon University surveyed 319 knowledge workers and found that this risk is real. The more confidence you have in AI for a specific task, the more likely you are to accept its output without critical engagement. Not because people are lazy, but because they can. We also know from a previous article on the AI disclosure penalty that audiences are already picking up on this shift — and evaluating AI-involved content differently because of it.
Does AI reduce critical thinking?
Yes — but the mechanism is more specific than you might expect. Lee et al. (2025) found that task-level confidence in AI, not general trust in AI as a technology, predicts reduced critical thinking. When someone believed AI was particularly capable of handling a specific task, they were significantly less likely to engage critically with its output (β = −0.69, p < .001).
In practice, this means: you take the output, assume it's good, and move on. You don't check. And we all know AI makes mistakes — sometimes subtle ones, sometimes significant ones. So that's not necessarily a great habit to develop.
But it is a very human one. BJ Fogg's behavior model holds that a behavior occurs when motivation, ability, and a prompt converge (B = MAP): make a behavior easier, and ability rises, so we do it more. This is the core principle behind effective behavior-change interventions: remove friction from the behavior you want more of, and people will do it more often. It works beautifully for exercise habits or saving money. The flip side, illustrated clearly by this research, is that when AI removes the friction from thinking, we think less. The tool doesn't make us stupid. It just makes not-thinking-carefully the path of least resistance. You can read more about why habits are so hard to break — and why this kind of effortless reliance is particularly sticky once it sets in.
How does your thinking change when you use AI?
Your brain doesn't switch off — it shifts into a different mode. Lee et al. (2025) identify three consistent changes in where cognitive effort goes when knowledge workers use AI tools at work.
1. From gathering information → to verifying it. AI retrieves and organises information faster than any human. But it also makes things up — presenting invented facts with the same confident tone as accurate ones. The cognitive burden doesn't disappear; it moves. Instead of spending time finding information, you now need to spend time checking whether it's actually true. Workers using AI for research reported needing more effort for verification, not less.
2. From solving problems → to fitting the AI's answer into your reality. AI can apply knowledge to new situations and generate solutions. But the output is rarely a perfect fit. What changes is that you're no longer solving the original problem yourself — you're figuring out how to adapt someone else's answer to your specific context, constraints, and audience. Less original problem-solving, more puzzle-fitting around the edges of what AI produced.
3. From doing the work → to managing the work. This is the most consequential shift. As AI handles more of the actual production, your role moves toward oversight: directing AI, monitoring its outputs, and staying accountable for the result. Lee et al. (2025) call this "task stewardship."
Think of what happens in many organizations when someone who's excellent at their craft gets promoted to management. Suddenly they're no longer doing the work — they're overseeing someone else doing it. And being good at the content doesn't automatically make you good at managing the content. Those are different skills, and they need to be trained deliberately. The same logic applies here. Using AI well isn't a natural extension of being good at your job. It's a new skill set — and one that many people are picking up entirely on the fly.
What protects your critical thinking when using AI?
The strongest protective factor is confidence in your own domain expertise. Workers who were confident in their ability to do a task without AI were significantly more likely to think critically when using it (β = 0.26, p = .026; Lee et al., 2025).
Think of the calculator. Everyone has access to one. And yes, calculators make arithmetic faster for everyone. But someone who genuinely understands mathematics uses a calculator to solve more complex problems faster — and can tell when the output doesn't look right. Someone who doesn't understand the underlying math just gets wrong answers more quickly, with full confidence in those answers.
AI may work the same way — and the gap it creates might be larger, not smaller. People who deeply understand their domain use AI to think faster, go further, and produce sharper work. People who lack that foundation use AI to generate more output — but have no reliable way to evaluate whether it's any good. If anything, access to AI may widen the performance gap between those who can genuinely engage with it and those who can't. Everyone gets faster. Not everyone gets better.
The organizational implication is uncomfortable: deploying AI broadly before your team has built foundational expertise may accelerate output while eroding the judgment that makes output good. An incompetent manager overseeing a lazy executor is a bad combination — but it's essentially what you get when low domain confidence meets high AI trust. Fast and wrong at scale is worse than slow and right.
How can you use AI without thinking less?
The answer is to deliberately reintroduce friction — not to slow down, but to stay engaged. If the problem is that AI lowers the effort of thinking to the point where we stop doing it, the solution is to change what you ask AI to do.
Instead of "write this for me," ask it to challenge your reasoning, identify weaknesses in your argument, or steelman the opposing view. This reframes AI from an output machine into a thinking partner — and keeps your own analytical engagement in the loop.
I've developed a masterprompt that operationalises exactly this: a structured way of interacting with AI that keeps your own judgment central rather than peripheral. You can download it at floriencramwinckel.nl/ai_masterprompt.
The underlying logic is the same as cookie banner design: the default interaction makes "accept" the path of least resistance. A masterprompt is a behavioral design intervention — it sets up your interaction so that critical engagement, not passive acceptance, is the default.
What does this research not yet tell us?
The Lee et al. (2025) study has real limitations worth naming. All data are self-reported: participants described their own critical thinking rather than having it objectively measured. The researchers acknowledge that workers sometimes conflated "AI made this easier" with "I thought less critically" — which aren't the same thing.
The sample skews young (71% under 35), English-speaking, and toward high AI-adoption occupations. Whether these effects hold in older, less tech-oriented populations — or in cultures with different relationships to technology and authority — is unknown.
There's also a deeper open question: will the effect persist as AI use becomes normal? People may recalibrate their scrutiny over time. Or the opposite — normalisation may further erode the sense that checking is necessary. The mechanisms the researchers identify — confidence calibration, habit formation, cognitive offloading — are well-documented in behavioral science. That gives the findings weight beyond this specific sample. But the long-run direction remains open.
📚 References
- Lee, H.-P., Sarkar, A., Tankelevitch, L., Drosos, I., Rintel, S., Banks, R., & Wilson, N. (2025). The impact of generative AI on critical thinking: Self-reported reductions in cognitive effort and confidence effects from a survey of knowledge workers. In CHI '25: Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. ACM.
🔗 Related articles on The Behavioral Times
- The AI Disclosure Penalty: Why telling people AI was involved lowers how they evaluate the work
- Why are habits so hard to break? — The same mechanism that makes AI reliance sticky
- How can you close the gap between wanting and doing? (The intention–behavior gap)
Is your organization using AI — but producing outputs nobody really owns?
That's a behavioral pattern, not a tool problem. I'm Florien Cramwinckel, an independent behavioral strategist. My work goes from BS to BS — from bullshit to behavioral strategy. I help organizations find where behavior gets in the way of results, and redesign it.
Curious what that looks like for your team? Let's talk →