The Behavioral Times | Insights into the psychology of everyday behavior

Does AI Make You Stupid? Here’s What the Research on AI and Critical Thinking Actually Shows

By Florien Cramwinckel
30/03/2026
in AI & Digital behavior
Reading Time: 7 mins read
[Image: Person with Southeast Asian appearance looking at a smartphone with a distracted, vacant expression — illustrating reduced critical thinking when using AI]

Does using AI make you think less critically? A large-scale study of 319 knowledge workers found that higher confidence in AI is associated with less critical thinking — while higher confidence in yourself is associated with more. Here's what the research shows, and what you can do about it.


📌 TL;DR

  • A survey of 319 knowledge workers (936 real work examples) found that higher confidence in AI is associated with less critical thinking — even when people are aware of this risk.
  • AI doesn't eliminate critical thinking. It shifts it: from gathering information to verifying it, from solving problems to integrating AI output, from executing tasks to stewarding them.
  • The strongest protective factor against over-reliance: confidence in your own domain expertise.
  • The practical implication: how you prompt AI matters as much as whether you use it. Asking AI to challenge you rather than serve you changes the dynamic entirely.

Is the frustration with AI-generated content justified?

Yes — and research now gives that frustration an empirical basis. Everyone seems to be reacting to the unchecked proliferation of AI-generated content, like the formulaic build-up of many LinkedIn posts. It feels fake, and it might also just make us think less over time. As if no longer thinking for yourself about what you put into the world gradually erodes your ability to do so.

Research from Microsoft Research and Carnegie Mellon University surveyed 319 knowledge workers and found that this risk is real. The more confidence you have in AI for a specific task, the more likely you are to accept its output without critical engagement. Not because people are lazy — but because they can. We also know from a previous article on the AI disclosure penalty that audiences are already picking up on this shift — and evaluating AI-involved content differently because of it.


Does AI reduce critical thinking?

Yes — but the mechanism is more specific than you might expect. Lee et al. (2025) found that task-level confidence in AI, not general trust in AI as a technology, predicts reduced critical thinking. When someone believed AI was particularly capable of handling this specific task, they were significantly less likely to engage critically with the output (β = −0.69, p < .001).

In practice, this means: you take the output, assume it's good, and move on. You don't check. And we all know AI makes mistakes — sometimes subtle ones, sometimes significant ones. So that's not necessarily a great habit to develop.

But it is a very human one. BJ Fogg's behavioral model tells us that when a behavior becomes easier, we do it more. This is actually the core principle behind effective behavior change interventions: remove friction from the behavior you want more of, and people will do it more. It works beautifully for exercise habits or saving money. The flip side — illustrated clearly by this research — is that when AI removes the friction from thinking, we do less of it. The tool doesn't make us stupid. It just makes not-thinking-carefully the path of least resistance. You can read more about why habits are so hard to break — and why this kind of effortless reliance is particularly sticky once it sets in.


How does your thinking change when you use AI?

Your brain doesn't switch off — it shifts into a different mode. Lee et al. (2025) identify three consistent changes in where cognitive effort goes when knowledge workers use AI tools at work.

1. From gathering information → to verifying it. AI retrieves and organises information faster than any human. But it also makes things up — presenting invented facts with the same confident tone as accurate ones. The cognitive burden doesn't disappear; it moves. Instead of spending time finding information, you now need to spend time checking whether it's actually true. Workers using AI for research reported needing more effort for verification, not less.

2. From solving problems → to fitting the AI's answer into your reality. AI can apply knowledge to new situations and generate solutions. But the output is rarely a perfect fit. What changes is that you're no longer solving the original problem yourself — you're figuring out how to adapt someone else's answer to your specific context, constraints, and audience. Less original problem-solving, more puzzle-fitting around the edges of what AI produced.

3. From doing the work → to managing the work. This is the most consequential shift. As AI handles more of the actual production, your role moves toward oversight: directing AI, monitoring its outputs, and staying accountable for the result. Lee et al. (2025) call this "task stewardship."

Think of what happens in many organizations when someone who's excellent at their craft gets promoted to management. Suddenly they're no longer doing the work — they're overseeing someone else doing it. And being good at the content doesn't automatically make you good at managing the content. Those are different skills, and they need to be trained deliberately. The same logic applies here. Using AI well isn't a natural extension of being good at your job. It's a new skill set — and one that many people are picking up entirely on the fly.


What protects your critical thinking when using AI?

The strongest protective factor is confidence in your own domain expertise. Workers who were confident in their ability to do a task without AI were significantly more likely to think critically when using it (β = 0.26, p = .026, Lee et al., 2025).

Think of the calculator. Everyone has access to one. And yes, calculators make arithmetic faster for everyone. But someone who genuinely understands mathematics uses a calculator to solve more complex problems faster — and can tell when the output doesn't look right. Someone who doesn't understand the underlying math just gets wrong answers more quickly, with full confidence in those answers.

AI may work the same way — and the gap it creates might be larger, not smaller. People who deeply understand their domain use AI to think faster, go further, and produce sharper work. People who lack that foundation use AI to generate more output — but have no reliable way to evaluate whether it's any good. If anything, access to AI may widen the performance gap between those who can genuinely engage with it and those who can't. Everyone gets faster. Not everyone gets better.

The organizational implication is uncomfortable: deploying AI broadly before your team has built foundational expertise may accelerate output while eroding the judgment that makes output good. An incompetent manager overseeing a lazy executor is a bad combination — but it's essentially what you get when low domain confidence meets high AI trust. Fast and wrong at scale is worse than slow and right.


How can you use AI without thinking less?

The answer is to deliberately reintroduce friction — not to slow down, but to stay engaged. If the problem is that AI lowers the effort of thinking to the point where we stop doing it, the solution is to change what you ask AI to do.

Instead of "write this for me," ask it to challenge your reasoning, identify weaknesses in your argument, or steelman the opposing view. This reframes AI from an output machine into a thinking partner — and keeps your own analytical engagement in the loop.
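This reframing can be made concrete as a reusable prompt template. The sketch below is a hypothetical illustration — the template text, function name, and structure are my own, not something from the Lee et al. study or the author's masterprompt — showing how "challenge me" instructions can be wrapped around any draft before it is sent to an AI assistant:

```python
# Hypothetical "thinking partner" prompt template: instead of asking the
# AI to produce finished text, it asks the AI to attack your reasoning.

CHALLENGE_TEMPLATE = """You are a critical reviewer, not a ghostwriter.
Here is my draft argument:

{draft}

Do not rewrite it. Instead:
1. List the three weakest claims and explain why each is weak.
2. Steelman the strongest opposing view in one paragraph.
3. Name one piece of evidence that, if true, would change my conclusion.
"""


def build_challenge_prompt(draft: str) -> str:
    """Wrap a draft in a prompt that requests critique rather than output."""
    return CHALLENGE_TEMPLATE.format(draft=draft.strip())


if __name__ == "__main__":
    print(build_challenge_prompt("AI tools always reduce critical thinking."))
```

The design choice is the point: the template forbids the low-friction behavior ("do not rewrite it") and demands the high-engagement one, so passive acceptance is no longer the default path.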

I've developed a masterprompt that operationalises exactly this: a structured way of interacting with AI that keeps your own judgment central rather than peripheral. You can download it at floriencramwinckel.nl/ai_masterprompt.

The underlying logic is the same as cookie banner design: AI is built to make "accept" the path of least resistance. A masterprompt is a behavioral design intervention — it sets up your interaction so that critical engagement, not passive acceptance, is the default.


What does this research not yet tell us?

The Lee et al. (2025) study has real limitations worth naming. All data are self-reported: participants described their own critical thinking rather than having it objectively measured. The researchers acknowledge that workers sometimes conflated "AI made this easier" with "I thought less critically" — which aren't the same thing.

The sample skews young (71% under 35), English-speaking, and toward high AI-adoption occupations. Whether these effects hold in older, less tech-oriented populations — or in cultures with different relationships to technology and authority — is unknown.

There's also a deeper open question: will the effect persist as AI use becomes normal? People may recalibrate their scrutiny over time. Or the opposite — normalisation may further erode the sense that checking is necessary. The mechanisms the researchers identify — confidence calibration, habit formation, cognitive offloading — are well-documented in behavioral science. That gives the findings weight beyond this specific sample. But the long-run direction remains open.


📚 References

  • Lee, H.-P., Sarkar, A., Tankelevitch, L., Drosos, I., Rintel, S., Banks, R., & Wilson, N. (2025). The impact of generative AI on critical thinking: Self-reported reductions in cognitive effort and confidence effects from a survey of knowledge workers. CHI '25: Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems.

🔗 Related articles on The Behavioral Times

  • The AI Disclosure Penalty: Why telling people AI was involved lowers how they evaluate the work
  • Why are habits so hard to break? — The same mechanism that makes AI reliance sticky
  • How can you close the gap between wanting and doing? (The intention–behavior gap)

Is your organization using AI — but producing outputs nobody really owns?

That's a behavioral pattern, not a tool problem. I'm Florien Cramwinckel, an independent behavioral strategist. My work goes from BS to BS — from bullshit to behavioral strategy. I help organizations find where behavior gets in the way of results, and redesign it.

Curious what that looks like for your team? Let's talk →

Tags: AI, automaticity, behavior change, critical thinking, generative AI, habits, knowledge workers

Florien Cramwinckel

I’m a behavioral scientist, writer and speaker with a deep interest in human behavior — from money and decision-making to climate, AI, identity, and everyday habits. I translate research into sharp, accessible insights that help us understand not just how we act, but why. Expect nuance, evidence, and a touch of playfulness.


© 2026 JNews - Premium WordPress news & magazine theme by Jegtheme.
