AI and Critical Thinking in Education
Brief Overview
The integration of AI into educational settings is a rapidly evolving trend with significant implications for learners' critical thinking skills. AI holds considerable promise for enhancing educational experiences through personalized learning and for cultivating deeper analysis, yet overreliance on AI tools may hinder independent reasoning and reduce active engagement.
Transparency in approach is essential for student success. Whether an instructor chooses to embrace AI integration, restrict its use, or adopt a balanced middle ground, students benefit from understanding the reasoning behind these decisions and how they connect to course learning outcomes. Instructors should consider communicating their AI policies clearly, both in class discussions and in their syllabi, so that students understand the expectations as well as the rationale behind them. Clear communication helps students align their learning strategies with course expectations and develop awareness of when and how different tools serve their educational goals.
Overall, the interplay between AI applications and learners' critical thinking skills is nuanced. Careful consideration is needed to mitigate risks associated with dependence on these technologies, necessitating a balanced approach that encourages active engagement and reflective thinking. Educators are therefore encouraged to cultivate an educational culture that not only utilizes AI effectively but also emphasizes the importance of traditional critical thinking skills, integrating AI tools to complement traditional learning methods rather than replace them.
Fostering Critical Analysis Through AI
How AI Promotes Critical Thinking
When implemented strategically, AI can transform from a simple answer-generating tool into a sophisticated thinking partner that challenges learners to engage more rigorously with complex ideas and develop stronger reasoning skills.
Several studies indicate that the use of AI tools can enhance critical thinking abilities in specific contexts. For instance, Liu and Wang conducted an intervention study demonstrating that Chinese EFL learners showed significant improvement in critical thinking when using AI tools in literature classes, suggesting that these tools facilitate critical cognitive processes through personalized feedback and scaffolding (Liu & Wang, 2024). Similarly, Guo and Wang highlighted that continuous monitoring and real-time feedback from AI systems keep learners engaged and encourage them to consider various perspectives, promoting critical thinking (Guo & Wang, 2024). Moreover, tailored learning experiences enabled by AI have been found to strengthen learners' analytical and problem-solving skills (Walter, 2024).
Scaffolded Questioning and Debate Generation
AI excels at generating thought-provoking questions, counterarguments, and debate prompts that learners can critically evaluate and refine, driving analysis beyond surface-level responses. This approach challenges learners to consider multiple perspectives, identify gaps in reasoning, and strengthen their analytical frameworks through structured intellectual engagement. The key to successful implementation lies in designing AI prompts that progressively increase in complexity while requiring learners to justify their reasoning at each step; a sample prompt sequence follows the examples below.
- In a history seminar, AI proposes three conflicting causes for the French Revolution; learners evaluate each, identify missing social factors, and refine the list to include class dynamics and economic pressures. This process forces learners to move beyond memorization toward synthesis and critical evaluation of historical evidence.
- In a management course, a custom GPT presents a simulated case study of cross-functional team conflict and prompts, "Should the project manager escalate the issue to senior leadership or implement a team-building intervention first?" Learners critique these options, adjust the debate prompts to explore stakeholder impacts, and deepen their strategic analysis by considering multiple organizational perspectives.
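As a concrete illustration, the sketch below (in Python) shows one way an instructor might encode such a progression as a reusable prompt sequence. The step wording, the topic, and the build_debate_prompts function are illustrative placeholders rather than a prescribed design; the resulting prompts could be pasted into any chat-based AI tool or a custom GPT.

```python
# Hypothetical sketch: build a scaffolded sequence of debate prompts that
# increase in complexity and always require learners to justify their reasoning.

SCAFFOLD_STEPS = [
    "List three competing explanations for {topic}. For each, cite one piece of evidence.",
    "Pick the weakest explanation you listed. What is missing from it, and how do you know?",
    "Revise your list to address the gaps you found. Justify every change you make.",
    "State the strongest counterargument to your revised position and respond to it.",
]

def build_debate_prompts(topic: str) -> list[str]:
    """Return a progressively more demanding prompt sequence for one topic."""
    return [step.format(topic=topic) for step in SCAFFOLD_STEPS]

if __name__ == "__main__":
    for i, prompt in enumerate(build_debate_prompts("the causes of the French Revolution"), 1):
        print(f"Step {i}: {prompt}")
```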
Real-Time Adaptive Feedback Systems
Intelligent tutoring systems can provide hints and challenges tailored to individual learner responses, encouraging justification and revision while maintaining cognitive engagement. This personalized feedback approach ensures that learners remain actively involved in the problem-solving process rather than passively receiving solutions. The adaptive nature of AI feedback allows for differentiated instruction that meets learners at their current level while pushing them toward more sophisticated reasoning. A brief sketch of a hint-only feedback loop follows the examples below.
- In an engineering mechanics course, AI observes a learner's incorrect free-body diagram and prompts, "What force did you omit? Consider the tension in the cable," guiding them to add the missing vector and justify its direction. This targeted intervention maintains the learner's agency in problem-solving while providing just enough support to facilitate learning.
- In a foreign language writing workshop, a custom GPT acting as a writing coach reviews a Spanish essay draft in real time—highlighting misuse of subjunctive verbs, suggesting alternative sentence structures, and challenging the writer to justify their stylistic choices before accepting corrections. This approach develops both language skills and critical evaluation of feedback.
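The following minimal sketch shows how such a hint-only feedback loop might be wired up, assuming the OpenAI Python SDK (v1+) and an API key in the environment. The model name, coaching rules, and get_hint helper are hypothetical placeholders that an instructor or instructional designer would adapt to their course and platform.

```python
# Hypothetical sketch of a hint-only feedback loop, assuming the OpenAI
# Python SDK (v1+) and an API key in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

COACH_RULES = (
    "You are a tutoring coach. Never give the final answer or a corrected "
    "solution. Respond with one targeted hint or question that points to the "
    "specific error, and ask the learner to justify their next step."
)

def get_hint(learner_work: str) -> str:
    """Return a single hint for the learner's submitted work."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": COACH_RULES},
            {"role": "user", "content": learner_work},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(get_hint("My free-body diagram for the hanging sign shows only gravity and the wall reaction."))
```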
Personalized Learning Path Development
AI's capacity to tailor tasks and examples to individual proficiency levels allows learners to be consistently challenged just beyond their comfort zones while maintaining engagement through relevant, contextualized content. This personalization prevents both boredom from tasks that are too easy and frustration from content that is too difficult, creating optimal conditions for critical thinking development. A sketch of interest-based problem generation follows the examples below.
- In language learning, the AI assesses a learner's use of past-tense verbs and offers increasingly complex exercises, starting with regular verbs and progressing to irregular verbs, ensuring mastery before advancing to more sophisticated grammatical structures. This scaffolded progression builds confidence while maintaining appropriate cognitive challenge.
- In a probability & statistics module, the AI begins by asking learners about their interests or hobbies (e.g., "Do you enjoy basketball or video games?"), then generates probability problems using basketball free-throw scenarios or game loot-drop rates, making abstract concepts more relatable and engaging while requiring rigorous mathematical analysis.
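A minimal sketch of this kind of interest-based personalization appears below. The interests, problem templates, and difficulty tiers are invented for illustration; in practice, the same idea could be delivered through a custom GPT's instructions rather than code.

```python
# Hypothetical sketch: generate probability problems matched to a learner's
# stated interest and current difficulty level. Interests, templates, and
# difficulty tiers are illustrative placeholders.
import random

TEMPLATES = {
    "basketball": {
        "intro": "A player makes free throws {p:.0%} of the time. What is the probability they make {k} in a row?",
        "advanced": "A player makes free throws {p:.0%} of the time. What is the probability of at least {k} makes in 10 attempts?",
    },
    "video games": {
        "intro": "A loot box drops a rare item {p:.0%} of the time. What is the probability of {k} rares in {k} consecutive boxes?",
        "advanced": "A loot box drops a rare item {p:.0%} of the time. How many boxes are needed for a 90% chance of at least one rare?",
    },
}

def make_problem(interest: str, level: str = "intro") -> str:
    """Fill a template for the learner's interest with randomized parameters."""
    template = TEMPLATES[interest][level]
    return template.format(p=random.choice([0.6, 0.7, 0.8]), k=random.randint(2, 4))

if __name__ == "__main__":
    print(make_problem("basketball", "intro"))
    print(make_problem("video games", "advanced"))
```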
Structured Reflection and Metacognitive Development (Human-in-the-loop)
AI can facilitate deeper metacognitive engagement by requiring learners to examine not only AI outputs but also their own thinking processes, assumptions, and analytical approaches. This human-in-the-loop verification ensures that learners maintain critical agency while benefiting from AI's capabilities. The reflection component is essential for developing transferable critical thinking skills that extend beyond specific AI interactions.
- Learners critically engage with AI-generated content to check sources, justify arguments, and ensure factual accuracy.
- Learners document their AI interactions by providing transcripts or links and reflect on the experience through a written report.
- Alternatively, learners could create a video presentation, mind map, or another medium of their choice to explain their verification process, illustrate connections to relevant theories or frameworks, and highlight how they identified and addressed any hidden assumptions.
Strategies for Preventing AI-Related Critical Thinking Decline
While AI offers significant educational benefits, improper implementation can undermine critical thinking development. Some researchers argue that reliance on AI fosters superficial engagement, with learners depending on these technologies to generate data and formulate ideas rather than developing their own analytical skills. For instance, Chan noted that the use of generative AI might lead to a decline in writing and critical thinking skills due to over-reliance on automated tools (Chan, 2023). Related concerns include the risk of oversimplifying tasks and compromising the depth of critical analysis, along with the ethical implications of AI in education. Hading et al. argued that while AI can support basic comprehension, it can inadvertently promote a passive learning approach, leading learners to prioritize ease of information access over critical evaluation (Hading et al., 2024). Pratiwi et al. echoed this perspective, noting that excessive facilitation by AI might diminish learners' critical engagement with content (Pratiwi et al., 2025). The research-based strategies below help educators identify and address common pitfalls that lead to intellectual dependency and superficial engagement with complex ideas.
Addressing Unsupervised AI Use and Academic Shortcuts
Learners sometimes use AI as a "shortcut," generating solutions without analysis and engaging superficially with content. This approach bypasses the cognitive struggle necessary for deep learning and prevents the development of essential analytical skills.
Problem:
In a calculus class, a learner pastes a derivative problem into AI and copies the result directly into their homework, never working through the limit definition themselves or understanding the underlying concept. This behavior eliminates the mathematical reasoning process essential for conceptual mastery.
Strategy: AI Literacy and Prompt Library Development
Teach learners how to frame productive prompts that guide learning rather than provide answers. Provide a shared library of scaffolded prompts for derivative problems that require step-by-step reasoning. Develop custom AI GPTs that guide learners through problem-solving processes rather than delivering final answers. This approach maintains cognitive engagement while leveraging AI's supportive capabilities.
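As one possible starting point, the sketch below gathers a small scaffolded prompt library and a set of guiding instructions in Python. The wording is illustrative rather than prescriptive; the GUIDE_INSTRUCTIONS text is the kind of language that could be pasted into a custom GPT's configuration so the assistant scaffolds reasoning instead of supplying final answers.

```python
# Hypothetical sketch of a shared prompt library for derivative problems.
# The wording is illustrative; instructors would adapt it to their course.

DERIVATIVE_PROMPTS = [
    "Before differentiating, identify which rule applies (power, product, quotient, chain) and explain why.",
    "Write the limit definition of the derivative for this function. Which term will you simplify first?",
    "Show your next algebraic step only, and state what property justifies it.",
    "Compare your final result with a classmate's or a textbook example. Where could a sign error hide?",
]

# Guiding instructions that could serve as a custom GPT's configuration text.
GUIDE_INSTRUCTIONS = (
    "You are a calculus study guide. Do not state the final derivative. "
    "Walk the learner through one step at a time, ask them to justify each "
    "step, and point out errors with questions rather than corrections."
)

if __name__ == "__main__":
    print(GUIDE_INSTRUCTIONS)
    for prompt in DERIVATIVE_PROMPTS:
        print("-", prompt)
```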
Combating Task Oversimplification and Surface-Level Analysis
AI's tendency to reduce complex problems to quick answers can bypass the cognitive struggle needed for deep critical analysis, preventing learners from grappling with nuance and developing sophisticated analytical skills.
Problem:
In a philosophy course, a learner asks AI for a summary of Kant's categorical imperative and accepts the brief bullet points, missing the nuance of autonomy, universality, and moral law that emerges through careful reading and debate. The learner fails to engage with the philosophical complexity essential for understanding ethical reasoning.
Strategy: Scaffolded AI Debrief Protocols
Require learners to expand AI summaries into detailed analyses that demonstrate deep engagement with complex ideas. Provide examples contrasting brief AI summaries with comprehensive analyses. Develop follow-up questions that help learners reflect on the limitations of simplified explanations and the importance of grappling with complexity. Create assignments that require learners to identify what AI summaries miss and why deeper engagement is necessary.
Preventing Cognitive Offloading and Intellectual Dependency
Over-reliance on AI for brainstorming, recall, and analysis can erode independent reasoning and long-term knowledge retention, creating intellectual dependency that undermines learners' confidence in autonomous thinking.
Problem:
In a biology assignment, learners repeatedly ask AI to list steps of the Krebs cycle instead of memorizing and drawing the cycle themselves, weakening their ability to recall and explain it in future assessments or lab discussions. This pattern prevents consolidation of the foundational knowledge necessary for advanced biological understanding.
Strategy: Hybrid Brainstorming and Reflection Requirements
Mandate initial "no-AI" mind-maps or sketches of biological processes, then allow AI to refine and enhance learner work. Require reflection logs comparing learner-generated content to AI input, noting what new insights emerged versus what was already known. Create assignments that alternate between independent work and AI-assisted analysis to maintain cognitive independence. Implement regular "AI-free" assessments to ensure learners can function autonomously.
Overcoming AI Avoidance and Building Digital Literacy Confidence
Fear of AI errors or bias can lead learners to avoid AI tools entirely, causing them to miss opportunities to develop the digital literacy and critical evaluation skills they need in contemporary academic and professional contexts.
Problem:
In a history project on Cold War events, learners avoid using AI because they worry it will produce inaccurate dates, choosing instead to copy from other sources rather than critically verifying facts themselves. This approach eliminates opportunities to develop crucial source evaluation and fact-checking capabilities.
Strategy: Safe AI Playground and Bias Audit Training
Host a low-stakes AI experimentation session where learners share both accurate and flawed AI outputs without grade consequences. Pair this with an "AI Bias Audit" assignment where learners practice identifying inaccuracies and developing verification techniques using trusted references. Create structured activities that build confidence in critical evaluation while teaching essential digital literacy skills. Provide frameworks for systematic fact-checking and source verification that learners can apply across disciplines. The WMU Libraries have resources to support this strategy, including an Information Literacy Microcourse.
Enas Aref is a former AI Graduate Fellow with the WMU Office of Faculty Development and doctoral instructor in the Industrial & Entrepreneurial Engineering & Engineering Management Department at WMU, and is currently an Assistant Teaching Professor of Management & Technology in the College of Engineering and Innovation at Bowling Green State University. Aref's research interests include Ergonomics and Human Factors, STEM Education, Artificial Intelligence, User Experience (UX), and Engineering Management.
References
Chan, C. K. Y. (2023). A comprehensive AI policy education framework for university teaching and learning. International Journal of Educational Technology in Higher Education, 20(1), 38. https://doi.org/10.1186/s41239-023-00408-3
Guo, Y., & Wang, Y. (2024). Exploring the effects of artificial intelligence application on EFL students' academic engagement and emotional experiences: A mixed-methods study. European Journal of Education. https://doi.org/10.1111/ejed.12812
Hading, E. F., Hardianto Rustan, D. R., & Ruing, F. H. (2024). EFL students' perceptions on the integration of AI in fostering critical thinking skills. Glens. https://doi.org/10.61220/glens.v2i1.466
Liu, W., & Wang, Y. (2024). The effects of using tools on critical thinking in English literature classes among learners: An intervention study. European Journal of Education. https://doi.org/10.1111/ejed.12804
Pratiwi, H., Suherman, S., Hasruddin, & Ridha, M. (2025). Between shortcut and ethics: Navigating the use of artificial intelligence in academic writing among Indonesian doctoral students. European Journal of Education. https://doi.org/10.1111/ejed.70083
Walter, Y. (2024). Embracing the future of Artificial Intelligence in the classroom: The relevance of AI literacy, prompt engineering, and critical thinking in modern education. International Journal of Educational Technology in Higher Education, 21(1), 15. https://doi.org/10.1186/s41239-024-00448-3