How Much AI Is Too Much?
We need better ways to measure overdependence on AI. Here, I write about my recent research proposal on exactly this topic.
Artificial intelligence is everywhere in a student's life. Our recent paper, based on real-world usage data [1], shows how much students depend on it and what they use it for (almost everything academic). It's the chatbot that helps them brainstorm an essay and the algorithm that curates their social media feed. This new reality offers incredible tools for learning and creativity, but it also presents a serious challenge: how do we embrace the power of AI without losing the critical thinking and emotional skills that college is supposed to build?
This is the central question behind a new research proposal I recently submitted, aimed at understanding and improving our relationship with AI. The project introduces a framework called "Calibrated Dependence," which seeks to find the "sweet spot" where AI helps us without hurting us.
The Double-Edged Sword of AI
There are plenty of positives to generative AI in a student's life. But recent research [2] also shows there's a flip side: what happens when we rely too much on these tools? Researchers are concerned about "metacognitive laziness," where students produce polished work without deeply engaging with the material, ultimately learning less [3]. The risk is outsourcing not just the tedious work, but the "desirable difficulties" that are crucial for building strong critical thinking skills.
This challenge isn't just academic. The perfectly curated, often AI-generated images and videos we see on social media can intensify social comparison, which is linked to body dissatisfaction and other negative mental health outcomes. Even if this effect is limited today, it may well be widespread within five years.
Finding the Balance: The "Calibrated Dependence" Framework
Right now, we're often stuck between two extremes: banning AI altogether or embracing it without question. This research proposes a more thoughtful path forward. The "Calibrated Dependence" framework is built on a simple but powerful idea: the relationship between AI assistance and its overall benefit follows an inverted-U curve.
Let’s use the figure above as reference. As you increase the level of AI assistance (the x-axis), the utility, or benefit, goes up. But at the same time, a cost to your autonomy (your ability to think for yourself; the red line in the figure) also rises. We can define a CD-Score as the difference between utility and autonomy cost, and it takes the shape of an inverted-U curve. The goal is to find the peak of that curve: the "optimal point" where you get the most benefit without sacrificing your independence.
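To make the shape of the curve concrete, here is a minimal sketch in Python. The specific functional forms (a logarithmic utility and a quadratic autonomy cost) are illustrative assumptions of mine, not the proposal's actual model; any benefit with diminishing returns paired with an accelerating cost produces the same inverted U.

```python
import numpy as np

# Level of AI assistance, from 0 (none) to 1 (fully delegated).
assistance = np.linspace(0, 1, 101)

# Illustrative assumptions, not the proposal's actual model:
# utility grows with diminishing returns; autonomy cost accelerates.
utility = np.log1p(5 * assistance)       # concave benefit of AI help
autonomy_cost = 2.0 * assistance**2      # convex cost to independent thinking

# CD-Score = utility minus autonomy cost -> an inverted-U curve.
cd_score = utility - autonomy_cost

# The "sweet spot" is the peak of that curve.
optimal = assistance[np.argmax(cd_score)]
print(f"Optimal assistance level under these assumptions: {optimal:.2f}")
```

Under these toy assumptions the peak lands at a moderate level of assistance (around 0.41): more AI help than none, but well short of full delegation.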
To figure out where this "sweet spot" lies, the project will look at two key areas of student life:
Academic Problem-Solving: Where the main goal is to use AI to learn effectively without losing critical thinking skills.
Social Media & Body Image: Where the focus is on how AI-driven content affects well-being and self-perception.
What This Means for Students, Educators, and a Better Future with AI
This research is about more than just studying a problem; it's about finding solutions. By understanding how students are really using AI, both actively and passively, the project aims to develop a "Calibrated Dependence Coach." This isn't just another AI tool. It's a set of smart nudges and prompts designed to help users think more critically about how they're using AI.
For example, it might pause and ask a student to reflect on why an AI suggestion improves their work, or it might gently remind a social media user that an image is AI-generated and ask how it makes them feel.
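As a rough illustration of what such a nudge could look like in practice, here is a hypothetical sketch of mine, not the project's actual design: a coach tracks how much of a draft was taken verbatim from AI suggestions and triggers a reflection prompt past a threshold.

```python
# A hypothetical nudge rule for the "Calibrated Dependence Coach".
# The metric, threshold, and prompt text are illustrative assumptions,
# not the project's actual design.

REFLECTION_THRESHOLD = 0.6  # fraction of the draft taken verbatim from AI

def maybe_nudge(accepted_ai_chars: int, total_chars: int) -> str | None:
    """Return a reflection prompt once AI reliance crosses the threshold."""
    if total_chars == 0:
        return None
    ai_fraction = accepted_ai_chars / total_chars
    if ai_fraction > REFLECTION_THRESHOLD:
        return ("You've accepted most of the AI's suggestions in this draft. "
                "Before moving on: why does the latest suggestion improve your work?")
    return None

# Example: 750 of 1,000 characters in a draft came from accepted AI suggestions.
print(maybe_nudge(accepted_ai_chars=750, total_chars=1000))
```

The point of a rule like this is not surveillance but timing: the reflection question arrives exactly when reliance is drifting past the "sweet spot," which is when a nudge is most likely to matter.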
The ultimate goal is to create evidence-based guidelines for designing more responsible and helpful AI. This will empower students to make informed decisions about their AI use, and it will give educators and developers the tools they need to build a digital world that enhances our abilities, rather than diminishing them. This research is a crucial step toward ensuring that as AI becomes more integrated into our lives, it serves to amplify our humanity, not replace it.
[1] T. Ammari, M. Chen, S. Zaman, and K. Garimella, “How students (really) use ChatGPT: Uncovering experiences among undergraduate students,” arXiv preprint arXiv:2505.24126, 2025.
[2] H. Bastani, O. Bastani, A. Sungu, H. Ge, O. Kabakcı, and R. Mariman, “Generative AI can harm learning,” Proceedings of the National Academy of Sciences, vol. 122, 2025.
[3] Y. Fan, L. Tang, H. Le, K. Shen, S. Tan, Y. Zhao, Y. Shen, X. Li, and D. Gašević, “Beware of metacognitive laziness: Effects of generative artificial intelligence on learning motivation, processes, and performance,” British Journal of Educational Technology, vol. 56, no. 2, pp. 489–530, 2025.