How People Are Actually Using AI in 2025, and What It Means
Large-scale usage data from more than half a dozen academic studies gives us some clarity.
How are people actually using AI tools like ChatGPT, Claude, and Gemini? The popular narratives tend to be negative. We hear about students using AI to cheat on assignments, professionals (academics among them) taking intellectual shortcuts, and a creeping fear that these tools will make us all dumber. These concerns are valid and deserve discussion, but they don't paint the whole picture. In fact, they might not even be the most interesting part of the story.
To get past the speculation, I looked at recent academic papers and large-scale usage datasets to understand how people are actually using AI.
The Problem with Just Asking
Before we dive into what the data says, we need to address a major roadblock in understanding AI adoption: people aren't always truthful about their usage. Research based on self-reported surveys often produces wildly different results. A recent study [5] highlights the role of social desirability bias, the tendency for people to answer questions in a way they think will be viewed favorably by others. This bias is particularly salient for measuring AI use because of the stigma mentioned above (being associated with cheating or taking a shortcut).
This study found a significant gap between how much students say they use AI and how much they believe their peers use it. While around 60% of students admitted to using AI, they estimated that 90% of their peers did. This suggests that the stigma around AI is real and that self-reports likely underestimate its true prevalence. To get a clearer picture, we need to look at data that goes beyond what people are willing to admit.
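To see why such a gap is plausible, here is a minimal simulation sketch. The prevalence and honesty rates below are hypothetical, chosen only to illustrate how social desirability bias pushes self-reports below the true rate while peer estimates stay closer to it; they are not figures from the study.

```python
import random

random.seed(0)

# Hypothetical inputs, purely to illustrate the mechanics described in [5]:
TRUE_USE_RATE = 0.85   # assumed true share of students who use AI
ADMIT_PROB = 0.70      # assumed chance a true user admits it when asked directly
N = 10_000             # simulated respondents

admitted = 0
for _ in range(N):
    uses_ai = random.random() < TRUE_USE_RATE
    if uses_ai and random.random() < ADMIT_PROB:
        admitted += 1

print(f"Assumed true prevalence: {TRUE_USE_RATE:.0%}")
print(f"Self-reported rate:      {admitted / N:.1%}")  # expected ~59.5%, biased low
# Peer estimates carry no personal stigma, so they tend to land much closer to
# (or even above) the true rate, producing exactly the kind of gap reported above.
```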
The Big Picture: Who's Adopting AI and Why?
The behavioral evidence comes from millions of real-world traces: Microsoft's Copilot telemetry (tens of millions of enterprise interactions across knowledge-work tasks) [7], Anthropic's Claude.ai usage data (over one million anonymized student conversations, plus millions more general sessions) [2,6], the public WildChat dataset of roughly one million ChatGPT conversations from diverse global users [8], and detailed chat logs from Rutgers students [1] and other academic settings [2]. Across all of these, the patterns are remarkably consistent. The behavioral datasets are complemented by nationally representative survey data from the United States [3] and a large-sample survey from Denmark [4], which capture self-reported use in both work and everyday life.
Across these datasets, five usage categories dominate, and the numbers line up strikingly well across very different contexts.
Shaping language.
This is the single largest category in most logs. In the public ChatGPT dataset, roughly 62% of first-turn prompts in English are requests for writing assistance: anything from drafting marketing copy to rewording an email to adjusting tone for a specific audience. In Microsoft's Copilot data, "writing" consistently appears as the most frequent user goal, spanning corporate reports, slide decks, and correspondence. Among students in Anthropic's million-conversation sample, 39.3% of education-related chats involve creating or improving written content, including essays, lab reports, and assignment drafts. We found similar proportions among students in our own ChatGPT dataset [1].
Structuring thought.
Summarization, outlining, and reframing information come next. In the ChatGPT dataset, summarization and "explain this in simpler terms" requests make up about 14% of first-turn prompts, with substantial spillover into multi-turn sessions where users iteratively refine outlines. U.S. survey respondents report using AI heavily for "information gathering" and "summarizing", categories that top the list of workplace uses alongside writing [3]. Copilot logs show similar patterns, with summarization tasks frequently initiated even when the stated goal is different.
Explaining and tutoring.
Educational and conceptual explanation is a major slice of activity, especially for student users. Anthropic's student dataset finds that 33.5% of interactions focus on getting technical explanations or solutions, whether in computer science, math, or other STEM fields. Rutgers student logs echo this, showing sustained engagement when AI is used to clarify concepts or apply theories, often through multi-turn, corrective back-and-forth [1]. The key behavior here is iterative probing: asking the AI to explain again, offer examples, or fix a mistake, rather than taking the first output at face value.
Technical troubleshooting.
While smaller in absolute share than writing, this category is highly concentrated in certain domains. In ChatGPT's public logs, coding prompts account for around 6% of first turns, but in student datasets, especially in computer science courses, the share rises sharply. Anthropic's education sample finds computer science accounting for 38.6% of all usage, despite the field representing only ~5.4% of U.S. degrees, roughly a sevenfold over-representation. These interactions often mix code generation with debugging and logic correction, showing that troubleshooting matters as much as building from scratch.
Creative expansion.
Idea generation, brainstorming, and creative writing make up a smaller share in enterprise contexts but loom larger in public and hobbyist datasets. In ChatGPT's public logs, about 10–15% of prompts (depending on language) fall into creative or imaginative territory, from writing poetry to plotting stories to inventing marketing slogans. Even in educational contexts, students occasionally tap AI for imaginative work, such as recasting a dull assignment in a playful tone or generating metaphors to explain a concept.
Overall, these patterns converge across very different data sources: whether you're looking at corporate telemetry, opt-in public datasets, or student logs, the same broad buckets appear. The mix shifts with context: creative use is stronger outside of work, troubleshooting dominates in technical education, and summarization is core in corporate tools. But the underlying categories of AI-human collaboration remain consistent.
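None of the studies above ship their classifiers alongside the data, but the basic bucketing is easy to picture. Below is a toy, keyword-based tagger for first-turn prompts using the five categories just described. The keyword lists and sample prompts are invented for this sketch; the cited studies use far richer, often LLM-assisted labeling.

```python
# Toy first-turn prompt tagger for the five categories discussed above.
# Purely illustrative: keyword lists are invented, not taken from any cited study.
CATEGORIES = {
    "shaping_language":    ["rewrite", "reword", "draft", "email", "tone", "proofread"],
    "structuring_thought": ["summarize", "summary", "outline", "tl;dr", "simpler terms"],
    "explaining_tutoring": ["explain", "why does", "how does", "what is", "example of"],
    "troubleshooting":     ["error", "debug", "traceback", "doesn't work", "fix this code"],
    "creative_expansion":  ["brainstorm", "ideas for", "story", "poem", "slogan"],
}

def tag_prompt(prompt: str) -> str:
    """Return the first matching category, or 'other' if nothing matches."""
    text = prompt.lower()
    for category, keywords in CATEGORIES.items():
        if any(kw in text for kw in keywords):
            return category
    return "other"

if __name__ == "__main__":
    samples = [
        "Reword this email so it sounds friendlier",
        "Summarize this article in three bullet points",
        "Explain gradient descent with a simple example",
        "I get a KeyError traceback when I run this, fix this code",
        "Brainstorm ideas for a sci-fi short story",
    ]
    for s in samples:
        print(f"{tag_prompt(s):20s} <- {s}")
```

Real pipelines would classify whole conversations rather than single turns and would resolve overlaps (a prompt can both summarize and rephrase), but the category boundaries themselves map directly onto the buckets reported in the logs.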
What do these findings mean?
When we weave these threads together, we can see the faint outlines of our new reality.
First, the core behavioral pattern is augmentation, not automation. In enterprise logs, a large share of "user goals" are writing or gathering information, but the model's own "action" often shifts to coaching, advising, or teaching. That mismatch is the tell: the human keeps ownership of the task while delegating sub-steps such as drafting, outlining, rephrasing, proposing options, and sanity-checking. Public usage data and student logs echo the same thing. Even where people ask for an answer, the interaction often becomes iterative: explain → refine → verify → adapt. The practical implication is that AI is becoming a thinking scaffold: a way to compress the cost of getting to a decent first pass and then improving it with judgment.
Second, the productivity story is real but subtle. Current usage intensity suggests single-digit time savings at the worker level, on the order of 1–5% of total work hours in the aggregate, with heavier savings for active users. That sounds small until you consider two compounding effects: learning curves (people get faster at delegating the right sub-tasks) and process redesign (teams change what they produce and how they review it). The PC followed a similar arc: early incremental gains, then step-function gains after workflows, norms, and training caught up. If you want bigger impact, you don't need another model; you need better workflow integration, where AI is a default step for drafting, summarizing, or translating, and where verification is just as routine.
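A back-of-envelope decomposition makes the arithmetic concrete. The adoption and per-user savings figures below are hypothetical, not drawn from the cited papers; the point is only how a single-digit aggregate number can coexist with much larger gains for active users.

```python
# Back-of-envelope sketch: aggregate savings = adoption rate x per-user savings.
# Both inputs are assumed values for illustration only.
adoption_rate = 0.30          # assumed share of workers using AI regularly
active_user_savings = 0.10    # assumed fraction of their work hours saved
aggregate_savings = adoption_rate * active_user_savings

print(f"Aggregate savings across all workers: {aggregate_savings:.1%}")  # 3.0%
# 3% of total hours looks modest, yet active users in this scenario save ~10%
# of their week, and both factors tend to rise as workflows are redesigned.
```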
Third, we are in a period of cultural tension between stigma and utility. The fear of being seen as "cheating" is real, yet the practical benefits of AI are so compelling that people are using it anyway, often in secret. This tension will likely ease as AI becomes more integrated and our norms evolve, but for now, it shapes how we talk about (and fail to talk about) this technology.
Fourth, especially for academics, education needs to pivot from detection to design. The logs do not support a caricature of students pressing a "cheat" button and walking away; the modal behavior is multi-turn repair and explanation. That's the teachable moment. Redesign assignments so that AI can help but not carry the work, for example by grounding them in personal data or fieldwork that AI can't fake. Allow AI, require disclosure, and grade for thinking with tools. You'll get less cat-and-mouse and more learning. But that's a topic for a different blog post.
Finally, a note on the “we’ll all get dumb” narrative. Over-offloading is a real risk — the data shows users sometimes ask for answers when they’d learn more by wrestling with the problem. But the same data also shows a countervailing behavior: people use AI to raise the challenge level they take on, because the slog of the first draft or the first attempt is cheaper. The danger isn’t the tool; it’s unexamined use. If we teach people to delegate the mechanical parts, deliberate on the judgments, and document what they did and why, we’ll get the best of both worlds: faster throughput and stronger thinking.
The story of how we are using AI is not a simple one. It's a complex narrative of rapid, unequal adoption, of social stigma, and of a quiet but profound shift in how we learn, work, and create. We are probably (and hopefully) not outsourcing our brains. We are probably developing new ones, forging a partnership with technology that, for better or worse, will define the decades to come. The real challenge is not to fear the machine, but to understand the ghost in it: our own evolving aspirations, curiosities, and limitations, now reflected back at us as AI.
References
[1] Ammari, T., Chen, M., Zaman, S.M.M. and Garimella, K., 2025. How Students (Really) Use ChatGPT: Uncovering Experiences Among Undergraduate Students. arXiv preprint arXiv:2505.24126.
[2] Anthropic, 2025. Anthropic Education Report: How university students use Claude. Available at: https://www.anthropic.com/news/anthropic-education-report-how-university-students-use-claude.
[3] Bick, A., Blandin, A. and Deming, D.J., 2025. The Rapid Adoption of Generative AI. NBER Working Paper No. 32966.
[4] Humlum, A. and Vestergaard, E., 2024. The unequal adoption of ChatGPT exacerbates existing inequalities among workers. Proceedings of the National Academy of Sciences.
[5] Imas, A. and Ling, Y., 2025. Underreporting of AI use: The role of social desirability bias. University of Chicago, Booth School of Business.
[6] Tamkin, A., et al., 2025. Clio: Privacy-Preserving Insights into Real-World AI Use. Anthropic.
[7] Tomlinson, K., Jaffe, S., Wang, W., Counts, S. and Suri, S., 2025. Working with AI: Measuring the Occupational Implications of Generative AI. arXiv preprint arXiv:2507.07935.
[8] Zhao, W., Ren, X., Hessel, J., Cardie, C., Choi, Y. and Deng, Y., 2024. WildChat: 1M ChatGPT Interaction Logs in the Wild. In: International Conference on Learning Representations.