Anthropic’s New AI Index Shows What Sets Top AI Users Apart
By Dan Fitzpatrick
Anthropic’s new data reveals the specific behaviors that separate effective AI users from passive ones, with direct implications for how we prepare students.
The central finding of Anthropic’s new AI Fluency Index is that the better the output looks, the less people question it. Published this week, the index is the first large-scale attempt to measure not just how people use AI, but how well they use it. Researchers at the AI company analyzed 9,830 conversations on Claude during a single week in January 2026, tracking 11 specific behaviors that represent effective human-AI collaboration. What they found should make us all think about AI literacy.

The framework they used was created by professors Rick Dakan and Joseph Feller in collaboration with Anthropic. The tracked behaviors included clarifying goals, providing examples of good output, and questioning AI reasoning.

Most People Already Iterate

The most common behaviors were iteration and refinement: 85.7 percent of conversations showed users building on previous exchanges rather than accepting the first response. This means most people are not treating AI as a vending machine, writing a prompt and accepting whatever comes out. They are treating it as a work in progress, a collaborative session between human and machine.

Conversations with iteration showed roughly double the rate of other fluency behaviors compared to those without. Users who iterated were 5.6 times more likely to question Claude’s reasoning and four times more likely to flag missing content. This matters because people who push back get better results.

The Artifact Effect

In about 12 percent of the conversations, Claude produced artifacts: products created by the AI, such as code, a document, an interactive tool, or an app. These tangible outputs are designed to look and feel like finished products.

The research found that in...