The Real Ways People Use AI: Surprising Insights From Analyzing Billions of Interactions

OpenRouter’s analysis of over 100 trillion tokens reveals a reality far from the “AI productivity” hype. While open‑source models are chiefly used for role‑play and interactive storytelling, programming queries surged from 11 % to more than 50 % of all interactions in 2025, with developers feeding massive codebases to LLMs. Chinese‑origin models now account for roughly 30 % of global usage, and “agentic” inference (multi‑step, tool‑enabled tasks) grew to more than 50 % of sessions. The “glass‑slipper” effect shows that models which solve high‑value problems early earn sticky loyalty, and demand remains largely insensitive to price. These trends are reshaping AI product roadmaps and market dynamics.

Over the past year, the narrative around artificial intelligence has been dominated by headlines proclaiming an “AI productivity revolution.” The story has been one of AI writing emails, generating code, and summarizing documents at the click of a button. Yet a new data‑driven study suggests the reality of how people actually use AI may be starkly different from the hype.

OpenRouter, a multi‑model inference platform that routes requests across more than 300 large language models (LLMs) from over 60 providers—including OpenAI, Anthropic, DeepSeek and Meta’s LLaMA—has released a comprehensive analysis of real‑world AI usage. By examining metadata from more than 100 trillion tokens—equivalent to billions of individual conversations—the study uncovers patterns that challenge many of the assumptions driving today’s AI investments.
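
To make the routing concrete, here is a minimal sketch of what a request through a multi‑model gateway can look like. It assumes OpenRouter’s OpenAI‑compatible chat‑completions endpoint; the endpoint URL, model identifier and prompt below are illustrative and not taken from the report.

```python
# Minimal sketch of a routed request, assuming an OpenAI-compatible
# chat-completions endpoint; the model ID and prompt are illustrative.
import os
import requests

API_URL = "https://openrouter.ai/api/v1/chat/completions"  # assumed endpoint

def ask(model: str, prompt: str) -> str:
    """Send a single-turn prompt to one of the routed models."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
        json={
            "model": model,  # e.g. "deepseek/deepseek-chat"
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(ask("deepseek/deepseek-chat", "Summarize this pull request in two sentences."))
```

The point of a gateway like this is that the same call shape works across providers; switching models is a one‑string change, which is also why a router’s metadata provides a useful cross‑vendor sample of real usage.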

OpenRouter’s ecosystem is markedly global: more than half of its traffic originates outside the United States, and the platform serves millions of developers worldwide. Crucially, the analysis was performed without accessing the actual text of any interaction, preserving user privacy while still revealing high‑level behavioral trends.

[Chart: Open‑source AI models now account for roughly one‑third of total usage as of late 2025, with notable spikes following major releases.]

The role‑play revolution nobody saw coming

The most surprising finding is that more than half of all interactions with open‑source models are not related to productivity at all. Instead, users are spending the majority of their time on role‑play, interactive storytelling and gaming scenarios.

“This counters the conventional belief that LLMs are primarily used for coding, email drafting or document summarization,” the report states. “In practice, many users engage with these models for companionship, narrative exploration and structured role‑playing.”

Data show that 60 % of role‑play tokens fall under specific gaming or creative‑writing contexts, indicating a robust and previously invisible use case that is reshaping product roadmaps for AI companies.

Programming’s meteoric rise

While role‑play dominates open‑source usage, programming has become the fastest‑growing category across the entire AI landscape. At the start of 2025, coding‑related queries represented just 11 % of total AI interactions; by the end of the year, that share had surged to more than 50 %.

Prompt lengths for coding tasks have expanded dramatically—from an average of 1,500 tokens to over 6,000 tokens, with some requests exceeding 20,000 tokens. This reflects developers feeding entire codebases into models for deep analysis, debugging, and architectural review.
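
To put those prompt sizes in perspective, the sketch below estimates how many tokens a repository would occupy if pasted into a single prompt. It uses the rough four‑characters‑per‑token rule of thumb rather than any model’s real tokenizer, and the repository path is hypothetical.

```python
# Rough estimate of how many tokens a codebase would occupy in one prompt,
# using the common ~4-characters-per-token rule of thumb (real tokenizers vary).
from pathlib import Path

CHARS_PER_TOKEN = 4

def estimate_prompt_tokens(repo_root: str, suffixes=(".py", ".ts", ".go")) -> int:
    total_chars = 0
    for path in Path(repo_root).rglob("*"):
        if path.is_file() and path.suffix in suffixes:
            total_chars += len(path.read_text(errors="ignore"))
    return total_chars // CHARS_PER_TOKEN

tokens = estimate_prompt_tokens("./my-service")  # hypothetical repository path
print(f"~{tokens:,} tokens, versus the 6,000-token average prompt the study reports")
```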

Anthropic’s Claude models have captured roughly 60 % of this programming traffic throughout 2025, though competition from Google, OpenAI and a growing cohort of open‑source options is intensifying.

[Chart: Programming‑related queries exploded from 11 % of total AI usage in early 2025 to over 50 % by year’s end.]

The Chinese AI surge

Another striking shift is the rapid rise of Chinese‑origin models, which now represent roughly 30 % of global AI usage, more than double their 13 % share at the start of 2025. Models from DeepSeek, Alibaba’s Qwen and Moonshot AI have each seen double‑digit growth, with DeepSeek alone processing 14.37 trillion tokens during the study period.

Simplified Chinese is the second‑most common language for AI interactions globally, accounting for 5 % of total usage behind English at 83 %. Asian AI spending climbed from 13 % to 31 % of the global total, and Singapore emerged as the second‑largest country by usage after the United States.

The rise of “agentic” AI

The study introduces the concept of “agentic inference,” marking the next evolutionary step for LLMs. Rather than answering isolated questions, models are increasingly being used to execute multi‑step tasks, invoke external tools, and maintain state across extended conversations.

Interactions classified as “reasoning‑optimized” grew from virtually zero in early 2025 to over 50 % by year’s end. In practical terms, users are no longer asking an AI to “write a function”; they are asking it to “debug this codebase, identify performance bottlenecks, and implement a solution,” and the model can carry out the full workflow.
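
In implementation terms, “agentic” use usually means a loop in which the model proposes a tool call, the application executes it, and the result is fed back until the task is complete. The sketch below is a generic illustration with a stubbed model and a single pretend tool; it is not OpenRouter’s or any vendor’s actual agent framework.

```python
# Generic agent loop: the model proposes tool calls, the application runs them
# and feeds results back, until the model returns a final answer.
# call_model() is a stub standing in for any chat-completions API.

def run_tests(target: str) -> str:
    """Pretend tool: run a test suite and report the outcome."""
    return f"2 of 48 tests failing in {target}"

TOOLS = {"run_tests": run_tests}

def call_model(messages: list) -> dict:
    """Stub: a real model would return either a tool call or a final answer."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "run_tests", "args": {"target": "payments module"}}
    return {"final": "Two regressions found; fix proposed for the slow query path."}

def agent(task: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = call_model(messages)
        if "final" in action:
            return action["final"]
        result = TOOLS[action["tool"]](**action["args"])  # execute the tool
        messages.append({"role": "tool", "content": result})
    return "Step budget exhausted."

print(agent("Debug this codebase, identify performance bottlenecks, and implement a fix."))
```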

The “glass‑slipper” effect

OpenRouter’s researchers identified a retention pattern they call the “glass‑slipper” effect. Models that are first to solve a high‑value, previously unmet problem earn disproportionate loyalty. For example, users who adopted Google’s Gemini 2.5 Pro in June 2025 retained at a rate of roughly 40 % after five months—significantly higher than later cohorts.
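
The article does not describe the study’s exact methodology, but cohort retention of this kind is typically computed by taking everyone who first used a model in a given month and checking how many are still active some months later. A small illustrative sketch on made‑up records:

```python
# Illustrative cohort-retention calculation on made-up usage records;
# the study's actual methodology is not described in the article.

# (user_id, model, month) tuples standing in for per-user activity logs
events = [
    ("u1", "gemini-2.5-pro", "2025-06"), ("u1", "gemini-2.5-pro", "2025-11"),
    ("u2", "gemini-2.5-pro", "2025-06"),
    ("u3", "gemini-2.5-pro", "2025-08"), ("u3", "gemini-2.5-pro", "2025-11"),
]

def retention(events, model, cohort_month, check_month):
    cohort = {u for u, m, t in events if m == model and t == cohort_month}
    active = {u for u, m, t in events if m == model and t == check_month}
    return len(cohort & active) / len(cohort) if cohort else 0.0

# Share of June 2025 adopters still active five months later
print(retention(events, "gemini-2.5-pro", "2025-06", "2025-11"))  # 0.5 on this toy data
```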

This finding challenges the notion that market share is purely a function of first‑to‑market timing. Solving a critical pain point early creates a “sticky” user base that embeds the model into daily workflows, making switching costly both technically and psychologically.

Cost doesn’t matter (as much as you’d think)

Demand appears to be relatively price‑inelastic: a 10 % price reduction generated only a 0.5–0.7 % uptick in usage. Premium models from Anthropic and OpenAI, priced between $2 and $35 per million tokens, continue to see high adoption, while budget alternatives like DeepSeek and Google’s Gemini Flash, priced under $0.40 per million tokens, also maintain strong volumes.
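
Put in textbook terms, those numbers imply a price elasticity of demand of roughly 0.05 to 0.07 in absolute value, far below the threshold of 1 that separates elastic from inelastic demand. The back‑of‑the‑envelope arithmetic:

```python
# Back-of-the-envelope elasticity implied by the reported figures:
# elasticity = (% change in usage) / (% change in price)
price_change_pct = -10.0           # a 10 % price reduction
usage_change_pct = (0.5, 0.7)      # the observed 0.5-0.7 % uptick in usage

for u in usage_change_pct:
    print(f"{u:.1f} % uptick -> |elasticity| ~ {u / abs(price_change_pct):.2f}")
# prints ~0.05 and ~0.07; well under 1, so usage barely responds to price
```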

“The LLM market does not behave like a pure commodity at this stage,” the report concludes. Users weigh cost against reasoning quality, reliability and the breadth of capabilities, preserving premium pricing for higher‑performing models.

What this means going forward

The OpenRouter analysis paints a nuanced picture that diverges from the prevailing industry narrative. AI is unquestionably reshaping software development, but it is simultaneously spawning entirely new categories of human‑computer interaction—most notably role‑play and creative storytelling.

Geographically, the market is diversifying, with Chinese providers now commanding a sizable share of global usage. Technologically, the shift toward agentic inference signals that LLMs are evolving from static text generators into autonomous problem‑solvers capable of orchestrating complex workflows.

Finally, the “glass‑slipper” effect underscores that competitive advantage will increasingly hinge on a model’s ability to address high‑impact use cases early, rather than merely being first to launch.

For investors, developers and enterprises, understanding these real‑world usage patterns—rather than relying on benchmark scores or marketing hype—will be essential as AI becomes more deeply embedded in everyday workflows. The gap between perceived and actual AI adoption is wider than most realize, and this study provides a data‑backed roadmap for navigating the next phase of the AI economy.

Original article, Author: Samuel Thompson. If you wish to reprint this article, please indicate the source: https://aicnbc.com/14252.html
