Better AI models enable more ambitious work
We are interested in understanding how improvements in AI models change how developers work. In particular, to what extent do developers perform more of the tasks they were already doing, and to what extent do better models enable work that was out of reach before?
To answer that question, we partnered with Professor Suproteem Sarkar from the University of Chicago Booth School of Business to study the work habits of developers at 500 companies using Cursor, from July 2025 through March 2026. This eight-month window included the releases of Opus 4.5 and GPT-5.2, two models that delivered step-change advances in AI coding capability.
Our paper finds that better AI leads to greater AI demand. This is consistent with a Jevons-like effect, where gains in efficiency increase total consumption rather than reducing it. AI usage, defined as average weekly messages per user, increased 44% during the study period.
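The headline metric can be made concrete with a small sketch. The event schema and function below are illustrative assumptions for exposition, not the paper's actual data pipeline:

```python
from collections import defaultdict

def avg_weekly_messages_per_user(events):
    """events: list of (week, user_id) tuples, one per message (hypothetical schema).

    Returns the mean over weeks of (messages that week / active users that week),
    i.e. average weekly messages per user."""
    per_week = defaultdict(lambda: [0, set()])  # week -> [message_count, active_users]
    for week, user in events:
        per_week[week][0] += 1
        per_week[week][1].add(user)
    weekly = [count / len(users) for count, users in per_week.values()]
    return sum(weekly) / len(weekly)

# Toy data: two users sending 2, then 3, messages each in a week
early = [(0, "a"), (0, "a"), (0, "b"), (0, "b")]                      # 4 msgs / 2 users = 2.0
late = [(1, "a"), (1, "a"), (1, "a"), (1, "b"), (1, "b"), (1, "b")]   # 6 msgs / 2 users = 3.0

baseline = avg_weekly_messages_per_user(early)
final = avg_weekly_messages_per_user(late)
growth_pct = (final - baseline) / baseline * 100  # 50.0 in this toy example
```

A 44% increase in this metric means the average user's weekly message volume grew by nearly half over the eight months.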


The increase wasn’t immediate or uniform. We observed that developers first used better models to do more work of similar complexity, and only later began taking on more complex tasks. Moreover, the shift was especially concentrated in industries like finance, media, and advertising, where competitive forces and greenfield opportunities may have spurred adoption.
Media, software, and finance lead the way
Usage increased in every sector we studied, but the gains were larger in some industries than others. In particular, media and advertising saw the biggest jump, with a 54% increase in messages per user, followed by software and developer tools (+47%) and finance and fintech (+45%).
We hypothesize that in finance, better AI can create an arms-race dynamic: once one firm uses AI to gain a trading edge, others face competitive pressure to follow. In media and advertising, the mechanism may be different, with more capable models opening greenfield opportunities that firms then move to exploit.


A shift right in complexity
Initially, developers used the improved models to do more of the same work, but after a lag of 4–6 weeks they began directing those models at more complex tasks. Overall, the number of “low complexity” messages increased 22% over the study period, while the number of “high complexity” messages grew 68%, with most of that growth occurring during the last six weeks.
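The shape of that lag can be illustrated with a small sketch. The weekly counts below are hypothetical numbers chosen for exposition, not the study's data:

```python
# Hypothetical weekly counts of "high complexity" messages over a 12-week window:
# roughly flat at first, then rising late, mirroring the 4-6 week lag described above.
weekly_high = [100, 101, 99, 102, 100, 103, 105, 110, 125, 140, 155, 168]

total_growth = weekly_high[-1] - weekly_high[0]   # growth over the full window
late_growth = weekly_high[-1] - weekly_high[-7]   # growth over the last six weeks
share_late = late_growth / total_growth           # fraction of growth that came late
```

In this toy series, over 90% of the total growth lands in the final six weeks, the pattern the study describes for high-complexity usage.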
In the paper, we hypothesize that the delay reflects both the time it takes developers to discover what a better model can do, and the need for firms to reorient their workflows around new capabilities.


A changing task distribution
As AI improves at code generation, the developer’s job shifts to managing that output. This change shows up clearly in our data, where we can measure how usage evolves across task categories. The largest increases were in documentation (+62%), architecture (+52%), code review (+51%), and learning (+50%), while more self-contained tasks like UI/styling grew far less (+15%).
This indicates that as AI-generated code expands codebase size, the need to document, understand, and review that code grows in proportion. Larger and faster-moving codebases also make it harder to manage how the pieces fit together, which may explain the sharp growth in cross-system tasks like architecture and deployment. More capable models may also make developers more willing to use agents for these cross-system tasks.


Expanding economic activity
A central question around AI adoption is whether it merely facilitates existing work, or also opens up new productive opportunities. Our study indicates that it does both, but that expansion may eventually be the bigger story.