The productivity impact of coding agents
A new study from the University of Chicago finds that companies merged 39% more PRs after Cursor's agent became the default.
Improving agent with semantic search
Semantic search significantly improves coding agent performance, yielding 12.5% higher accuracy, better code retention, and fewer dissatisfied user requests.
Composer: Building a fast frontier model with RL
Composer is our new agent model designed for software engineering intelligence and speed.
Improving Cursor Tab with online RL
Our new Tab model makes 21% fewer suggestions while achieving a 28% higher accept rate.
1.5x faster MoE training with custom MXFP8 kernels
Achieving a 3.5x MoE layer speedup with a complete rebuild for Blackwell GPUs.
Iterating with shadow workspaces
Hidden windows and kernel-level folder proxies let AIs iterate on code without affecting the user.
More problems
Several exciting problem areas for the next phase of AI programming.
Editing Files at 1000 Tokens per Second
A new model and inference method for high-accuracy full-file edits at 1000 tokens/s.
Our problems
A list of problems we are excited to solve for Cursor.
Inference characteristics of Llama
A primer on inference math and an examination of the surprising costs of Llama.