Composer history lets you access previous composer sessions after restart. You can also edit and resubmit from previous messages within a session.
We have made slight improvements to Debug with AI and added back @Lint Errors in Chat.
VS Code 1.93.1: Cursor is now based on VS Code 1.93.1.
Python auto import for Cursor Tab is much more stable in this release.
Switching models is a lot easier with model search (Cmd-option-/) in the chat, composer, and cmd-k input boxes.
Composer now only applies files that are in context to prevent hallucinations.
Using cursor . with WSL should now be more stable.
UPDATE (0.42.1 - 0.42.5): Fixes the following upstream security issue: CVE-2024-43601. Also fixes a few composer bugs and a bug with Cursor Tab. Allows composer to auto apply to files not in its context. Also includes additional mitigations to CVE-2024-48919. Reduces a few long-tail connection errors. Adds escape hatch when Claude predicts the wrong filepath in chat.
This update fixes the following security issue: CVE-2024-45599.
Cursor Tab now auto-imports symbols in Python files! We've also significantly improved the Cursor Tab stability.
Composer Notepads (previously called Projects) can now include tagged files and be referenced in chat, as well as composer.
Composer can now be added to the AI pane. This release also includes many stability fixes and image support!
Apply and Composer are slightly faster in this release.
We've added support for using Cursor on Macs over Remote SSH.
UPDATE (0.41.1 - 0.41.3): Improves onboarding UX, fixes a bug with composer cancellation, fixes the Apply button not working on some codeblocks, and fixes a bug where Cursor Tab sees malformed edits.
We have a new chat UX! Excited for you to try it out and share your thoughts.
Composer is now default-on and available to all Pro/Business users by hitting cmd+I. We've added Composer Projects (beta), which allows you to share instructions among several composers.
We've also trained a new Cursor Tab model that's smarter and more context-aware.
Auto imports (beta) for Cursor Tab for TypeScript files - when Tab suggests an unimported symbol, we'll now auto-import it to your current file. You can enable it in Settings > Features > Cursor Tab!
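As a hypothetical illustration (the file and symbol below are invented, not from the release notes): if you accept a Tab suggestion that references an unimported symbol, the import is inserted for you, leaving the file looking something like this.

```typescript
// Hypothetical result after accepting a Cursor Tab suggestion that
// referenced `readFileSync` before it was imported: the import line
// below was inserted automatically along with the suggestion.
import { readFileSync } from "fs";

export function loadText(path: string): string {
  // The suggested body uses the auto-imported symbol.
  return readFileSync(path, "utf8");
}
```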
UPDATE (0.40.1 - 0.40.4): Fixes a bug with apply on remote ssh, a few chat bugs, speeds up Cursor Tab for Europe/Asia users, fixes some outstanding Cursor Tab bugs and notifications hiding the chat input, and includes a fix for Cursor asking for permissions for files in your ~/Library folder on MacOS (upstream issue: microsoft/vscode#208105)
Cursor Tab (previously called Copilot++) defaults to chunked streaming. This build also includes several Cursor Tab speedups. More to come in future builds!
Concurrent composers support, a composer control panel, and various bug fixes, including one where accepted files were deleted.
Faster Cursor Tab Suggestions!
UPDATE (0.39.1 - 0.39.6): Fixes several Cursor Tab rendering bugs, a bug where the file explorer was not responsive, and a bug where Cursor Tab would hang.
Copilot++ now has chunked streaming (currently in Beta)! It will surface edits faster in smaller chunks. To enable it, click the settings gear and enable "Chunked Streaming" under Features > Copilot++.
We've also added a file picker, arrow key navigation, and a model toggle to Composer. This release also patches a few outstanding Composer bugs.
VS Code 1.91.1: Cursor is now based on VS Code 1.91.1.
New Default Model: We've made Claude 3.5 Sonnet the default model for users.
UPDATE (0.38.1): Fixes a bug where OpenAI API Key users would be migrated to Claude 3.5 Sonnet
This build comes with a new experimental multi-file editing feature. To enable it, click the settings gear, head to Beta, and activate "Composer." To use it, hit Cmd+I. We'd love to hear your thoughts.
Remote tunnels are now supported! Remote SSH support is also more robust (now supports multiple proxy jumps, among other things).
Adds context pills to chat messages, so you can see what will be/was used
Cmd K context-building improvements
Fixes partial completions with Copilot++ on Windows/Linux
UPDATE (0.35.1): Disables Copilot++ partial accepts by default and makes the keybinding configurable (go to Cursor Settings > Features > Cpp to re-enable). Makes gpt-4o the default model.
Gemini 1.5 Flash is available in long-context mode
Accept partial completions with Copilot++
Better performance of Copilot++ on linter errors
Toggleable rerankers on codebase search
GPT-4o in Interpreter Mode
UPDATE (0.34.1-0.34.6): Fixes long context models in the model toggle, an empty AI Review tab, Copilot++ preview bugs, the Mac icon size, and some remote ssh issues.
Stability: This build fixes a connection error problem that was consistently affecting some users. It should also improve the performance of Cursor on spotty internet.
Command-K Autoselect: We've also added automatic selection for Command-K! This means you can now press Command-K, and it will automatically select the region you're working on, though you can still select manually if you prefer.
UPDATE (0.33.1-0.33.3): Fix to settings toggles, fix to Copilot++ diffbox performance, onboarding tweaks.
Copilot++ UX: Suggestion previews now have syntax highlighting, which we find makes it much easier to quickly understand the changes.
Cursor Help Pane (Beta): You can also ask Cursor about Cursor! The Cursor Help Pane has information about features, keyboard shortcuts, and much more. You can enable it in Settings > Beta.
New GPT-4 model: As of a couple of days ago, you can try out gpt-4-turbo-2024-04-09 in Cursor by toggling it on in Settings > Models.
.cursorrules: You can write down repo-level rules for the AI by creating a .cursorrules file in the root of your repository. You might use this to give context on what you're building, style guidelines, or info on commonly-used methods.
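As a sketch of what a .cursorrules file might contain (the project details below are invented for illustration):

```
# .cursorrules at the repo root
We are building a TypeScript web app with React and Express.
- Prefer functional components and hooks over class components.
- Use async/await instead of .then() chains.
- All public functions should have JSDoc comments.
```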
UPDATE (0.32.1-0.32.7): Fixes a performance issue with the new Copilot++ syntax highlighting, changes AI Notes to be default disabled, changes the naming of the main Copilot++ model to legacy, fixes Copilot++ being slower over SSH, fixes to the Copilot++ preview box.
Long Context Chat (Beta): This is a new experimental feature that lets you talk with lots of files! To enable it, head to Settings > Beta. Then, select "Long Context Chat" in the top right of a new chat and try @'ing a folder or the entire codebase.
Fixes: This release patches a bug where empty / partial responses are shown in chat.
UPDATE (0.31.1 - 0.31.3): Adds back in AI Review (alpha), fixes the "Cursor Settings" menu item, and fixes a bug where @web doesn't return a response.
Faster Copilot++: We've made Copilot++ ~2x faster! This speed bump comes from a new model / faster inference. ~50% of users are already on this model, and it will roll out to everyone over a few days. If you'd like to enable the model immediately, you can control your model in the bottom bar of the editor.
Stable Claude Support: All the newest Claude models are available for Pro and API key users. Head to Settings > Models to toggle them on. Pro users get 10 requests / day for free and can keep using Claude at API-key prices for subsequent requests.
Team invites: We made it a bit easier for you to invite your colleagues to your Cursor team. You can send these from the editor's settings or at cursor.com/settings.
Admin improvements: Team admins can now mark themselves as unpaid users and can see the last time team members used the product.
New Settings: We moved all our settings to be accessible by the gear in the top right. No more "More" tab!
If you’re a Pro or Business user, you can add "claude-3-opus" as a custom model in the Settings page and use 10 fast requests per day for free (unlimited slow, but the delay increases exponentially).
We expect to roll out a more permanent solution (including API key users) very soon.
AI Notes enabled by default (hold shift on any symbol), better in-editor chat, auto-execute interpreter mode, better onboarding styling, nicer feedback modal, and a few stability fixes.
UPDATE (0.29.1): Fixes a bug where the Copilot++ sometimes would not show a suggestion even if one existed, a bug where the hint line would sometimes cover the ghost text, and a bug where AI Notes would not work on Windows.
Linter: You can now turn on an AI linter in the "More" tab beside Chat. It'll scan your file for small bugs every time you save.
Interpreter Mode: We've made some big improvements to the backend powering interpreter mode! It should now be much better at using tools and understanding your project.
UPDATE (0.27.1-0.27.4): Fixes to Windows build, chat context UI, onboarding.
AI Previews: this is an experimental new code reading feature. After enabling it in the "More" tab beside Chat, just hold shift to generate some quick notes about the symbol you're on. If you'd like us to dedicate more time to this direction, please let us know.
Other changes:
Fine-grained chat replies (start by hovering over the area of the response you want to reply to)
Copilot++ quality of life improvements (show ghost text more often, toggle on/off on the status bar, make it easier to see the suggestion box)
Smoother onboarding (fix Windows settings import, option to import folder/window state)
Hold down cmd-I over a selection to heal the code with GPT-4. Useful for writing pseudocode and having the AI convert it into correct code. Please let us know if you find it useful!
Using @Web in chat will give the AI the ability to crawl the web! The tools it can use include a search engine and a documentation site crawler.
This feature is still experimental. We're very interested in improving the ability of the AI to understand external libraries, and your thoughts will help us improve :).
Both pro and API key users can also try out gpt-4-0125-preview by configuring the model under Settings > OpenAI API > Configure Models. We're testing the new model right now for pro users to see if it performs better than all older versions of gpt-4. If so, it will roll out as the default experience.
UPDATE (0.24.3-0.24.4): Adds ability to configure OpenAI base URL, fixes getcursor/cursor#1202.
"cursor-fast": This is a new model available in command-k and chat. Expect it to be a bit smarter than gpt-3.5, and with many fewer formatting errors.
Apply button: We've added some polish to the "apply codeblock" experience in chat.
Chat lints: If the AI suggests a code change in chat that involves a made-up code symbol, we'll underline it. Available for Python, TypeScript, and Rust.
More chat symbol links: When the chat references a code symbol, you'll often be able to click directly to it.
UPDATE (0.23.3-0.23.9): Fixes to Command-K, changelog auto-opening, editing very long lines with Copilot++, the "delete index" button, connection errors being silenced, and proxied authentication.
Cursor-Fast is a newly trained model that sits between gpt-3.5 and gpt-4 in coding capabilities, with very fast response times (roughly gpt-3.5 speed).
We'll be working to improve its performance over the coming weeks. It counts the same as gpt-3.5 usage.
Hold down command, press and release shift, and continue holding down command. This will trigger the AI to rewrite code around your cursor; you can think of it as a manually triggered GPT-4-powered Copilot++. You can use it to write pseudocode and have the AI correct it, or for tedious refactors where Copilot++ doesn't quite suffice.
Cursor is now based on VS Code 1.85.1, which, among other things, includes floating editor windows. Just drag and drop an editor onto your desktop to try it out.
@ previews: We made it easier to see what codeblock you're @'ing.
Copilot++: We've continued to improve the Copilot++ ghost text experience. Surprisingly, many of us now enjoy using Copilot++ without any other autocomplete plugin installed.
AI Review (beta): This is a new experimental feature that lets GPT-4 scan your git diff or PR for bugs. You can enable it in the "More" tab beside chat. Feedback is much appreciated.
UPDATE (0.20.1-0.20.2): We added TLDRs to make it easier to sort through the bugs flagged by the AI review and fixed a bug with "Diff with Main."
Better context chat: in particular, followups are now smarter!
Faster Copilot++: a few hundred milliseconds faster, through various networking optimizations. We still have several hundred additional milliseconds to cut here.
More reliable Copilot++ changes: less flashing, better highlighting of what's new.
Image Support in Chat: You can now drag and drop images into the chat to send them to the AI.
Interpreter Mode Beta: You can now enable Interpreter Mode in the "More" tab. This gives the chat access to a Python notebook, semantic search, and more tools.
@ folders: You can now use the @ symbol to reference specific folders! We'll try to pick out the most relevant code snippets to show the AI.
Copilot++ Improvements: We've spent some time improving the latency of Copilot++, and you can change the Copilot++ keybinding to not be Option/Alt. More to come here soon, especially on the model itself!
Experimental feature: go to "More" > "Interpreter Mode" to enable it. Gives the model access to a Python notebook where it can take actions in the editor. Please give us feedback in the Discord! We'd love to know if you find it useful.
Integrates GPT-V into chat for the editor. This means you can take screenshots/images and drag them into the chat input box to have Cursor use them as context.
This is an experimental feature and very limited by capacity, so you may see errors during traffic spikes.
Copilot++ improvements: Includes green highlights to see what Copilot++ has added, the ability to accept multiple Copilot++ suggestions immediately one after another, support for Copilot++ on SSH, and fixes to how Copilot++ UI interacts with autocomplete plugins.
Bug fixes: Fixed a bug where Cmd-k could get into a bad state when removing text at the top of a file, and another that was causing some files to not be indexed.
Command-dot: you can now use the Command-dot menu to have Command-K fix lint errors inline.
New models: you can plug in your API key to try out the newest gpt-4 and gpt-3 turbo models. We're evaluating the coding skills of these models before rolling out to pro users.
Apply chat suggestions: click on the play button on any code block to have the AI apply in-chat suggestions to your current file.
Copilot++ (beta): this is an "add-on" to Copilot that suggests diffs around your cursor, using your recent edits as context. To enable it, go to the "More" tab in the right chat bar. Note: to cover the costs of the AI, this is only available for pro users.
This is very experimental, so don't expect too much yet! Your feedback will decide which direction we take this.
Cursor is now based on VS Code 1.83.1. This ensures that the newest versions of all extensions will work without problem in Cursor. Thank you to everyone who urged us to do this on the forum!
Also, there's an experimental bash mode: enable it in the settings, and let the chat answer questions with the help of running bash commands. If you find it useful, please let us know, and we will spend more time on making it production-ready!
Update: this change resulted in a problem with SSHing into old Linux distros. This has now been fixed!
Bug fixes: (1) .cursorignore now completely respects .gitignore syntax, (2) codebase queries use the embeddings index if >= 80% of it is indexed, instead of requiring the entire thing to be indexed, (3) removed fade-in animation on startup, (4) no longer overrides cmd-delete in the terminal, (5) fixes a problem where cmd-F randomly has the case-sensitive option enabled, (6) inline gpt-4 is turned off until we figure out a better UX, (7) even more stable and fast indexing, (8) progress indicator in search and extensions, (9) fixed a bug where an incorrect bearer token was passed to the server.
Indexing should now be faster, more stable, and use less of your system resources. You can also configure ignored files in a .cursorignore. The controls are in the "More" tab.
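Since .cursorignore follows .gitignore pattern syntax, a hypothetical file might look like this (the paths below are invented examples):

```
# .cursorignore: keep these out of the index
node_modules/
dist/
*.log
!keep-this.log
```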
Cmd-k is now in the terminal! A bit hackily implemented, but surprisingly useful.
Ask about git commits and PRs using @git in chat!
Use /edit in the chat to edit a whole file (if less than 400 lines). Expect the edits to be fast and GPT-4-quality. This uses non-public models, and is for now only available to users who do not use their own API key.
Bugfixes! Fixed the "ejected from slow mode" UI, added auto-switching logic for the model type when switching on API, improved @ symbol speed, fixed Windows keycommand to be Ctrl-Shift-Y instead of Ctrl-Y, and more.
You can now use an early preview of /edit in the chat! It's significantly faster at editing entire files than just using cmd-k. The /edit feature is currently only supported if you do not use your own API key, since it relies on non-public models.
You can now alternate between diff and text responses in Cmd-K. This can be helpful for clarifying the model's thinking behind a diff or for getting quick inline answers to questions about a file.
Ask the model a question about an edit with option+enter, or bring in context from the chat with @chat! This is an early preview, so please let us know what you think.
The defaults for Cursor's Python support are different from Pylance's, which has affected several users. We've made them closer to the Pylance defaults in this update.
The main addition in this update is better docs support. This means you can add and remove docs and inspect the urls
that are actually being used for each doc you have uploaded. You'll also be able to see what webpages end up being shown to GPT-4 to provide you an answer.
You can also paste a url into the chat and the model will automatically include it in the context being used.
Teams can also share private docs.
Staged Rollouts
Following this update, future updates should come as staged rollouts. This will mean greater guarantees of stability and more frequent updates.
Long files in chat
We continued to improve the experience of chatting with large files. If you @ multiple files that are too large to fit in GPT-4's context window, we'll intelligently pick the most relevant chunks of code to show the model.
Bug fixes:
Copy/paste chat text from Jupyter
Some chat focus issues
UI tweaks
Better state management - prevents crashes from the editor using too much memory
We no longer store any full files in memory and prevent users from @'ing very large files (> 2MB)
This should reduce any significant memory issues that users have been experiencing.
Updates to docs (try pasting a link in chat, you can delete/edit docs, you can see citations), @ symbol performance on long files in chat should improve, and more.
Applies the patch from Github across all your WSL (Windows Subsystem for Linux) distros, either automatically or through the "Fix WSL" command palette command.
You can now reply to Cmd-K outputs, making it much easier to have the model revise its work.
If you @ reference a long file that will be cut off by the context limit, you'll be given the option to automatically chunk the file and scan it with many GPTs.
Codeblocks and code symbols in "with codebase" responses will now often be clickable.
Follow-up chat messages to "with codebase" will keep the codebase context.
Nicer error messages in the chat! Fewer annoying popups.
Activity bar elements can now be reordered with drag-and-drop.
SSH support is now more robust! Please continue to let us know if you are experiencing any SSH problems.
The cmd-k edits now support followups! This is an early preview — please let us know of any annoyances or bugs, or if you preferred the no-followup mode. Please please send all feedback you have in the Discord channel!
Improved linter! Please give us feedback on the linter suggestions using the smiley faces. If you would like, you can make it more aggressive by going to the More tab and then clicking on "Advanced linter settings" at the bottom. Let us know what you think in the Discord.
If you press ⌘/^+↩️ on any line, you'll now get fast gpt-4-powered completions!
We know that sometimes all of us want copilot to write an entire function or a big chunk of code.
But copilot can be slow and sometimes just not smart enough :(. So we're trying to solve this with a
new completion experience powered by gpt-4. Just press ⌘/^+↩️ and you'll get a long completion from gpt-4.
Better support for remote-ssh
Remote-ssh is now built into Cursor. You do not need to change anything; it should just work :)
We know this has been a big blocker for many users that rely on remote machines for development.
If you are still running into issues, please let us know and we will fix it ASAP.
AI linter
The AI linter is now enabled for everyone on pro! The AI will highlight suspicious parts of your code in blue. You can also add your own lint rules that are easy to express in natural language but aren't covered by traditional linters.
This nightly build comes with experimental interface agent support!
The goal: you write an interface specification, and an agent writes both the tests and the implementation for you. It makes sure that the tests pass, so you don't even need to look at the implementation at all.
We think this may enable a new kind of programming that's different from what we're all used to. Please experiment with it and let us know your thoughts in the Discord channel.
How to use it:
It only works in Typescript with vitest or mocha as test runners right now.
Hit Cmd-Shift-I, and give your new interface a name.
Write the methods you want your interface to have.
Hit Cmd-Shift-Enter, and the AI will write the interface for you!
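As a concrete sketch of the flow above (the Stack interface and ArrayStack name are invented for illustration; the agent would also generate the vitest or mocha tests for you): you might write an interface spec like this, and the agent would produce an implementation along these lines.

```typescript
// Hypothetical interface spec you might write for the agent.
interface Stack<T> {
  push(item: T): void;
  pop(): T | undefined;
  size(): number;
}

// A sketch of the kind of implementation the agent might generate
// after Cmd-Shift-Enter; normally you wouldn't need to read it,
// since the agent verifies it against the generated tests.
class ArrayStack<T> implements Stack<T> {
  private items: T[] = [];
  push(item: T): void {
    this.items.push(item);
  }
  pop(): T | undefined {
    return this.items.pop();
  }
  size(): number {
    return this.items.length;
  }
}
```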
Welcome to the first nightly release! It comes with agents, which we aren't releasing to the general public yet because we aren't convinced that they are useful. If you like them, please let us know what you use them for!
Cmd+K's UI has been changed: it's in-editor, "sticky," and @-symbol-compatible.
We hope it helps you stay in flow and more quickly iterate on your prompts.
(Also, you can now use up/down arrows for history in chat.)
Also, Cursor's AI will now use popular documentation to improve the answers to your questions. For example, if you ask it "how do I grab all s3 buckets with boto3?" it will search over the boto3 docs to find the answer. To add your own documentation or explicitly reference existing docs, type '@library_name' in chat.
Bug fixes:
Long code selections no longer brick the editor
Auto-fixing errors no longer brings up the problems view (in particular, this fixes an annoying bug if you have auto-fix on save turned on)
The chat has been revamped! You can now use @ symbols to show files/code/docs to the AI. The chat history is improved, it's easier to see what the AI can see, and codeblocks auto-format on paste.
You can now have the AI read documentation on your behalf! This will improve its ability to answer questions about your favorite libraries. To use this feature, simply press the "Docs" button in the top right of the chat pane.
Switching between models is a lot easier, and the transition to gpt-4 is a lot smoother.
Please give us feedback!!
Please keep the bug reports rolling! We really are listening!
We've added a new feedback button at the top right of the app. We really do listen to your feedback and your bug reports! We've fixed a lot of bugs in the last few weeks, and we're excited to keep improving the product. We made this modal to make it easier to report feedback. Please keep the feedback coming!
One-Click Extension Import from VS Code (beta). As a highly requested feature, we're excited to present the beta version of one-click extension imports!
🧠 Alpha feature: Ask questions about your entire repo 🛠️. We are experimenting with ⌥+enter in the chat! The feature allows the model to think deeply about your question, search through files, and deliver a well-crafted answer. While it's in alpha, we're working hard to enhance this feature in the coming weeks. We'd love to hear your feedback!
Bug Fixes
Improved prompting for edits and generates
Fixed login bugs
Added the ability to hide the tooltip (Cursor config > Advanced > Chat/Edit Tooltip)
Extended prompt length for project generation
GPT-4 support now available for project generation