Enhancing AI-Driven Development with Automated Code Reviews
Cursor has become incredibly popular for its AI agent, which lets you use powerful models to write code. The agent takes your prompt, understands the context, and starts building, creating files and implementing features automatically. However, with that much auto-generated code, errors are inevitable. Cursor often throws random errors, and you either have to prompt it again or wait for it to recover and fix them.
Now, imagine if after each segment of code it generates, you could run a review that checks for issues such as security leaks or flawed integrations. This is especially important because AI agents often leave major vulnerabilities behind, and catching them early is absolutely critical. That is exactly where Code Rabbit comes in.
Code Rabbit was originally built to review pull requests and GitHub commits, offering suggestions based on what you pushed, but it has now launched a powerful extension for VS Code, Cursor, and Windsurf. You simply plug it in, and after every implementation step from Cursor, you run the extension. It analyzes the changes, identifies important refactors, highlights security concerns, and suggests improvements. You then feed those suggestions back to the agent, and it handles the fixes. This significantly tightens up your workflow, improves your application's security, and helps you reach a stable final product with fewer bugs.
Installation
Installation is simple. In Cursor, open the side panel and go to Extensions:

1. Search for Code Rabbit.
2. Locate the extension and install it.
3. When you launch it, you will be prompted to sign in.
4. Click to sign in, complete the GitHub authentication in your browser, and you are all set.
The entire process is straightforward and has no complications.
A Live Demonstration
For a live demo of how this actually works, I am building a full e-commerce store, admin panel and all. This should give you a clear idea of the workflow we are aiming for with this tool and how it assists throughout the process.
What this tool actually does is read the changes you have made through git. First, you need to initialize git in the directory you are working in. Whenever you commit changes, which means saving them, Code Rabbit will step in, report that it has detected changes, and ask if you want them reviewed. You simply click yes, and it begins reviewing them and providing suggestions.
If you want, you can copy those suggestions back into Cursor, and both tools continue working together seamlessly. This is essentially what the entire workflow builds towards. You can see right here that these are the changes I have made that have not yet been committed.
Initial Setup
But before we go into that, let me show you how to initialize this setup.
First of all, in a new directory, for instance, the project directory where you have not started anything yet, you simply type:
```bash
git init
```
This initializes the git repository. After that, you stage your changes using the `git add` command; the dot signifies that you are adding the entire repository.
```bash
git add .
```
Once that is done, you use `git commit`, making sure to include a message describing what the commit is about. This is the standard format. You do not need to worry, because there are also ways to automate this, which I will show you, but it is important to go through these initial steps. For example, you can simply write a message like:

```bash
git commit -m "initial commit"
```
This will commit everything in the repository. Once you do this, Code Rabbit will activate and do its part. Every time you make a change, run `git add` followed by `git commit` to save it. Each saved change will be detected by Code Rabbit, which will run a review, and you can then pass those results over to Cursor to continue your development process smoothly.
Troubleshooting a Known Bug
While testing it out, I did encounter a big issue. I spent a lot of time trying to fix it, and in the end, I went into their Discord server. Someone had posted a solution there because many people were facing the same issue; it seems this is currently a known bug.
Apparently, you are supposed to have your branch visible in the UI where all your branches are listed. Once that is done, you can commit locally and continue working as needed. The tool will focus on the selected branch, and every time you commit to that branch, it will be able to review those changes.
This is the solution I ended up using. Go to the Source Control area, and the top menu will appear. Select the option to create a new branch. Since I have already made a lot of progress, I want a copy of the main branch, so I create a new one and name it the "test" branch. Now the menu appears, and any changes I make will be shown right here.
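For reference, the same branch can be created from the terminal. The snippet below rehearses it in a scratch repo; the path is illustrative, and `git switch -c` needs git 2.23 or newer (older versions use `git checkout -b` instead):

```shell
#!/bin/sh
set -e
# Scratch repo just to demonstrate branch creation (path is illustrative).
mkdir -p /tmp/branch-demo && cd /tmp/branch-demo
git init -q
git config user.email "you@example.com"
git config user.name "You"
git commit -q --allow-empty -m "initial commit"  # a branch needs at least one commit
git switch -c test            # create the "test" branch as a copy of the current one
git branch --show-current     # prints the name of the branch we just switched to
```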
For example, in this file, I just add a comment that says:
```javascript
// this is a test
```
I add the comment and save the file. Next, I open the terminal, and I am going to add this file to the branch and commit it with the message "test".
```bash
git add .
git commit -m "test"
```
That is done. We now have a commit message labeled "test," and we want to test how this behaves. So we proceed, and now you can see that the review has started. This is the file we committed. The review has been completed, and I believe it has generated a few comments. Yes, it added a comment that says, "Remove stray test comment." It recognizes that this is just a test comment and not something important. I just wanted to walk you through this fix so you can understand it clearly.
The Review Workflow in Action
Now, let us go back to the project, and I will show you how the rest of the workflow plays out.
Okay, so I just implemented several other features from my implementation plan. First of all, let me try to open the project to see if it actually runs properly. You can see that there are some errors, and now I want to check whether the tool can actually detect and address these errors during its review process. I believe this part of the work falls under phase 4 of the implementation plan.
We can now see the recent changes that were made, so let us proceed with the review. You can observe that it is now setting everything up, analyzing all the changes, and reviewing the modified files. The files that were changed are listed here at the bottom. Let us take a look at what it finds and whether its review can help resolve the existing errors.
We are now ready to run the review. You can see the list of files that were reviewed. Opening any of these shows the suggestions it has provided for each one. Clicking a suggestion opens it in detail and shows exactly what the tool recommends. The next step in this workflow is to hand these suggestions over to the Cursor AI agent.
Applying Fixes with AI
To apply those comments, open one up and click the "Fix with AI" button. This copies a set of instructions, labeled "codegen instructions" at the bottom, to your clipboard. After that, you simply paste those instructions into the Cursor agent.
The tedious part of this process is that you have to do it one by one for each individual comment. You need to provide each comment separately, and as you already know, these AI models generally do not perform well when they are asked to handle multiple tasks at the same time.
Note: One thing I highly recommend is switching to the Gemini 2.5 Pro model because in my experience, that is the only model that can reliably handle multiple instructions at once. This allows you to go ahead and give it as many comments as you like.
I have now given it three comments from the address form, and at this point I am just going to paste the codegen instructions into Cursor and see what it generates and how it fixes the issues.
The Final Result
Okay, so this is the store that was finally built. There was an issue with components not rendering properly on the client side, but that has been resolved. The review process also played a key role in tightening up the site's security, especially in the area of password storage, which was being handled incorrectly earlier. Overall, the site is now fully functional. All the animations are working exactly as expected.
One thing I do regret is not using Shadcn components. I had instructed the agent to manually create all the components, which in hindsight was not the best decision. That aside, everything looks good and is functioning well. There are still a few features left to implement. As I mentioned earlier, I am currently at phase four of the implementation plan, so the final styling and polish will likely be completed in the upcoming phases.
Structuring Your Workflow with an Implementation Plan
At the start of the article, I mentioned that I would show you how to apply my implementation plan approach to your own projects by breaking them into small and manageable chunks. This is exactly what I meant.
You begin by briefly describing your project and defining its specifications. In my case, I was building a Next.js front-end application that would eventually be integrated with a FastAPI backend. Your project might be different, but the core idea remains the same: you describe what you are building and which tech stack you are using.
Since I was focused on building just the Next.js front end, I instructed the agent to list all the required pages and modules inside a `structure.md` file that I had already prepared. After that, I asked it to generate a 10-phase implementation plan based on that structure.
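As a hypothetical excerpt (your phases and names will differ; nothing below is the actual generated plan), the resulting plan might look something like this:

```markdown
# Implementation Plan

## Phase 1: Project scaffolding (status: complete)
- Set up the Next.js app, global layout, and routing

## Phase 2: Product catalog (status: complete)
- Product listing and detail pages based on structure.md

## Phase 4: Checkout flow (status: in progress)
- Cart, address form, and order confirmation pages

## Phase 10: Final styling and polish (status: pending)
```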
To make the workflow more autonomous and avoid repeating instructions every time, you add a rule inside your project's Cursor settings and configure it so the agent always stays attached to it. From there, the agent follows the implementation plan step by step. If it needs any clarification, it is expected to ask before proceeding.
The process follows these steps:

1. Once a phase begins, the agent marks that phase as "in progress."
2. It completes the implementation.
3. It then commits the changes to git locally.
4. At that point, you receive a prompt from Code Rabbit indicating that it is time to start the review process.
5. You run the review, collect the suggestions, and paste them back as feedback. These suggestions now act as the user's input.
6. After that, you instruct Cursor to continue; it returns to the implementation plan, marks the current phase as complete, and proceeds to the next one.
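The steps above can be captured in a project rule. The rules location and frontmatter fields vary by Cursor version, so treat this as a sketch; `.cursor/rules/workflow.mdc` and `implementation-plan.md` are assumed names:

```markdown
---
description: Phase-by-phase implementation workflow with review cycles
alwaysApply: true
---

- Follow implementation-plan.md one phase at a time; ask before proceeding if anything is unclear.
- When starting a phase, mark it as "in progress" in the plan.
- After implementing the phase, commit the changes to git locally.
- Wait for review feedback pasted by the user, apply the fixes, then mark the phase complete and move on to the next one.
```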
This creates a smooth and structured workflow that progresses phase by phase with integrated review cycles at every step.
Potential Improvements
One improvement that could really enhance this process, in my opinion, is better retrieval of the review comments. At the moment, there is a comments tab that displays everything, but if there were a way to automatically extract and paste those comments directly into cursor, the entire process would become much more efficient.
Join the 10xdev Community
Subscribe and get 8+ free PDFs that contain detailed roadmaps with recommended learning periods for each programming language or field, along with links to free resources such as books, YouTube tutorials, and courses with certificates.