CodeRabbit today added support for the open-source Visual Studio Code editor to its platform, which employs generative artificial intelligence (GenAI) to review code.

Company CEO Harjot Gill said adding support for VS Code will significantly expand the reach of a platform that is already integrated with AI coding tools such as Cursor and Windsurf.

CodeRabbit is primarily used within Git-based repositories to review code as commits are made. That approach ensures that all code destined for a production environment has been reviewed by an AI platform that is accessed via a natural language chat interface.

Alternatively, application developers can embed CodeRabbit within the tooling they use to write code, where it reviews code in real time.

The overall goal is to surface routine mistakes so that developers can catch them themselves, which in turn frees human code reviewers to focus on more complex issues, said Gill.

That capability is going to be especially critical in an era where the volume of code being generated using AI tools will far exceed the ability of anyone to manually review that code, he added.

CodeRabbit traverses each code repository in the Git platform, along with prior pull requests and related Jira and Linear issues, using a code graph analysis capability that identifies code dependencies across files, as well as custom instructions defined using Abstract Syntax Tree (AST) patterns. Used by nearly 5,000 customers to cut the time spent reviewing code in half, CodeRabbit can also pull dynamic data from external sources such as a large language model (LLM) as needed, noted Gill.
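To give a sense of what an AST-pattern rule does in general (this is an illustrative sketch, not CodeRabbit's actual rule engine or syntax), the snippet below uses Python's standard-library `ast` module to flag one routine mistake, a bare `except:` clause, by walking a file's syntax tree rather than matching raw text:

```python
import ast

# Illustrative only: a minimal AST-pattern check in the spirit of the
# custom-rule approach described above (not CodeRabbit's actual engine).
# It flags bare `except:` handlers, a common routine mistake.

def find_bare_excepts(source: str) -> list[int]:
    """Return the line numbers of bare `except:` handlers in `source`."""
    tree = ast.parse(source)
    return [
        node.lineno
        for node in ast.walk(tree)
        # A bare `except:` has no exception type attached to the handler.
        if isinstance(node, ast.ExceptHandler) and node.type is None
    ]

snippet = """
try:
    risky()
except:
    pass
"""

print(find_bare_excepts(snippet))  # -> [4]
```

Because the check operates on the parse tree, it ignores formatting and comments and cannot be fooled by string literals that merely look like code, which is why AST patterns are a more reliable basis for custom review rules than regular expressions.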

Additionally, CodeRabbit will generate crisp summaries of code changes, whether they affect a single line or add an entire new feature.

In contrast, existing code review tools are much more limited, he added. Modern code review cycles require developers to understand not just the quality and behavior of newly generated code, but also adherence to organization-level coding practices, the best practices and syntax of individual programming languages, file dependencies that impact other parts of the codebase and conformance to any required security policies.

It’s not clear how widely AI is being applied to DevOps workflows, but a Futurum Research survey finds 41% of respondents now expect that generative AI tools and platforms will be used to generate, review and test code.

While much of the initial focus on using AI to automate tasks has been on writing code, AI tools that review code will become just as widely adopted, if for no other reason than code reviews are a task that few developers enjoy, said Gill. Many organizations have historically set up two-person teams to review each other’s code. Now, more of that time can be allocated to either writing code or fine-tuning the code that might have been generated by an AI tool.

It’s still relatively early as far as the adoption of AI in DevOps workflows is concerned, but as more tedious tasks are automated, it’s becoming clear that much of the drudgery that conspires to slow the pace of application development will steadily be reduced, if not eliminated outright.
