
Sonar this week launched an Agent-Centric Development Cycle (AC/DC) framework that promises to modernize continuous integration (CI) in the age of artificial intelligence (AI) coding.

Announced at an online Sonar Summit, the AC/DC framework incorporates multiple tools and platforms the company has developed to better secure software supply chains. These include a Sonar Context Augmentation tool, now in beta, that provides real-time guidance to AI coding tools, and a SonarQube Agentic Analysis service for analyzing code that can be accessed via a command line interface (CLI) or a Model Context Protocol (MCP) server provided by Sonar.

Additionally, Sonar is now making available in beta a SonarQube Remediation Agent that automatically repairs quality gate blockers in pull requests.

Sonar is also now making generally available its previously launched SonarQube Architecture service, which provides DevOps teams with a system-level view of application blueprints that can then be used to automatically detect architectural drift.

Finally, Sonar is making available in beta a tool for builders of large language models (LLMs), dubbed Sonar Sweep, that remediates the code being used to train them to help ensure higher-quality output.

Lauren Hanford, a product leader for Sonar, said that as AI coding tools are embedded into workflows, there is a clear need to change CI workflows. Precisely how those workflows will need to adapt to accommodate AI coding agents has not yet been fully determined, but the need for greater governance, along with tools to discover and remediate vulnerabilities in code, is becoming apparent, she added.

For example, in many cases AI agents will be directly invoking tools and platforms to perform a task in collaboration with other AI agents, but a set of best practices for managing those interactions has yet to be defined, said Hanford. Without such practices, DevOps teams might soon find their workflows overwhelmed by continuous pull requests, she noted.

In the absence of those best practices, it also becomes more difficult to onboard an AI agent in a way that focuses it on a specific task or role, rather than allowing it to continually access code that may now be outdated or simply represent technical debt that has not yet been addressed, said Hanford.

Mitch Ashley, vice president and practice lead for software lifecycle engineering for the Futurum Group, said the era of simply scanning code and reporting the results later is over. If agents are writing the code, security and quality analysis must live inside the agent’s context window, not only as a post-commit process, he added.

This is where security, observability, and governance friction is eliminated by design, noted Ashley.

The pace at which AI coding tools are being adopted naturally varies, but just about every DevOps team by now has at least experimented with these tools, and a significant portion is now moving to incorporate them into workflows. The challenge, of course, is finding a way to govern those workflows to ensure that the output being created doesn't wind up causing more trouble than it's worth.
