
Sysdig this week at the RSA Conference (RSAC) unveiled a runtime designed to make it possible to securely deploy artificial intelligence (AI) coding tools.

Jonas Rosland, director of the open source program for Sysdig, said the runtime makes it possible to monitor the activity of AI coding agents in real time, including potential credential risks. It also enables investigation of incidents involving AI agent activity, he added.

Additionally, the runtime can prevent AI agents from opening sensitive files or bypassing credential controls, and can block risky command-line arguments that weaken safeguards, such as those allowing unrestricted file writes.

Dangerous activity within developer environments, including reverse shells, binary tampering, and persistence mechanisms, can also be prevented.
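Sysdig is the original creator of the open source Falco runtime security project, and runtime controls of the kind described above are typically expressed as policy rules evaluated against system calls. A hypothetical Falco-style rule sketching credential-file detection might look like the following; the process names and file paths are illustrative assumptions, not Sysdig's actual policy:

```
# Illustrative list of AI coding agent process names (assumed, not official)
- list: ai_agent_procs
  items: [claude, aider, cursor-agent]

- rule: AI Agent Opens Credential File
  desc: Detect an AI coding agent process reading a credential file.
  condition: >
    open_read and proc.name in (ai_agent_procs)
    and (fd.name endswith ".aws/credentials" or fd.name endswith ".ssh/id_rsa")
  output: AI agent read a credential file (agent=%proc.name file=%fd.name)
  priority: WARNING
```

A rule like this fires the moment the agent process touches the file, rather than relying on the agent to honor any policy stated in a prompt.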

As AI coding tools are made available to professional and citizen developers alike, the likelihood of a cybersecurity incident involving these tools continues to rise. DevSecOps teams need to enable application developers to safely add plug-ins and skills to these tools. The challenge is that a skill can also include a malicious prompt that could, for example, instruct an AI agent to delete a database running in a production environment, noted Rosland.

The only way to prevent those types of attacks is to observe what an AI agent is doing at a granular level, added Rosland. Otherwise, the speed at which AI agents access directories, files and external resources will overwhelm and bypass existing safeguards.

Each application development team will need to determine how much to invest in AI coding tools, but a recent Futurum Group survey finds that 60% of respondents report their organization is now actively using AI to build and deploy software.

Mitch Ashley, vice president and practice lead for software lifecycle engineering for the Futurum Group, said AI coding agents operating in development environments have created a runtime threat surface that governance policy alone cannot contain. Sysdig’s agent runtime signals that controlling AI coding tool behavior requires execution-layer enforcement to provide real-time visibility into agent activity, he added.

For DevSecOps teams extending AI tools to citizen developers, the risk is concrete, noted Ashley. Malicious skills can instruct agents to access unauthorized credentials, tamper with binaries, or delete production data. Runtime enforcement is the control layer that determines what agents can actually execute, said Ashley.

No one knows for sure how proactive DevSecOps teams will be when it comes to securing AI coding tools. It may take a few more major incidents before the full scope of the threat is appreciated. In the meantime, DevSecOps teams should remind application developers who experiment with emerging technologies of the potential risks. The more isolated the application development environment used to write code with AI agents, the better. After all, the danger only truly manifests when AI agents start accessing every available data source, so the most important thing may be to create a sandbox where they are contained to the fullest extent possible.
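To make the sandboxing idea concrete, a minimal sketch of one of its building blocks is an allowlist check that confines an agent's file access to a designated directory. The root path and function name here are illustrative assumptions, not part of any specific product:

```python
import os

# Hypothetical sandbox root for an AI coding agent (illustrative path).
ALLOWED_ROOTS = ["/workspace/sandbox"]


def is_path_allowed(path: str) -> bool:
    """Return True only if the resolved path stays inside an allowed root.

    Resolving the path first defeats simple '..' traversal escapes.
    """
    resolved = os.path.realpath(path)
    return any(
        os.path.commonpath([resolved, os.path.realpath(root)])
        == os.path.realpath(root)
        for root in ALLOWED_ROOTS
    )


# Each file request from the agent would be checked before execution.
print(is_path_allowed("/workspace/sandbox/app/main.py"))  # inside the sandbox
print(is_path_allowed("/home/user/.aws/credentials"))     # outside: denied
print(is_path_allowed("/workspace/sandbox/../secrets"))   # escape attempt: denied
```

A real deployment would enforce this at the operating-system layer (containers, seccomp, or kernel-level instrumentation) rather than in application code, since an agent executing arbitrary commands can bypass checks that live in the same process.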
