Symbiotic Security this week launched a tool that leverages a large language model (LLM) specifically trained to identify vulnerabilities, surfaced via a chatbot, as application developers write code.

Company CEO Jérôme Robert said Symbiotic Security version 1 reduces DevSecOps friction by identifying issues long before flawed code finds its way into a production environment.

Unlike a general-purpose LLM trained on samples of code randomly collected from across the web, the LLM created by Symbiotic Security has been trained on a proprietary dataset to provide more accurate results, including snippets of code that can be used to eliminate vulnerabilities, he added. Developers can then accept, modify or reject those recommendations as they see fit.
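
To illustrate the kind of finding-and-fix interaction such a tool aims to provide, consider the example below. It is a generic sketch of a common vulnerability class and its typical remediation, not code drawn from Symbiotic Security's product, and the function names are hypothetical.

    # Hypothetical illustration of a flagged vulnerability and a suggested fix;
    # not taken from Symbiotic Security's product.
    import sqlite3

    def get_user_vulnerable(conn: sqlite3.Connection, username: str):
        # Flagged: building SQL from user input allows SQL injection (CWE-89).
        query = f"SELECT id, email FROM users WHERE name = '{username}'"
        return conn.execute(query).fetchone()

    def get_user_fixed(conn: sqlite3.Connection, username: str):
        # Suggested fix: a parameterized query, so user input is never
        # interpreted as SQL.
        query = "SELECT id, email FROM users WHERE name = ?"
        return conn.execute(query, (username,)).fetchone()

In this scenario, the developer could accept the parameterized version as written, adapt it, or dismiss the suggestion entirely.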

The overall goal is to eliminate reliance on alerts generated by cybersecurity teams, which arrive too late and lack any real context, in favor of an approach that provides developers with continuous training in a way they are more likely to embrace, he added.

Additionally, Symbiotic Security version 1 provides application developers with continuous application security training that, over time, should significantly reduce the number of mistakes being made in the first place, noted Robert.

Finally, addressing those issues earlier in the software development lifecycle (SDLC) will reduce the number of remediation requests generated by cybersecurity teams scanning for vulnerabilities, which are sometimes discovered years after applications have been deployed, said Robert. Arguably, much of the technical debt that goes unaddressed in many organizations traces back to known vulnerabilities that can now be eliminated as applications are constructed, he added.

As AI coding tools continue to evolve, it’s becoming apparent that a wave of tools specifically trained for software development tasks is starting to arrive.

It’s not clear how widely AI coding tools based on general-purpose LLMs have been adopted, but it’s now only a matter of time before the next generation of these tools starts to be more widely employed. In fact, a recent Futurum Research survey finds that 41% of respondents expect generative AI tools and platforms to be used to generate, review and test code, while 39% plan to make use of AI models based on machine learning algorithms.

Eventually, the quality of the code being generated by AI coding tools will improve as more LLMs are trained on data sets that are well vetted. In the meantime, however, it’s highly probable that much of the code being generated by general-purpose LLMs contains known vulnerabilities. As such, the overall state of application security may well become worse in the short term before ultimately becoming far better than anyone would have thought achievable just a few short years ago.

The issue now is making sure the right AI coding tools find their way into the hands of the application developers who need them most as soon as possible.

