Imandra today launched CodeLogician, an artificial intelligence (AI) agent that transforms source code into formal mathematical models that can then be validated and tested more rigorously than is possible by relying on a large language model (LLM) alone.
Using a neurosymbolic approach, those mathematical representations are then analyzed by an automated reasoning engine invoked via a cloud service, with the open-source LangGraph framework orchestrating the AI agents involved. Initially, CodeLogician can be applied to Python code, with support for Java, COBOL and other programming languages planned.
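For readers unfamiliar with LangGraph, the orchestration pattern looks roughly like the following minimal sketch: a small state graph whose nodes pass work from a formalization step to a verification step. The node names and placeholder logic are illustrative assumptions, not CodeLogician's actual graph.

```python
# Minimal sketch of agent orchestration with LangGraph. The node names and
# placeholder bodies are assumptions for illustration, not CodeLogician's
# actual implementation.
from typing import TypedDict
from langgraph.graph import StateGraph, END

class AgentState(TypedDict):
    source: str   # source code under analysis
    model: str    # formal model extracted from the source
    report: str   # verification results

def formalize(state: AgentState) -> AgentState:
    # Placeholder: translate source code into a formal model.
    return {**state, "model": f"(model of {len(state['source'])} chars)"}

def verify(state: AgentState) -> AgentState:
    # Placeholder: hand the model to a cloud reasoning service.
    return {**state, "report": "all checked properties hold"}

graph = StateGraph(AgentState)
graph.add_node("formalize", formalize)
graph.add_node("verify", verify)
graph.set_entry_point("formalize")
graph.add_edge("formalize", "verify")
graph.add_edge("verify", END)

app = graph.compile()
print(app.invoke({"source": "def f(x): return x + 1", "model": "", "report": ""}))
```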
Dr. Grant Passmore, co-CEO of Imandra, said that capability makes it simpler to discover bugs, validate properties, explore state spaces, and automatically generate test cases using an AI model that, from an infrastructure perspective, is much more efficient than an LLM.
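To make the idea of validating a property concrete, consider the sketch below. It is purely illustrative and not Imandra's API: a property phrased as a plain Python predicate, which a symbolic engine would attempt to prove for all inputs rather than sampling a few. Any counterexample it finds doubles as an automatically generated test case.

```python
# Illustrative only: a property expressed as an ordinary Python predicate.
# A symbolic engine proves it for *all* inputs or returns a counterexample.
def price_with_discount(qty: int, unit_price: int) -> int:
    discount = 10 if qty >= 100 else 0
    return qty * unit_price - discount

def prop_never_negative(qty: int, unit_price: int) -> bool:
    # Property: for non-negative inputs, the total is never negative.
    if qty < 0 or unit_price < 0:
        return True  # property only constrains valid inputs
    return price_with_discount(qty, unit_price) >= 0

# A symbolic verifier would surface the counterexample qty=100, unit_price=0
# (total = -10), rather than hoping random tests stumble onto it.
assert not prop_never_negative(100, 0)
```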
An LLM still plays a critical role in generating code, but other software engineering tasks lend themselves better to a symbolic AI model, he added. In fact, symbolic AI is a natural complement to LLMs, providing a more accurate way to debug the code they generate.
Symbolic AI has been used for decades to verify high-speed trading systems and other mission-critical applications, relying on knowledge systems that create human-readable representations of problems. The core ImandraX engine developed by Imandra makes those capabilities available as a service that can be invoked via an application programming interface (API), and it can also be added to the VS Code integrated development environment (IDE).
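A service of that kind is typically invoked over HTTP. The sketch below is hypothetical: the endpoint URL, payload shape and response fields are placeholders for illustration, not ImandraX's actual API.

```python
# Hypothetical sketch of invoking a reasoning service over HTTP. The URL,
# payload and response fields are placeholders, not ImandraX's real API.
import requests

def check_property(source: str, goal: str, api_key: str) -> dict:
    resp = requests.post(
        "https://reasoning.example.com/v1/verify",  # placeholder endpoint
        headers={"Authorization": f"Bearer {api_key}"},
        json={"source": source, "goal": goal},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. a "proved" status or a counterexample

# Example usage (requires a real endpoint and key):
# result = check_property("def f(x): return x + 1", "f(x) > x", "MY_KEY")
```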
Regardless of the type of AI model used, adoption of these tools and platforms continues to grow rapidly. A recent Futurum Research survey finds that 41% of respondents expect generative AI tools and platforms to be used to generate, review and test code, while 39% plan to make use of AI models based on machine learning algorithms. More than a third (35%) also plan to apply AI and other forms of automation to IT operations.
Arguably, as more AI is applied to software engineering, the types of AI models employed will only become more diverse. Rather than treating LLMs as a hammer to be applied everywhere, DevOps workflows will include a mix of AI models optimized to perform specific tasks using the least amount of IT infrastructure possible. The challenge and opportunity now is determining which AI models to employ today and tomorrow, as advances continue to arrive at a rapid pace.
In the meantime, DevOps teams should be creating a list of the manual tasks they regularly perform today that might soon be better handled by an AI agent. AI agents are not likely to replace the need for DevOps engineers any time soon, but much of the toil that makes application development difficult will be sharply reduced. It’s not certain whether eliminating that drudgery will lead to more applications being deployed, but one thing is clear: building and testing them might soon become a lot more enjoyable for all concerned.