Sumo Logic has created a prototype of a generative artificial intelligence (AI) tool that generates text-based summaries from the large volumes of log and telemetry data collected by its log analytics platform.

Additionally, Sumo Logic is making it possible to deploy detection rules as code, which makes them simpler to incorporate into a DevSecOps workflow.
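
To make the detection-as-code idea concrete, the sketch below shows the general pattern under loudly labeled assumptions: the endpoint, token variable and rule schema are hypothetical illustrations, not Sumo Logic's actual API. The point is the workflow itself, with a rule defined in version-controlled code and deployed through a pipeline rather than edited by hand in a console.

```python
# Minimal sketch of the detection-as-code pattern: the rule lives in
# version control and is deployed via CI/CD. The endpoint, token and
# rule schema below are hypothetical, not Sumo Logic's actual API.
import os
import requests

rule = {
    "name": "Multiple failed logins followed by success",
    "severity": "high",
    # Hypothetical query syntax, for illustration only.
    "expression": 'action="login" AND outcome="failure" | count by user | where _count > 5',
    "enabled": True,
}

response = requests.post(
    "https://api.example.com/v1/detection-rules",  # hypothetical endpoint
    headers={"Authorization": f"Bearer {os.environ['SIEM_API_TOKEN']}"},
    json=rule,
    timeout=30,
)
response.raise_for_status()
print(f"Deployed rule: {response.json().get('id')}")
```

Because the rule is just a file, it can be reviewed in a pull request, tested in CI and rolled back with a revert, the same as any other code.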

Sumo Logic is also adding the ability to create baselines for user and entity behavior to reduce the number of false positives generated when relying solely on static thresholds.
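
The following minimal sketch, which is illustrative rather than Sumo Logic's implementation, shows why a behavioral baseline cuts false positives: it flags activity that is unusual for a given user instead of applying one static threshold to everyone.

```python
# Illustrative sketch (not Sumo Logic's implementation): a per-user
# baseline flags activity that is unusual *for that user*, whereas a
# static threshold treats every user identically.
from statistics import mean, stdev

STATIC_THRESHOLD = 100  # e.g., file downloads per day, same for everyone

def baseline_anomaly(history: list[int], today: int, k: float = 3.0) -> bool:
    """Flag today's count if it exceeds mean + k standard deviations
    of this user's own historical behavior."""
    mu, sigma = mean(history), stdev(history)
    return today > mu + k * sigma

# A power user who routinely downloads ~150 files a day:
power_user_history = [140, 155, 148, 160, 152, 145, 158]
print(baseline_anomaly(power_user_history, 162))   # False: normal for them
print(162 > STATIC_THRESHOLD)                      # True: static rule fires anyway

# A quiet user who suddenly downloads 90 files:
quiet_user_history = [2, 3, 1, 4, 2, 3, 2]
print(baseline_anomaly(quiet_user_history, 90))    # True: a real anomaly
print(90 > STATIC_THRESHOLD)                       # False: static rule misses it
```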

Finally, the company is adding support for multiple threat intelligence feeds via the Structured Threat Information eXpression (STIX) data format and the Trusted Automated eXchange of Intelligence Information (TAXII) transport protocol.
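
For readers unfamiliar with the pairing: STIX defines the objects, such as indicators of compromise, and TAXII defines how they are exchanged. Below is a short sketch of consuming a feed, assuming the open source taxii2-client and stix2 Python libraries and a placeholder collection URL:

```python
# Sketch of consuming a threat-intelligence feed, assuming the open
# source taxii2-client and stix2 libraries; the collection URL below
# is a placeholder, not a real feed.
from taxii2client.v21 import Collection
from stix2 import parse

collection = Collection(
    "https://taxii.example.com/api/v21/collections/indicators/"  # placeholder
)

# TAXII is the transport: fetch an envelope of objects from the collection.
envelope = collection.get_objects()

# STIX is the format: parse each object and pull out indicator patterns.
for raw in envelope.get("objects", []):
    obj = parse(raw, allow_custom=True)
    if obj.type == "indicator":
        print(getattr(obj, "name", "(unnamed)"), "->", obj.pattern)
```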

Chas Clawson, Field CTO for security at Sumo Logic, said collectively these capabilities will help DevSecOps teams keep pace with the increased velocity at which code is now being created using various AI tools. Rather than having to manually search log data for anomalies, teams can now use a natural language prompt to generate a summary of actionable insights that surfaces key patterns, extracts relevant context and highlights likely root causes, he added.
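
Sumo Logic has not published implementation details here, but the pattern Clawson describes looks roughly like the following generic sketch, which uses the OpenAI Python client purely as a stand-in LLM and made-up log lines:

```python
# Generic illustration of the pattern described above; Sumo Logic's
# actual implementation and model are not shown here. The log lines
# are fabricated for the example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

log_lines = """\
2024-05-01T12:00:01Z auth-svc WARN  failed login user=admin src=203.0.113.7
2024-05-01T12:00:03Z auth-svc WARN  failed login user=admin src=203.0.113.7
2024-05-01T12:00:05Z auth-svc INFO  successful login user=admin src=203.0.113.7
2024-05-01T12:01:10Z api-gw   ERROR upstream timeout route=/billing latency=30000ms
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a log analyst. Summarize key "
         "patterns, relevant context and likely root causes in plain English."},
        {"role": "user", "content": log_lines},
    ],
)
print(response.choices[0].message.content)
```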

That capability should also help bridge the divide that often exists between DevSecOps teams and cybersecurity professionals who, while capable of detecting a vulnerability, often lack the context needed to accurately assess its level of severity, noted Clawson.

It’s not clear how widely AI is being applied to DevSecOps workflows, but a Futurum Research survey finds 41% of respondents now expect generative AI tools and platforms will be used to generate, review and test code. As those tools are incorporated into DevOps workflows, many of the tedious tasks that conspire to make securing software supply chains a chore few DevOps teams enjoy should become candidates for automation. If it becomes simpler not only to identify pressing issues but also to instantly create and test a patch, the overall state of application security should steadily improve.

Better yet, the number of vulnerabilities being created might eventually decline as application developers take advantage of AI-infused tools that identify issues as they are writing code.

Unfortunately, many developers are now generating code using general-purpose large language models (LLMs) that were trained on examples of flawed code collected indiscriminately from across the web, which in the short term means more vulnerabilities might find their way into production environments. However, it’s just as possible that many developers who lack security training are now creating higher-quality code thanks to an LLM.

Each organization will need to determine to what degree to rely on AI technologies to automate various tasks and processes, but at the very least, they should be creating a list of tasks that are likely to be automated using AI technologies to better understand where to focus the efforts of their human software engineering resources. It’s not likely AI will replace the need for software engineers, but it should make being one a lot more enjoyable, as more tedious tasks, such as patching applications, are increasingly automated.

