Grafana Labs, this week at the GrafanaCON 2025 conference, revealed it is revamping its open source visualization platform to provide a more consistent set of application programming interfaces (APIs) and a JSON data schema to streamline integrations.
In addition, Grafana Labs is embedding a large language model (LLM), now available in private preview, that promises to make it simpler to create dashboards and interrogate data via a chat interface.
At the same time, the company is also donating Beyla, an instrumentation tool based on extended Berkeley Packet Filter (eBPF) technology, to the OpenTelemetry agent software project being advanced under the auspices of the Cloud Native Computing Foundation (CNCF).
Additionally, IT teams can automatically synchronize Grafana dashboards to a GitHub repository and review changes using pull requests. There is also now a set of coding tools that can be used to integrate dashboards into DevOps workflows, including a command line interface (CLI) tool dubbed grafanactl.
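To illustrate the dashboards-as-code idea, the sketch below pulls a dashboard's JSON over Grafana's long-standing HTTP API so it can be committed to a Git repository and reviewed via pull requests. The server URL, API token, and dashboard UID are placeholder assumptions, not values from the announcement, and the new grafanactl tool provides its own commands for this workflow.

```python
"""Sketch: export a Grafana dashboard's JSON so it can be versioned in Git.

GRAFANA_URL, API_TOKEN, and the dashboard UID below are placeholders.
The /api/dashboards/uid/{uid} endpoint is part of Grafana's HTTP API.
"""
import json
import re
import urllib.request

GRAFANA_URL = "https://grafana.example.com"  # placeholder
API_TOKEN = "glsa_..."  # placeholder service-account token


def slugify(title: str) -> str:
    """Turn a dashboard title into a Git-friendly filename stem."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")


def export_dashboard(uid: str) -> str:
    """Fetch a dashboard by UID and write its JSON to a file ready for `git add`."""
    req = urllib.request.Request(
        f"{GRAFANA_URL}/api/dashboards/uid/{uid}",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        payload = json.load(resp)
    dashboard = payload["dashboard"]
    path = f"{slugify(dashboard['title'])}.json"
    with open(path, "w") as fh:
        # Stable key ordering keeps diffs small and pull requests readable.
        json.dump(dashboard, fh, indent=2, sort_keys=True)
    return path


if __name__ == "__main__":
    print(export_dashboard("my-dashboard-uid"))
```

Writing the JSON with sorted keys and fixed indentation is what makes the Git diff meaningful: only panels or settings that actually changed show up in the pull request.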
Grafana Labs also announced the general availability of Grafana k6 1.0, an open-source load testing tool the company acquired in 2021, and extended the reach of the Grafana Drilldown tools for exploring data to include traces alongside logs and metrics.
Finally, Grafana Labs is adding support for SQL Expressions along with support for 15 additional data sources, including Amazon DynamoDB, Azure Cosmos DB, Cloudflare, Atlassian Statuspage and PagerDuty.
Richi Hartmann, senior developer programs director for Grafana Labs, said the goal is to provide a consistent set of APIs and JSON data schema to make it simpler to integrate Grafana dashboards across a wide range of applications.
Grafana Labs has been fortunate in the development of its AI tools because its open-source software has already been exposed to LLMs, he added. By comparison, providers of proprietary platforms have to employ a range of other techniques to expose documentation and other relevant content to an LLM.
Less clear is how instrumenting application environments might evolve as eBPF becomes more widely employed. Designed to enable custom software programs to run at the kernel level of an operating system, eBPF can be used to collect a wide range of telemetry data about multiple applications without having to build and deploy agent software for each one.
However, there are still Java, .NET and C# applications that run on virtual machines that agents can leverage to collect additional telemetry data, noted Hartmann. Each DevOps team will need to determine to what degree the data being collected is sufficient before deciding to more deeply instrument an application by deploying additional agent software that they will then need to maintain and secure, he added.
The challenge then becomes determining how much of that telemetry data may need to be saved once it has been analyzed.
Regardless of approach, one thing is certain: as more versions of Linux operating systems that support eBPF are deployed, it is becoming easier to take advantage of visualization, and AI tools that surface actionable insights from telemetry data are now more readily available than ever.