
As AI coding tools accelerate software delivery, they are also intensifying a problem DevOps and SRE teams have been dealing with for years: the unchecked growth of observability data. In this conversation, the founders of Sawmills argue that telemetry volume is no longer just a cost issue. It is becoming a data quality problem that affects how effectively teams can monitor systems, troubleshoot incidents and make sense of production behavior.
Ronit Belson and Erez Rusovsky describe how the rise of AI-generated code is making observability harder to manage. Instrumentation is often treated as an afterthought, which means more logs, metrics and traces are being generated without much discipline around relevance, quality or downstream impact. The result is familiar to many DevOps teams: rising observability bills, more noise in monitoring systems and growing difficulty separating useful telemetry from unnecessary data.
Rather than waiting for data to land in production systems and then trying to cut costs or improve signal quality after the fact, Belson and Rusovsky describe a model in which telemetry is reviewed and optimized closer to the point where code is written and deployed. That means examining the instrumentation itself, not just the systems consuming its output.
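To make the idea of optimizing telemetry before it reaches the backend concrete, here is a minimal, hypothetical sketch of one common tactic: suppressing repeated low-value log records in the pipeline. The function name, record shape, and threshold are illustrative assumptions, not Sawmills' actual implementation.

```python
from collections import Counter

def filter_noisy_logs(records, max_repeats=3):
    """Drop repeated DEBUG-level messages beyond a repeat threshold.

    A toy example of in-pipeline telemetry reduction: each unique DEBUG
    message is allowed through at most `max_repeats` times; everything
    at other levels passes through untouched.
    """
    seen = Counter()
    kept = []
    for rec in records:
        if rec["level"] == "DEBUG":
            seen[rec["message"]] += 1
            if seen[rec["message"]] > max_repeats:
                continue  # suppress the duplicate debug noise
        kept.append(rec)
    return kept
```

In a real deployment this kind of rule would live in a telemetry pipeline or collector rather than application code, but the principle is the same: decide what is worth keeping before paying to store and index it.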
They also touch on a broader operational shift now underway. As organizations become more comfortable with agentic AI, there is growing interest in using agents not only to write code, but also to continuously manage repetitive operational work. In the observability context, that means identifying noisy telemetry, highlighting gaps, feeding lessons from production back into development and helping teams keep data both useful and affordable.
The bigger takeaway is that observability can no longer be treated as a downstream concern. If AI is going to keep increasing the speed of software creation, DevOps teams will need stronger control over the telemetry that software generates.

