A survey of 700 engineering leaders (350) and developers (350) from the U.S. and United Kingdom (UK) conducted by Harness finds that fewer than half have access to real-time insights into idle cloud resources (43%), unused or orphaned resources (39%), and over- or under-provisioned workloads (33%).

More troubling still, more than half of application developers (55%) said purchasing decisions are ultimately based on guesswork, and only 32% have fully automated practices to enforce cost savings. On the plus side, 62% of developers want more control over, and responsibility for, managing cloud infrastructure costs.

Overall, half of all respondents (52%) admitted there is a disconnect between their application development teams and FinOps functions within their organization.

Ravi Yadalam, senior director of product management at Harness, said that disconnect results in about 21% of spending on cloud infrastructure, an estimated $44.5 billion, being wasted.

Much of that waste can be attributed to both over-provisioning of cloud infrastructure and a general tendency to not proactively manage consumption of resources, noted Yadalam. For example, 71% of developers do not use spot orchestration, 61% do not rightsize instances, 58% do not use reserved instances or savings plans, and 48% do not track and shut down idle resources.
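As a hypothetical illustration of the last of those practices, tracking and shutting down idle resources, the sketch below flags instances whose average CPU utilization over a recent window falls under a threshold. The instance names, sample window, and 5% threshold are assumptions for the example; a real tool would pull utilization metrics from a cloud provider's monitoring API and then stop or deallocate the flagged instances.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical policy values; a real FinOps tool would make these configurable.
IDLE_CPU_PERCENT = 5.0  # average CPU below this counts as idle
MIN_SAMPLES = 12        # e.g. twelve 5-minute samples = a one-hour window

@dataclass
class Instance:
    name: str
    cpu_samples: list[float]  # recent CPU utilization percentages

def find_idle(instances: list[Instance]) -> list[str]:
    """Return names of instances that look idle over the sample window."""
    idle = []
    for inst in instances:
        # Require a full window of data before judging an instance idle.
        if len(inst.cpu_samples) >= MIN_SAMPLES and mean(inst.cpu_samples) < IDLE_CPU_PERCENT:
            idle.append(inst.name)
    return idle

if __name__ == "__main__":
    fleet = [
        Instance("web-1", [40.0] * 12),      # busy production workload
        Instance("batch-old", [1.2] * 12),   # forgotten job runner
    ]
    print(find_idle(fleet))  # a real tool would shut these down, not just report them
```

Even a simple report like this addresses the 48% of developers who do not track idle resources at all; shutting the instances down is then a policy decision rather than a visibility problem.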

In general, organizations on average require 31 days to identify and eliminate cloud waste, such as idle, orphaned, or unused resources. It also takes about 25 days to detect and rightsize overprovisioned cloud resources.

In theory, embedding the tracking of cost metrics and analytics into DevOps workflows will substantially reduce the amount of time required to discover those issues, while advances in artificial intelligence (AI) should make it simpler to optimize cloud computing environments, said Yadalam. In fact, 86% of developers said they believe AI will enhance their ability to optimize costs within the next year.

Some organizations may take advantage of those capabilities to reduce costs, but more are likely to simply reallocate resources to other applications that they previously could not deploy cost-effectively, he added. Most organizations, for example, have a large backlog of applications they hope to deploy, noted Yadalam, and those backlogs will only grow as developers rely more on AI coding tools to build applications faster.

Less clear is to what degree organizations will set up centers of FinOps excellence versus simply providing application development teams with the tools needed to make better decisions. In the absence of any cost metrics, most application developers will provision the maximum amount of IT infrastructure they can access to ensure applications are always available. Unfortunately, that approach often results in idle cloud compute resources that organizations still need to pay for at the end of every month.

Ultimately, most organizations simply want to be able to more accurately predict costs that all too often vary widely from one month to the next. The challenge, and the opportunity, is to gain the visibility needed to rein in those costs without slowing down the pace at which modern applications are deployed and updated, at a time when more organizations than ever depend on software to succeed.
