The AI market is at a curious crossroads: the technology has never been more powerful, yet it has never been more misrepresented. Every week brings another bold prediction about agents rewriting software development or AI becoming bigger than the industrial revolution. But when you zoom in on the day-to-day reality inside most engineering organizations, the mood is closer to what Atlassian Customer CTO Andrew Boyagi describes as “meh.”

If you’re now scratching your head, wondering, “Are there any real-life use cases for rolling out AI across my organization?” you’re in luck. With AI literacy still in its early stages, grandiose claims fueled by hype have led many tech leaders to misdirect their AI investments. This blog offers a course correction, helping teams unlock productivity through seamless human-AI collaboration and drive tangible results.

AI gains are real when the focus is on augmentation over replacement

Most technology leaders can point to pockets of success with AI. A team that ships a feature faster. A developer who clears a backlog of documentation. A manager who writes better status updates in less time.

At scale, though, Atlassian’s research paints a more complicated picture. The State of Developer Experience 2025 report found that AI is saving time; however, much of that time is quietly lost again due to the way work is organized around code. The headline benefit looks good. But the net effect is often disappointing.

Boyagi highlights several data points from the report and from his conversations with leaders of large engineering orgs:

  • 68% of developers and 70% of managers report saving 10 or more hours per week with AI tools. That is more than a full workday reclaimed.
  • At the same time, 50% of developers report losing 10 or more hours per week to non-coding tasks driven by poor information access, fragmented tools, and constant context switching.
  • Developers spend around 16% of their time actually coding, which is already the fastest part of the software delivery process. The bottleneck is almost always in the other 84% of their week.
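
Taken together, these figures explain why headline savings can net out to very little. A back-of-envelope sketch (assuming a 40-hour week, which the report does not specify) makes the arithmetic concrete:

```python
# Back-of-envelope: why 10 hours "saved" can net out to roughly zero.
# Figures are illustrative, taken from the survey stats cited above.

WORK_WEEK_HOURS = 40          # assumed standard week (not stated in the report)
coding_share = 0.16           # ~16% of time spent actually coding
hours_saved_by_ai = 10        # reported weekly savings from AI tools
hours_lost_to_friction = 10   # reported weekly loss to non-coding friction

coding_hours = WORK_WEEK_HOURS * coding_share
non_coding_hours = WORK_WEEK_HOURS - coding_hours
net_gain = hours_saved_by_ai - hours_lost_to_friction

print(f"Coding time per week:     {coding_hours:.1f} h")
print(f"Non-coding time per week: {non_coding_hours:.1f} h")
print(f"Net hours reclaimed:      {net_gain:.1f} h")
```

Under these assumptions, coding accounts for only about 6.4 hours of the week, and the ten hours saved are cancelled by the ten hours lost, which is exactly why the net effect so often feels disappointing.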

When teams complain about what slows them down, they rarely say, “I wish I could spend less time coding.” They usually talk about:

  • Too many low-value meetings that exist largely to share context or chase decisions
  • Unclear requirements that send them back to product or design teams to clarify scope
  • Manual steps in compliance, approvals, or release that are easy to forget and hard to track
  • Slogging through scattered documentation across tools and channels

Senior engineers are already efficient coders. If AI is only helping with code completion, the lift feels incremental at best. AI may speed up code generation, but the bigger blockers are buried in the way teams plan, decide, document, and coordinate.

Moving from code helpers to system-level improvements

Instead of asking, “How do we get developers to use more AI in their editor?” leading teams ask, “Where is most of our friction, and how can AI help remove it across the entire system of work?”

In practice, that looks like:

  • Treating AI as a collaborator across the development lifecycle, not just as a code assistant
  • Using AI to clarify and structure requirements, summarize discovery calls, and flag gaps before work starts
  • Capturing decisions and tradeoffs in a place where AI can later retrieve and summarize them for new stakeholders
  • Automating status gathering, cross-team dependency checks, and recurring reporting so that humans spend more time on decisions and less on collation

In Atlassian’s own practice, this is reflected in Teamwork Collection. Jira, Confluence, Loom, and Rovo are used together as a single system of work. Developers still benefit from code-focused AI, but the larger gains come from reducing friction in the 84% of work that surrounds the code.

In a recent Atlassian webinar, The Modern Tech Leader’s Playbook for AI-Powered Teamwork, Boyagi described running a 20-person, multi-country project with no real-time meetings by combining Loom videos for async communication with Confluence pages for shared context. Discussions and decisions were recorded as they happened. AI could then summarize, extract action items, and keep everyone aligned without scheduling yet another cross-time zone call.

The lesson for tech leaders is straightforward: If your AI investments are limited to IDEs and not extended across the rest of your collaboration stack, you are likely leaving most of the value on the table.

Information overload is quietly taxing a quarter of the workweek

If wasted meetings are the visible cost of organizational friction, information overload is the invisible tax. It appears as extra pings in chat, people reading the same document multiple times, or a developer who spends an evening searching for a reference slide instead of shipping work.

Atlassian research found that knowledge workers and leaders spend around 25% of their time simply searching for answers. On software teams, half of the developers surveyed lose 10 or more hours a week hunting for the information they need.

The problem is rarely a lack of information. It is that knowledge is:

  • Scattered across tools like docs, tickets, slides, chats, and email
  • Stored in different systems owned by different teams with different norms
  • Poorly tagged, inconsistently structured, or captured only in someone’s head

Many organizations are layering AI on top of this sprawl in an ad hoc way, which often increases noise. If every tool gets its own assistant with its own view of the world, employees end up with more chat windows to consult and more answers to reconcile.

What it looks like to tame information overload

Consider an example from Atlassian itself. Boyagi was preparing for a CIO meeting and needed a GIF of a specific, unreleased product feature that a customer had asked about. He could not recall the feature’s name, who had shown it to him, or where it lived. It was 9:30 pm, which meant both his US and Australian colleagues were offline. His first instinct was to ping a team and hope for an answer by morning.

Instead, he asked Rovo a natural language question describing what the feature did. Within about 30 seconds, it:

  • Found the exact GIF buried in a Google Slides deck
  • Surfaced related Confluence pages that described why the feature existed and how it worked

He got what he needed without waking anyone up or burning hours of his own time searching. The only reason this worked is that the organization’s knowledge was already captured in Confluence, Jira, slides, and Loom, and Rovo was indexing all of it as a single graph of work.

Atlassian Head of Product Marketing Tammy Lam shared a similar story. Her team was planning a product launch and asked Rovo about the impacts on their timeline. The assistant flagged a regulatory effort owned by the compliance team that the launch squad had not been aware of. From there, Rovo:

  • Provided a summary of the relevant regulation and its implications for launch
  • Recommended a specific person on the compliance team to speak with, based on their Jira and Confluence activity, rather than sending Tammy to a static directory

This is a useful pattern for leaders trying to reduce information overload:

  • Centralize knowledge wherever possible, so that AI can access it in a single system of work instead of in disconnected silos
  • Use AI not just to “search,” but to distill and prioritize what is relevant for a given decision or role
  • Connect human expertise to machine discovery by recommending who to talk to, not just which page to read

Customer examples in the same webinar echo this shift. Sprout Social uses Loom and Rovo to streamline onboarding and sprint planning. Datasite leans on Confluence, Loom, and Rovo to share processes and keep leaders out of back-to-back meetings.

In both cases, the value comes less from any one tool and more from making knowledge discoverable, summarized, and reusable across the organization.

When teams can trust that answers are findable and contextual, they interrupt each other less and ship more. Leaders who want AI to reduce chaos rather than add to it should focus as much on knowledge architecture as on model selection.

AI only scales when leaders change their own habits first

The impact of AI is not primarily a tooling problem. It is a leadership and behavior problem, especially when it comes to how visibly leaders champion and personally use AI in front of their teams.

On the one hand, workers who use AI report productivity gains of around 33% for certain tasks. On the other hand, 96% of companies report that they are not experiencing an AI-driven transformation at the organizational level. Only about 4% describe meaningful, company-wide impact.

The difference comes down to how AI is adopted, coordinated, and role modeled. Quiet endorsement is not enough. Teams get the biggest benefits when leaders repeatedly show how they use AI themselves, narrate those choices in all-hands, one-on-ones, and Loom updates, and make AI collaboration feel like a normal, expected part of work rather than a side experiment.

Atlassian’s research shows that:

  • Employees who are encouraged by their leaders to experiment with AI save 55% more time per day than those who are not
  • Only 3 to 4% of leaders report having witnessed true organizational transformation from AI so far, despite the growing use of AI at the individual level

For Boyagi, strategic AI collaborators are people who treat AI less like a simple assistant and more like a teammate they can spar with. They ask why, probe for alternatives, and fold AI into their workflows for design decisions, reviews, and learning.

Compared to “simple AI users” who mostly apply AI to narrow tasks like code completion, strategic collaborators:

  • Save around 105 minutes per day, compared to about 53 minutes
  • Report a 90% improvement in the quality of their work

Leadership support is one of the biggest predictors of who becomes a strategic collaborator. The shift occurs when managers do more than simply state that AI is important. It happens when they openly show their Rovo prompts in a meeting, walk through the Loom they recorded instead of scheduling a live update, and explain how an AI summary changed a decision they were making.

When managers openly share how they use AI, discuss what works and what does not, and signal that experimentation is expected, reluctance decreases, and the quality of use improves. When they do this consistently and publicly, AI stops being something that only a few early adopters experiment with and becomes an integral part of how the whole team works.

What leader-led AI adoption looks like in practice

When a manager regularly walks their team through how they use AI in their own workflows, the team is far more likely to adopt similar approaches. Instead of pitching the abstract idea of AI productivity, demonstrate it with specific chats, prompts, and workflows, and talk through the learnings when something does not work as expected.

Seeing AI in practice does two things:

  • It normalizes the use of AI for more complex, judgment-heavy tasks, not just for low-stakes chores
  • It provides the team with concrete, contextual starting points tied to their actual work

Example of AI for devs: The Rovo Dev AutoReview Agent

Within Atlassian’s engineering organization, leaders model AI use themselves and challenge teams to run real experiments against concrete bottlenecks. One result is the internal Rovo Dev AutoReview Agent.

Teams were frustrated by long pull request cycle times. Rather than accepting it as a given, leaders explicitly challenged teams to explore where AI could be beneficial, shared their own experiences using AI for code reviews and research, and provided engineers with the time and support to conduct real experiments.

The resulting AutoReview agent:

  • Reviews code changes and flags potential issues
  • Suggests improvements before a human reviewer even starts

The internal impact has been substantial. Atlassian reports over a 45% reduction in PR cycle time. AutoReview is not just a clever tool. It is the outcome of leaders asking, “Where does AI fit into our real bottlenecks?” and using AI in their own technical reviews, then highlighting successful experiments so others could build on them.

At Atlassian, leadership behavior around AI extends beyond engineering; many sales leaders use a “customer 360” agent before calls to understand a customer account. Executives also rely on AI-generated summaries rather than bespoke decks to explain their decisions and business trade-offs in Loom updates and status notes.

These leaders not only save their own time but also model a way of working that makes AI a shared, trusted part of the system.

When tech leaders talk openly about how they use AI and connect it to business outcomes – rather than treating it as just for individual contributors – organizations transform AI from isolated features into a valuable, repeatable way of working that others can copy and build on.

Bringing it all together: Build a system of work that lets AI thrive

AI on its own will not fix broken collaboration. To translate individual time savings into team-level and company-level gains, tech leaders need to:

  1. Target the 84% of work that happens outside of coding. Meetings, requirements, approvals, documentation, and cross-team alignment are where most friction lives.
  2. Treat information overload as a structural problem, not just an individual focus problem: centralize knowledge and use AI to connect, summarize, and route it, rather than creating more parallel streams.
  3. Lead from the front on AI adoption, sharing real use cases, framing AI as a teammate, and aligning experiments with specific outcomes like cycle time or onboarding speed.

This is where a cohesive system of work becomes important. Atlassian’s own approach utilizes Teamwork Collection to integrate Jira, Confluence, Loom, and Rovo into a single, connected stack, enabling AI to view, summarize, and act across the entire work lifecycle. That is what enables stories like AI-assisted launches that catch unseen risks, or global projects run with almost no live meetings.

Used thoughtfully, AI can absolutely move your teams from “meh” to meaningful, measurable momentum. The difference will not come solely from the models. It will come from how you design your system of work and how you choose to lead.
