Levels of Autonomy in Software Development: Closing the Gap Between Creation and Confidence
When the automotive industry introduced the concept of Levels of Autonomy, it gave us a shared language for something profound. It wasn’t just about self-driving cars; it was about how humans and intelligent systems work together as execution gradually shifts from one participant to the other.
Level 0 is full human control. Level 5 means the car can handle any situation on its own. And between those two extremes are a series of stages that capture both technological progress and human adaptation.
The same shift is happening in software
We’re watching an autonomy curve unfold in software right now. AI coding assistants can already take a user story, generate code, fix bugs, and optimize performance in seconds. They’re not perfect – they still need oversight – but they’re undeniably moving up the autonomy ladder over time.
The tools that help us build software are advancing much faster than the tools that ensure we can trust it.
Coding assistants have reached Levels 4 and 5 of autonomy, while testing, security, documentation, and observability tools are still sitting closer to Level 2 or 3. They’re connected, but not yet intelligent in the same way. That imbalance matters, because we will lose the velocity benefits of agentic coding without equivalent acceleration in quality and security.
Closing the autonomy gap
When one part of the system evolves faster than the rest, risk emerges. With autonomous coding, teams can now generate and deploy software faster than they can test it. Autonomy without assurance becomes a liability.
In the automotive space, autonomous driving only became viable once safety systems – sensors, redundancies, and fail-safes – advanced in parallel. Software development is no different: coding autonomy will only reach its full potential when quality autonomy rises alongside it.
And the journey to quality autonomy won’t be linear. Every organization will move up this curve at its own pace.
I believe the shift will happen faster than most people expect. Agentic technologies don’t just accelerate work; they change how the work is done. Once teams experience the speed these systems deliver, there’s rarely a desire to go back.
What autonomy looks like in quality
As autonomy increases, the balance between human judgment and machine judgment changes.
- At Level 1, people build workflows, and technology repeats them.
- By Level 2, AI lends a hand when asked.
- At Level 3, AI starts to notice patterns and act on them: suggesting tests, self-healing where it can, and learning from feedback.
- By Level 4, people set tasks, and AI agents execute, reporting back only when something needs clarification.
- At Level 5, people define the outcome, and coordinated teams of AI agents plan, execute, and deliver it.
Imagine a CI/CD pipeline that diagnoses its own failures, identifies the likely cause, reruns only the impacted tests, and recommends a fix, all without waiting for human intervention. That’s the practical power of Level 4 and 5 autonomy.
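To make that scenario concrete, here is a minimal, hypothetical sketch of the diagnostic loop in Python. Everything in it – the `Failure` and `Diagnosis` types, the `COVERAGE` map, and the rule-based classifier – is an illustrative assumption; a real Level 4 agent would draw on CI logs, coverage data, and a learned model rather than hard-coded rules.

```python
# Hypothetical sketch: a pipeline step that diagnoses its own failure,
# selects only the impacted tests, and proposes a next action.
from dataclasses import dataclass


@dataclass
class Failure:
    test: str
    message: str


@dataclass
class Diagnosis:
    likely_cause: str
    impacted_tests: list
    suggested_fix: str


# Assumed mapping from changed source files to the tests that cover them,
# e.g. derived from coverage data collected on earlier runs.
COVERAGE = {
    "billing.py": ["test_invoice_total", "test_tax_rounding"],
    "auth.py": ["test_login"],
}


def diagnose(failure: Failure, changed_files: list) -> Diagnosis:
    """Guess the likely cause, pick only the impacted tests to rerun,
    and recommend a fix – without waiting for human intervention."""
    impacted = sorted({t for f in changed_files for t in COVERAGE.get(f, [])})
    if "AssertionError" in failure.message:
        cause = f"a recent change to {changed_files[0]} broke an expectation"
        fix = f"review the diff of {changed_files[0]} against {failure.test}"
    else:
        cause = "environment or flaky-test issue"
        fix = f"rerun {failure.test} in isolation before escalating"
    return Diagnosis(cause, impacted, fix)


d = diagnose(
    Failure("test_tax_rounding", "AssertionError: 10.01 != 10.00"),
    changed_files=["billing.py"],
)
print(d.likely_cause)
print(d.impacted_tests)  # only the tests covering the changed file rerun
```

The point of the sketch is the shape of the loop, not the rules themselves: the pipeline narrows the rerun set from the full suite to the tests that cover the change, and only surfaces a human-readable recommendation when it cannot resolve the failure itself.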
At that ultimate level, AI isn’t just automating work; it understands intent. People define the goal, and AI determines the strategy and execution. In that world, AI isn’t an assistant; it’s a collaborator. It learns. It reasons. It adapts. And it scales quality at the pace of autonomous development.
Putting vision into practice
Partial autonomy isn’t enough. Coding and quality assurance need to advance together in speed and intelligence, and the market is falling short of that goal. Despite “autonomous” branding, most AI capabilities in quality tools still require heavy human oversight, falling short of their claims. The autonomy ladder helps us treat autonomy as a practical, measurable spectrum rather than a vibe.
That’s why our AI roadmap focuses on both today and tomorrow. Today, we’re providing tools that deliver practical value for teams at Levels 2, 3, and 4. For tomorrow, we’re developing Level 5 autonomy, where AI agents can plan, execute, and adapt with minimal human support.
The road ahead
Autonomous quality isn’t a distant vision – it’s emerging around us. If AI can build software at the speed of thought, the tools that guarantee its reliability must be just as intelligent, autonomous, and fast.
The next few years will be defined by those who close the autonomy gap first. Those who invest in intelligent, adaptive quality alongside development won’t just move faster; they’ll shape the future of software delivery.