Some Agile teams report that it is difficult for their testers to keep up with their developers. Some are even proud of this. They view it as a good problem, a sign of increasing agility. “Look how fast our developers are! We’re developing code faster than it can be tested!”
Sorry folks, but I have some bad news. If you’re in this situation, you’re not nearly as Agile as you think you are.
Let’s take the case of a Scrum team. The team had adopted all the Scrum practices, including short Sprints, the Sprint planning meeting, a Product Owner who controlled the Product Backlog, and daily Scrum meetings. As happens with many Scrum teams, this team struggled to test everything within the Sprint. The testers felt squeezed, forced to fit a whole lot of testing into the last few days of the Sprint.
The team decided to relieve the pressure on the testers by moving the test effort into the next Sprint. So the features developed in Sprint 1 would be tested in Sprint 2. The features developed in Sprint 2 would be tested in Sprint 3. During each Sprint, the developers worked on the new features while the testers tested the features already developed.
This worked for a while, but then the team discovered they didn’t know what to do about the bugs that the testers found. Should the programmers be pulled off their Sprint-related tasks to fix the bugs? Or should the bugs go on the Product Backlog? The team wrestled with these questions.
The team had identified QA/Test as a bottleneck. They attempted to address the issue by moving the bottleneck downstream. But moving a bottleneck downstream doesn’t do anything to address an imbalance in a system. It just moves the problem. And in this case, the feedback latency became much too long, and that introduced new problems. The programmers didn’t learn about problems in their code until weeks after they thought it was “done.” Along the way, technical debt was accruing. It’s a classic case of impedance mismatch.
To understand how to fix this problem, we need to borrow a key concept from The Theory of Constraints (see Goldratt’s The Goal). The project can only move as fast as the slowest part of the process. Increasing the efficiency of one part of the process (development) without addressing a bottleneck in another part of the process (QA/Test) doesn’t help. In fact, it might even make things worse. When features pile up, waiting to be tested, they’re the software equivalent of extra inventory in manufacturing. All that extra inventory is just getting in the way and distracting the team.
What could we do instead?
We could attempt to eliminate the bottleneck. Some organizations try to eliminate the QA bottleneck by increasing capacity, either by adding an army of testers or outsourcing testing to large companies that can turn around test results quickly. Frankly, I don’t recommend this approach. Having too many people in QA is, in my experience, far worse than having too few. (See my articles Better Testing, Worse Quality and Better Testing, Worse Testing to understand some of the reasons why.) And outsourcing is hardly a panacea.
I think the better solution in this case involves integrating the team and defining “Done” within a Sprint as both coded and tested. In practice, this means that:
QA/Test participates in the Sprint Planning Meeting. Their role in that meeting is to ask “what if?” questions early, essentially testing the requirements as they’re being fleshed out, and to identify the testing tasks that must be accomplished in order to consider each feature “Done.”
Testing tasks are included in the Sprint plan. There are numerous tasks associated with testing, like configuration setup, data creation, and test automation. When testing tasks are part of the Sprint plan, they’re visible to all, and the whole team is responsible for getting them done.
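To make “test automation as a Sprint task” concrete, here is a minimal sketch of what one such task might produce. The feature, function names, and expected behavior are all invented for illustration; the point is that the automated checks are a visible, plannable deliverable like any other task.

```python
# Hypothetical example of a test-automation task's output: a couple of
# automated checks for an invented "coupon discount" feature.

def apply_discount(order_total, coupon_code):
    """Toy stand-in for the feature under test (not real product code)."""
    if coupon_code == "SAVE10":
        return round(order_total * 0.90, 2)
    return order_total

def test_valid_coupon_reduces_total():
    assert apply_discount(100.00, "SAVE10") == 90.00

def test_unknown_coupon_leaves_total_unchanged():
    assert apply_discount(100.00, "BOGUS") == 100.00

if __name__ == "__main__":
    test_valid_coupon_reduces_total()
    test_unknown_coupon_leaves_total_unchanged()
    print("all tests passed")
```

Because tests like these sit in the Sprint plan alongside coding tasks, a feature isn’t “Done” until they exist and pass.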
Hands-on testing begins the minute there’s code checked in and available to test. This is easier said than done. Developers may need to create little custom rigs, like faked-up data entry forms, to support testing of features that aren’t completely usable yet. And testers may have to learn how to deal with incomplete code that’s missing little things like a user interface.
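One way such a rig can look, as a minimal sketch: when the data entry form doesn’t exist yet, a tester (or developer) can drive the validation logic behind it directly, feeding it dictionaries in place of real form submissions. All names here are invented for illustration.

```python
# Hypothetical rig for testing a feature whose UI isn't built yet.
# Plain dictionaries stand in for the missing registration form.

def validate_registration(form_data):
    """Invented feature logic under test; returns a list of error messages."""
    errors = []
    if not form_data.get("email") or "@" not in form_data["email"]:
        errors.append("invalid email")
    if len(form_data.get("password", "")) < 8:
        errors.append("password too short")
    return errors

# Fake "form submissions" standing in for the data-entry form.
good = {"email": "pat@example.com", "password": "s3cretpass"}
bad = {"email": "not-an-email", "password": "short"}

assert validate_registration(good) == []
assert validate_registration(bad) == ["invalid email", "password too short"]
print("rig checks passed")
```

The rig gets thrown away once the real form exists, but it lets testing start weeks earlier than waiting for a finished feature would.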
The team deals with bugs immediately. The Product Owner may decide that some bugs are actually new features that need to be put on the Product Backlog. And the Product Owner may decide to accept a feature with a non-critical bug or two. But the team works together to avoid “Broken Windows.”
I’m not suggesting that these steps are easy or that they’ll be met with universal approval. The programmers may chafe at the need to support the test effort. Some may complain, “We could be getting so much more done if QA could just take care of all these testing tasks!” Similarly, the testers may balk at having to test unfinished code. “It’s too early to test!” they may object. “Why can’t we just wait until they think they’re done with a feature?”
I understand those objections, just as I understand the forces that resulted in separating development from testing. It is certainly tempting to let the programmers code as fast as they can, and let the testers work with “finished” code in the next Sprint. Tempting, that is, until you realize that “test in the next Sprint” is just a nice way of saying “code-and-fix.” And the mere presence of a Scrum Master cannot prevent the quality problems, constant crises, unpredictable schedules, technical debt, and general pain brought about by code-and-fix practices.
The bottom line: no matter what Agile methodology you adopt, if it becomes just a nice way of saying “code-and-fix,” it’s not Agile.