Webinar Q&A - 9 Characteristics of Agile Methodologies That Turbo-Charge Testing

We thank everyone who joined us for Rex Black's presentation on the 9 Characteristics of Agile Methodologies That Turbo-Charge Testing, which is now available on-demand for those who may have missed it. Below, Rex has answered some of the questions we did not have time for during the live event. We hope you find it valuable. Please feel free to ask any additional questions in the comments.

Q: A lot of times, user stories get rolled over to the next sprint because development doesn't finish them in time or the effort wasn't properly estimated. This results in QA getting swamped at the end. As a tester, how do you prevent this from happening? What kind of checks do you need from time to time to prevent this?


Yes, this is a problem that we have seen, to a greater or lesser extent, with a number of clients that have implemented Agile. I talked about this issue in my companion presentation, Agile Testing Challenges, which you can find here. One way to mitigate the damage is to use risk-based testing to focus the limited, time-compressed test effort on only the most important things.
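
For illustration, here is a minimal Python sketch of how risk-based prioritization can work; the risk items and the simple 1-to-5 likelihood-times-impact scoring scale are made up for the example, not a prescribed scheme:

```python
# Hypothetical sketch of risk-based test prioritization: score each
# quality risk item by likelihood and impact, run tests for the
# highest-risk items first, and drop from the bottom if time runs out.

from dataclasses import dataclass

@dataclass
class RiskItem:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain) -- assumed scale
    impact: int      # 1 (trivial) .. 5 (critical) -- assumed scale

    @property
    def risk_score(self) -> int:
        return self.likelihood * self.impact

backlog = [
    RiskItem("payment processing", likelihood=3, impact=5),
    RiskItem("report formatting", likelihood=4, impact=2),
    RiskItem("user login", likelihood=2, impact=5),
]

# Highest risk first; when the end-of-sprint crunch hits, the tests
# you skip are the ones covering the least important risks.
for item in sorted(backlog, key=lambda r: r.risk_score, reverse=True):
    print(f"{item.name}: risk score {item.risk_score}")
```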

By the way, this is an example of a situation where the Agile implementation is the problem, not Agile itself. The Agile best practice of properly estimating (including the testing effort) properly sized user stories, fit into sprints based on well-understood team velocities, reduces the incidence of such events in proper Agile implementations. Another Agile best practice, that of a sustainable workload, means that Agile teams do not crunch the testers at the end of the sprint, but rather push the incomplete work into the next sprint.

That said, you can't stop people from wanting what they want, when they want it. You can't change human nature with a software lifecycle model. Fred Brooks commented on this issue in his book, The Mythical Man Month, written about his experiences in the 1960s building software for IBM.

Q: What is the typical length of a sprint? Here, we try to do one every week. Isn't it overwhelming to fetch user stories every sprint? The reality is that we can't get as much information as we want...


That does seem very fast. Our clients who are using Agile have sprints between two and four weeks.

Q: How can we deal with the fact that the application is constantly evolving and our automated tests always need to be updated? We have found that it is very time-consuming to keep updating the automated tests.


Yes, this is another one of the testing challenges of Agile that I discussed in my companion presentation. You might want to have a professional test automation expert come in to evaluate your automation strategy and suggest what might be improved to make it more maintainable. Different tools and strategies have different strengths and weaknesses in terms of maintainability.


I would definitely recommend that you take full advantage of automation at the unit test and component integration test levels, with APIs. These tend to be much more stable than user interfaces.
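
As a small illustration of why API-level automation tends to be more maintainable, here is a pytest-style sketch; the `create_order` function and its behavior are invented for the example, not a real API:

```python
import pytest

def create_order(customer_id: int, items: list[str]) -> dict:
    """Stand-in for the real application API under test (hypothetical)."""
    if not items:
        raise ValueError("order must contain at least one item")
    return {"customer_id": customer_id, "items": items, "status": "open"}

def test_create_order_sets_initial_status():
    # Exercises the business logic directly, so it survives UI redesigns
    # that would break a script driving buttons and text fields.
    order = create_order(customer_id=42, items=["widget"])
    assert order["status"] == "open"

def test_empty_order_is_rejected():
    with pytest.raises(ValueError):
        create_order(customer_id=42, items=[])
```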

Q: Agile talks about reducing delivery time and the tasks we used to do before. How can we prove that some testing tasks have been reduced or eliminated? We still need to define the test plan and test cases, and also document evidence and defects.


Yes, I agree, it is hard to find reliable metrics that demonstrate the productivity and quality gains claimed by Agile advocates. I know that Capers Jones has begun to gather this information, but it's still early in the process.


In terms of benchmarking your productivity, it depends on what information you have available. You can compare your average cost of testing per coverage item (e.g., requirements, user stories) before adopting Agile and now, provided that the relative size of the coverage items is the same. To get a sense of quality, you can look at the average number of defects per developer-day (i.e., divide the total number of defects found in each sprint by the total number of person-days of developer effort on the sprint) and compare that with the pre-Agile numbers, provided that the quality of the delivered software is about the same as before.
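
To make the second metric concrete, here is a small sketch of the arithmetic; the sprint figures are made up for illustration:

```python
# Hypothetical sketch of the defects-per-developer-day metric
# described above; all numbers are invented for illustration.

def defects_per_developer_day(total_defects: int,
                              developers: int,
                              days_in_sprint: int) -> float:
    """Total defects found in the sprint divided by total developer effort."""
    person_days = developers * days_in_sprint
    return total_defects / person_days

# A sprint with 18 defects found, 3 developers, 10 working days:
rate = defects_per_developer_day(18, developers=3, days_in_sprint=10)
print(f"{rate:.2f} defects per developer-day")  # prints 0.60

# Compare against your pre-Agile baseline, assuming the quality of
# the delivered software is roughly the same as before.
```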

Q: What is the difference between statement and branch coverage?


In every programming language I'm aware of, there is a syntactical concept of an executable statement. Statement coverage refers to the percentage of executable statements that have been tested. Branches exist in programs because of decision constructs in the code (e.g., IF statements, SWITCH/CASE constructs, loops, etc.). At a decision point, the control flow splits into two or more branches as program execution continues its path through the executable statements. Branch coverage, also called decision coverage, refers to the percentage of branches that have been tested.
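
A contrived example may help. In the function below, a single test with a negative input executes every statement (100% statement coverage), yet it exercises only one of the two branches at the decision point, so branch coverage is only 50%:

```python
def absolute(x: int) -> int:
    if x < 0:        # decision point: two branches (taken / not taken)
        x = -x       # statement executed only on the "taken" branch
    return x

# This single test executes every statement (100% statement coverage)
# but only the branch where the condition is true (50% branch coverage):
assert absolute(-3) == 3

# A second test for the "not taken" branch is needed to reach
# 100% branch coverage:
assert absolute(5) == 5
```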

Q: Can you elaborate more on the topic of false positive and false negative associated with testing?


Yes. A false positive is a situation where the tester sees a discrepancy between the actual and the expected results of a test and therefore reports a bug, but in reality that discrepancy is not due to a bug in the code. Instead, it is due to a problem in the test environment, a problem with the test data, a problem with the test case, a problem with the requirement or user story upon which the expected result was based, or a problem with the tester's expectations. In a well-functioning test process, the rate of false positives should be 5% or less.


A false negative is a situation where the tester runs a test that does reveal the presence of a bug, but the tester does not notice the discrepancy. This can occur for reasons similar to those mentioned above, plus testability problems with the application. Note that not every bug found in production necessarily means that a false negative occurred during testing.
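
To make both failure modes concrete, here is a contrived sketch: the first test reports a failure that is the test's own fault (a false positive), while the second passes despite a real bug because its assertion is too weak (a false negative). The `apply_discount` function and its bug are invented for the example:

```python
def apply_discount(price: float, percent: float) -> float:
    # Real bug (for illustration): the discount is applied twice.
    discounted = price * (1 - percent / 100)
    return discounted * (1 - percent / 100)

def test_false_positive():
    # The expected value here is simply wrong (a test-case problem),
    # so the failure this test reports is a false positive: the code
    # is correct for this input, but the tester's expectation is not.
    assert apply_discount(100.0, 0.0) == 90.0  # mistaken expectation

def test_false_negative():
    # The assertion is too weak to detect the double discount
    # (81.0 instead of 90.0), so the real bug slips through
    # unnoticed: a false negative.
    assert apply_discount(100.0, 10.0) < 100.0
```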


Q: Can traditional test management tools be used on Agile projects?


We do have clients that are following Agile lifecycles and using tools like Quality Center, Rational Test Manager, and QAComplete. While not all of these tools were designed with Agile in mind, they were designed with testing in mind. We find it's much better to use testing tools to manage tests, traceability, and defects than to use tools like Rally to manage these test-related items. There's nothing wrong with Rally as a tool to manage sprints (we have clients that use it and are very happy with it), but it's not a testing tool and doesn't really do a good job of managing test-related information.

Q: Can you suggest any tools to measure and manage technical debt?


In terms of defect-related technical debt, I like to look at the bug backlog (the number of outstanding reports in the bug tracking system) and at the bug closure period (the average time from discovery to resolution of a bug). To have access to these metrics, you have to track every bug that's found during testing. Some Agile advocates discourage this, saying you should only track bugs that aren't fixed in the same sprint they were discovered, but in my experience that results in a severe loss of visibility into the quality capability of the software engineering process.
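
As a minimal sketch of both metrics, assuming your bug tracker can export opened and closed dates (the record format and dates here are hypothetical):

```python
# Hypothetical sketch: compute bug backlog and average closure period
# from exported bug-tracker records. Field names are assumptions.

from datetime import date

bugs = [
    {"opened": date(2014, 3, 1), "closed": date(2014, 3, 8)},
    {"opened": date(2014, 3, 5), "closed": date(2014, 3, 6)},
    {"opened": date(2014, 3, 10), "closed": None},  # still open
]

# Bug backlog: outstanding (unresolved) reports.
backlog = sum(1 for b in bugs if b["closed"] is None)

# Bug closure period: average days from discovery to resolution.
closure_days = [(b["closed"] - b["opened"]).days
                for b in bugs if b["closed"] is not None]
avg_closure = sum(closure_days) / len(closure_days) if closure_days else 0.0

print(f"Bug backlog: {backlog} open report(s)")
print(f"Average closure period: {avg_closure:.1f} days")
```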

Q: Any chance you'll share one or two of the "many tools are available, including open source tools" that you show in EVERY category?


The collection of available tools is evolving too rapidly for anyone to keep up, or for any answer that is correct today to be correct tomorrow. Just today, I heard Bob Payne, on his Agile Toolkit Podcast, get stumped by someone he was interviewing who mentioned a new test tool that he hadn't heard of before. Fortunately, a few clicks of the mouse in your favorite search engine will probably get you a complete, current list!

Q: Besides creating smaller stories and pulling in others to help, what other steps can testers take to avoid the "test crunch"?


Pulling in others to help with the testing is really not a great option, because those "others" are often not professional testers. They don't test as thoroughly as a professional tester would, they don't have good bug-reporting skills, and they might have interests that make objective testing difficult for them. This end-of-sprint test crunch is really a matter of management maturity. The solution involves better estimation and holding to a sustainable workload. I think stabilization sprints, as maligned as they are by Agile purists, are also part of the solution.

As a tester, though, those things are not really in your control. Risk-based testing, by giving you a way to focus on the most important items and to triage tests when time gets tight, does offer a solution for the testing part of the problem.

Q: Both static analysis and dynamic testing have their own significance in the Agile world. However, which one of these, in your opinion, will be more valuable or should have higher priority in situations where you have limited time and need to prioritize activities?


I'm not sure I agree with the premise of the question, because it seems to say, "We don't have time to do it right, but we have time to do rework and fix bugs." However, if you're asking me, "Which activities have the highest defect detection effectiveness, and thus should be put in place first?" then I can answer that.

Good system testing by independent test teams generally finds 85% of the defects delivered to it, so it's the most powerful. Peer reviews of requirements and code, when conducted according to proven best practices, can find 65% or more of the defects present, so that's probably the next most important. Automated unit testing, using peer-reviewed unit tests that achieve 100% statement and branch coverage, appears to find about 50% of the defects present in the code tested, so it's number three. I'm not sure how static analysis of the code ranks, but, since it can easily be incorporated into an automated unit testing framework, why not do it?

Q: By "know how to do automation," do you mean coding automation programs of our own?


Oh, no, I would avoid that unless a tool evaluation showed that there are literally no commercial or freeware tools available that would work for your specific test problem.

Q: Besides SmartBear, what TCM software do you think does a decent job?


We have clients using a variety of test management tools. The most important thing I would say is to reiterate the point I made earlier: use actual test tools to manage testing. Most of the tool-related test management issues we see arise when clients use non-test tools to manage testing.

