Webinar Q&A - The State of Software Quality in 2011 with Capers Jones

Thank you again to those who joined Capers Jones for The State of Software Quality 2011 webinar, which is now available on-demand for anyone who missed it.  Capers has also taken the time to answer, below, some of the questions we did not get to during the live event.  We hope these are helpful and spur further discussion around software quality.

What is the definition of function points?

Function points are the weighted combination of five parameters: 1) inputs; 2) outputs; 3) inquiries; 4) logical files; 5) interfaces.  There are many books and articles about function points.  Check out the book Function Point Analysis by David Garmus and David Herron, Addison-Wesley.
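
For readers who want to see the arithmetic, here is a minimal Python sketch of an unadjusted function point count.  The weights are the commonly published IFPUG average-complexity values and the sample counts are illustrative assumptions, not figures from the webinar.

    # Unadjusted function point count: each parameter count times its
    # average-complexity weight, summed. Weights and counts are assumed.
    AVERAGE_WEIGHTS = {
        "inputs": 4,          # external inputs
        "outputs": 5,         # external outputs
        "inquiries": 4,       # external inquiries
        "logical_files": 10,  # internal logical files
        "interfaces": 7,      # external interface files
    }

    def unadjusted_function_points(counts):
        """Sum each parameter count multiplied by its complexity weight."""
        return sum(AVERAGE_WEIGHTS[name] * count for name, count in counts.items())

    sample_counts = {"inputs": 20, "outputs": 15, "inquiries": 10,
                     "logical_files": 6, "interfaces": 4}
    print(unadjusted_function_points(sample_counts))  # 283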

In terms of defect categories, would you class "Scope Creep" under "Requirements" defects?

Scope creep is not classified as a defect.  Sometimes scope creep is due to unavoidable business or policy changes such as a change in tax law, a change in state law, or mergers and acquisitions which impact software under development.  That being said, scope creep averages between 1% and 4% per calendar month.
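
For a sense of scale, here is a minimal Python sketch showing how those monthly rates compound over a schedule.  The 1,000 function point starting size and the 18-month schedule are assumptions chosen only for illustration.

    # Compound monthly scope creep over the schedule length.
    def size_with_creep(initial_fp, monthly_creep_rate, months):
        """Apply the monthly creep rate for the given number of months."""
        return initial_fp * (1 + monthly_creep_rate) ** months

    for rate in (0.01, 0.02, 0.04):
        final = size_with_creep(1000, rate, 18)
        print(f"{rate:.0%}/month over 18 months: {final:,.0f} function points")
    # 1%: ~1,196 FP    2%: ~1,428 FP    4%: ~2,026 FP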

Any recommendations on how to improve a CEO's perception that software is a necessary evil, and to get buy-in that requirements and design gathering are actually the key issues, with feature creep a typical cause of delays?

From working as an expert witness in 15 lawsuits, the main reasons for delay are:  1) Poor estimates at the beginning; 2) Poor quality control during development so testing takes longer than planned; 3) Scope creep; 4) Poor tracking of progress so clients and higher managers are kept in the dark.

Code inspection appeared to have the highest impact on defect removal. What have you seen as the most effective approaches to code inspection?

Both the IBM Fagan approach and the Gilb approach are about equal.  Informal peer reviews are less effective than formal inspections.

How do you measure quality defects in requirements and design (as in tracking)?

Requirements defects average about 1 defect per function point, or 2 per page.  Design defects average about 1.25 bugs per function point, or 3 per page.  There are tools and methods to reduce these values.  Try joint application design, quality function deployment, and model-driven requirements methods.
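
As a rough illustration, the per-function-point rates quoted above can be turned into expected defect counts for a project of a given size.  The 500 function point size in this Python sketch is an assumption, not a figure from the webinar.

    # Expected requirements and design defects from project size.
    DEFECTS_PER_FUNCTION_POINT = {
        "requirements": 1.00,  # rate quoted in the answer above
        "design": 1.25,
    }

    def expected_defects(function_points):
        """Multiply project size by each origin's average defect rate."""
        return {origin: rate * function_points
                for origin, rate in DEFECTS_PER_FUNCTION_POINT.items()}

    print(expected_defects(500))
    # {'requirements': 500.0, 'design': 625.0}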

What can test specialists do to add value to help companies get out of the zone of chaos?

Testing with test specialists in addition to developer testing adds about 10% to test defect removal efficiency.

How do you validate estimated function points created at the start of a project against actual function points delivered at the end of the project?

Normally function points are counted at the end of requirements and then again at delivery.  If you include predictions for requirements creep the two counts should be equal.  If you ignore requirements creep the second count will be much larger than the first.

The slide "Defects and Software Methodologoies" sic seems to imply that Agile is a better delivery methodology than Waterfall...This seems to be misleading, thoughts? 

Boehm and Turner argue that each has its "Sweet Spot" and though one has perhaps a better defect delivery rate, that does not mean one is better than another.

Neither Agile nor waterfall has a very big sweet spot.  If you measure total cost of ownership, here are the rankings for 10 methods:

  1. Team Software Process (TSP)
  2. CMMI 5 with spiral
  3. Extreme programming (XP)
  4. RUP
  5. Agile
  6. Object Oriented
  7. CMMI 3 with iterative
  8. Pair programming with iterative
  9. Proofs of correctness with waterfall
  10. CMMI 1 with waterfall

Do you have any recommendations on how to record defect discovery and removal early in development? Most defects that are fixed early are not recorded, so determining where defects are discovered and fixed is very challenging.

At IBM and many other major companies, defects are recorded from the day a project starts, and recording does not end until the last user stops using the software, sometimes 30 years later.  Incomplete defect measures are a chronic problem among IT groups.

When you speak of DRE and you realize that you are below where you wish to be, what are the next steps to improve? In other words, how do you find out where your deficiencies are?

Testing by itself is not sufficient to climb above 85% in DRE.  If you use pre-test inspections and static analysis you can go above 95%.  If you add mathematical test case design you can go above 97%.  If you use certified testers in addition to developers you can hit 99%.
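
For reference, DRE itself is a simple ratio: defects removed before release divided by total defects, where the total includes defects reported by users after release (Jones typically measures post-release defects over roughly the first 90 days of use).  The counts in this Python sketch are assumed for illustration.

    # Defect removal efficiency from pre- and post-release defect counts.
    def defect_removal_efficiency(removed_before_release, found_after_release):
        """DRE = pre-release removals / (pre-release removals + post-release finds)."""
        total = removed_before_release + found_after_release
        return removed_before_release / total if total else 1.0

    # e.g. 950 defects removed in reviews, static analysis, and testing,
    # and 50 reported by users after release -> 95% DRE
    print(f"{defect_removal_efficiency(950, 50):.0%}")  # 95%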

Could you explain what quality function deployment is and how it prevents defects?

QFD is a method that originated in Japan for manufacturing.  It is a structured series of meetings between clients and developers, using special graphics that illustrate risks and quality problems and how the developers plan to correct or eliminate those problems.  One of the diagramming methods is called “the house of quality” because risks are on one side of the roof and solutions on the other.  Do a Google search on QFD to see samples of the graphs.

What types of defects are listed as "web site defects"? How are they different from code or data defects?

Web site defects are things like overlapping graphics or text; failure to allow users to back up to correct mistakes in data entry; providing or omitting contact information.  Some of these can be caused by code, others by design, but they show up on the web sites.

Any opinions about the state of software testing as a discipline? It seems the industry is moving towards less QA/QC staff.

Lots of people have lost their jobs due to the recession.  QA and testers are about the same as other software personnel.  Trainers and technical writers probably lost more jobs than test and QA.

I doubt my team will ever use function points - is it "bad" to use LOC as the basis of measurement with the understanding that there will be some tradeoffs?

Unfortunately LOC metrics make requirements and design bugs invisible.  Worse, they penalize high-level languages and make low-level languages seem better than they are.  I regard LOC metrics as professional malpractice because they violate standard economic rules and generate incorrect data.  You will mainly get incorrect information about both productivity and quality if you use LOC.
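
A small worked example may help show why.  The sizes, effort figures, and language labels in this Python sketch are assumptions chosen only to illustrate the effect: both projects deliver the same 100 function points, but LOC-based measures make the more expensive project look more productive.

    # Same functionality delivered twice; compare LOC-based and FP-based views.
    projects = {
        # language: (lines of code, total effort in staff-months)
        "low-level language":  (30000, 60),
        "high-level language": (5000,  20),
    }

    FUNCTION_POINTS = 100  # assumed size of the delivered functionality

    for language, (loc, effort_months) in projects.items():
        cost_per_kloc = effort_months / (loc / 1000)   # looks better for low-level code
        fp_per_month = FUNCTION_POINTS / effort_months # shows the real productivity
        print(f"{language}: {cost_per_kloc:.1f} months/KLOC, "
              f"{fp_per_month:.1f} FP/staff-month")
    # Low-level: 2.0 months/KLOC but only 1.7 FP/staff-month.
    # High-level: 4.0 months/KLOC yet 5.0 FP/staff-month -- the LOC view
    # makes the cheaper project look worse.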

Along the lines of preventative action, what stats do you have on defect forecasting and accuracy?

Several commercial software estimation tools such as KnowledgePlan, SEER, and Software Risk Master have very powerful defect prediction capabilities.  Predicting defects is mandatory for accurate estimates because finding and fixing bugs is the most expensive software cost driver.
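
As a very rough illustration only: Jones' earlier books describe a rule of thumb that total defect potential approximates function points raised to the 1.25 power.  Treating that exponent as an assumption here (the commercial tools named above use far richer models), a sketch might look like this:

    # Rough size-based defect potential using an assumed FP**1.25 rule of thumb.
    def approximate_defect_potential(function_points, exponent=1.25):
        """Very rough approximation of total defects from project size."""
        return function_points ** exponent

    for size in (100, 1000, 10000):
        print(f"{size:>6} FP -> ~{approximate_defect_potential(size):,.0f} defects")
    #    100 FP -> ~316 defects
    #   1000 FP -> ~5,623 defects
    #  10000 FP -> ~100,000 defects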

Any metrics on the effectiveness of the practice of test-driven development at the unit test level and acceptance test level?

There is data on about 40 kinds of testing and 25 kinds of pre-test defect removal.  Unit test under TDD is about 40% efficient, sometimes 45%.  Acceptance test is only about 35% efficient, or finds about one bug out of three.  This was published in my latest book, The Economics of Software Quality.  Testing alone is not sufficient for high quality.  You also need pre-test inspections, static analysis, and mathematically based test case design to top 97% in defect removal efficiency.  TDD alone is not enough without up-front inspections.
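
The reason a series of removal activities is needed is that efficiencies in series multiply.  In this Python sketch the 40% and 35% figures come from the answer above, while the inspection and static analysis efficiencies are assumed values for illustration only.

    # Cumulative defect removal efficiency of activities run in series:
    # each activity removes its fraction of whatever defects remain.
    def cumulative_dre(efficiencies):
        """Fraction of defects removed after running every activity in series."""
        remaining = 1.0
        for eff in efficiencies:
            remaining *= (1 - eff)
        return 1 - remaining

    testing_only = cumulative_dre([0.40, 0.35])              # TDD unit + acceptance test
    with_pretest = cumulative_dre([0.65, 0.55, 0.40, 0.35])  # + inspection, static analysis (assumed)
    print(f"testing only: {testing_only:.0%}")            # 61%
    print(f"with pre-test removal: {with_pretest:.0%}")   # ~94%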

