Wednesday, January 21, 2009

Orthogonal Defect Classification (ODC)

Improving software testing via ODC

Orthogonal Defect Classification (ODC) is a methodology for classifying software defects. Combined with a set of data-analysis techniques suited to the software development process, ODC provides a powerful way to evaluate both the development process and the software product.

Software systems continue to grow steadily in size and complexity. Business demands for shorter development cycles have forced software development organizations to struggle to find a compromise among functionality, time to market, and quality. Lack of skills, schedule pressures, limited resources, and the highly manual nature of software development have caused problems for large and small organizations alike: incomplete designs, inefficient testing, poor quality, high development and maintenance costs, and poor customer satisfaction.

To prevent defects from being delivered, or "escaping," to customers, companies are investing more resources in the testing of software. Beyond improving the other aspects of testing (the skill level of testers, test automation, development of new tools, and the testing process itself), it is important to assess the current testing process for its strengths and weaknesses and to highlight the risks and exposures that exist. Although it is well documented that defects are less expensive to find early in the process and much more expensive to fix once they reach the field,1 testers are not usually aware of what their specific risks and exposures are or how to strengthen testing to meet their quality goals.

This article describes the use of Orthogonal Defect Classification (ODC),2 a defect analysis technique, to evaluate testing processes. Three case studies are presented.

ODC deployment process. The process for deploying ODC has evolved over the last 10 years. However, the following basic steps are critical in order for the ODC deployment to be successful:
* Management must make a commitment to the deployment of ODC and to the implementation of actions resulting from the ODC assessments.
* The defect data must be classified by the technical teams and stored in an easily accessible database.
* The classified defects are then validated on a regular basis to ensure the consistency and correctness of the classification.
* Once validation has occurred, assessment of the data must be performed on a regular basis. Typically, the assessment is done by a technical person who is familiar with the project and has the interest and skills for analyzing data. A user-friendly tool for visualizing the data is needed.
* Regular feedback of the validation and assessment results to the technical teams is important. It improves the quality of the classification. It also provides the teams with the necessary information so that they can determine the appropriate actions for improvement. This feedback is also important in obtaining the necessary commitment from the technical teams. Once they see the objective, quantified data, and the reasonable and feasible actions that result, commitment of the teams to the ODC process usually increases.
* Once the feedback is given to the teams, they can then identify and prioritize actions to be implemented.

When this ODC process has been integrated into the process of the organization, the full range of benefits can be realized. The development process and the resulting product can be monitored and improved on an ongoing basis so that product quality is built in from the earliest stages of development.

Classification and validation of defects. The classification of defects occurs at two different points in time. When a defect is first detected, or submitted, the ODC submitter attributes of activity, trigger, and impact are classified:
* Activity refers to the actual process step (code inspection, function test, etc.) that was being performed at the time the defect was discovered.
* Trigger describes the environment or condition that had to exist to expose the defect.
* Impact refers to either the perceived or the actual impact on the customer.

When the defect has been fixed, or responded to, the ODC responder attributes (target, defect type, qualifier, source, and age) can be classified:
* Target represents the high-level identity (design, code, etc.) of the entity that was fixed.
* Defect type refers to the specific nature of the defect fix.
* Qualifier specifies whether the fix was due to missing, incorrect, or extraneous code or information.
* Source indicates whether the defect was found in code written in house, reused from a library, ported from one platform to another, or outsourced to a vendor.
* Age specifies whether the defect was found in new, old (base), rewritten, or refixed code.
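The submitter and responder attributes above amount to a simple record per defect. The sketch below models that record in Python; the enumeration values and field types are illustrative only and do not reproduce the full ODC taxonomy.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Activity(Enum):
    # A few example activities; the real ODC scheme defines the full set.
    CODE_INSPECTION = "code inspection"
    UNIT_TEST = "unit test"
    FUNCTION_TEST = "function test"
    SYSTEM_TEST = "system test"

class Qualifier(Enum):
    MISSING = "missing"
    INCORRECT = "incorrect"
    EXTRANEOUS = "extraneous"

@dataclass
class ODCDefect:
    # Submitter attributes: classified when the defect is opened.
    activity: Activity
    trigger: str
    impact: str
    # Responder attributes: classified later, when the fix is made.
    target: str = ""
    defect_type: str = ""
    qualifier: Optional[Qualifier] = None
    source: str = ""
    age: str = ""

# A defect exposed during function test, later fixed as missing code.
d = ODCDefect(Activity.FUNCTION_TEST, trigger="coverage", impact="reliability")
d.target, d.defect_type, d.qualifier = "code", "function", Qualifier.MISSING
print(d.qualifier.value)  # missing
```

Leaving the responder attributes defaulted until fix time mirrors the two-point classification the text describes.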

Typically, the ODC attributes are captured, with minor enhancements, in the same tool used to collect other defect information. Two methods are used to validate the data. In the first, a person with the appropriate skills reviews the individually classified defects for errors. This may be needed only until the team members become comfortable with the classification and its use. Alternatively, an aggregate analysis of the data can assist validation. Although this method is quicker, it requires skills beyond classification. To validate this way, the validator reviews the distribution of defect attributes. Internal inconsistencies in the data, or between the data and the process used, point to potential problems in data quality, which can be addressed by a more detailed review of the defects in question. Even where a person has misunderstood something in the classification step, the misunderstanding is typically limited to one or two specific aspects, which can be clarified easily. Once the team understands the basic concepts and their use, data quality is no longer a problem.
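Aggregate validation can be partly automated: tally attribute combinations and flag those that are internally inconsistent. The mapping below is a made-up fragment for illustration; the real ODC scheme defines a fixed trigger set for each activity.

```python
from collections import Counter

# Illustrative only: which triggers are plausible for each activity.
VALID_TRIGGERS = {
    "function test": {"coverage", "variation", "sequencing", "interaction"},
    "code inspection": {"design conformance", "logic/flow", "backward compatibility"},
}

def validate(defects):
    """Return counts of activity/trigger pairs that look inconsistent."""
    counts = Counter((d["activity"], d["trigger"]) for d in defects)
    return {pair: n for pair, n in counts.items()
            if pair[1] not in VALID_TRIGGERS.get(pair[0], set())}

defects = [
    {"activity": "function test", "trigger": "coverage"},
    {"activity": "function test", "trigger": "logic/flow"},  # inspection trigger at test time
]
print(validate(defects))  # {('function test', 'logic/flow'): 1}
```

A validator would then review only the flagged subset in detail, as the text suggests.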

Data assessment. Once the data have been validated, they are ready for assessment.6,7 In an assessment, the concern is not with a single defect, as it is in causal analysis.8 Rather, trends and patterns in the aggregate data are studied. Assessment of ODC-classified data is based on the relationships of the ODC attributes to one another and to non-ODC attributes such as component, severity, and defect open date. For example, to evaluate product stability, the relationships among defect type, qualifier, open date, and severity might be considered. A rising trend of "missing function" defects or of high-severity defects may indicate that product stability is decreasing.
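A minimal version of such a trend check can be sketched as follows; the defect records and field names are hypothetical, and a real assessment would use a visualization tool rather than a monotonicity test.

```python
from collections import defaultdict

def monthly_counts(defects, defect_type):
    """Count defects of a given type per month (open_date as 'YYYY-MM-DD')."""
    counts = defaultdict(int)
    for d in defects:
        if d["defect_type"] == defect_type:
            counts[d["open_date"][:7]] += 1
    return dict(sorted(counts.items()))

def is_increasing(series):
    """True when the series never drops and ends above where it started."""
    return all(a <= b for a, b in zip(series, series[1:])) and series[0] < series[-1]

defects = [
    {"defect_type": "missing function", "open_date": "2008-10-03"},
    {"defect_type": "missing function", "open_date": "2008-11-12"},
    {"defect_type": "missing function", "open_date": "2008-11-20"},
    {"defect_type": "assignment",       "open_date": "2008-11-21"},
    {"defect_type": "missing function", "open_date": "2008-12-01"},
    {"defect_type": "missing function", "open_date": "2008-12-09"},
    {"defect_type": "missing function", "open_date": "2008-12-15"},
]
counts = monthly_counts(defects, "missing function")  # {'2008-10': 1, '2008-11': 2, '2008-12': 3}
print(is_increasing(list(counts.values())))  # True
```

A rising count of "missing function" defects late in the cycle is exactly the kind of pattern the text flags as a stability concern.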
