Wednesday, January 21, 2009
TESTING STRATEGIES
Testing is the process of finding defects in relation to a set of predefined criteria. There are two forms of testing:
White box testing
Black box testing
An ideal test environment alternates white box and black box testing activities, first stabilizing the design, then demonstrating that it performs the required functionality in a reliable manner consistent with performance, user, and operational constraints.
White box testing is conducted on code components, which may be software units, computer software components (CSCs), or computer software configuration items (CSCIs).
White box testing:
White box testing of the web server was carried out with the following points in mind (a small coverage sketch follows this list):
• Each statement in a code component was executed at least once.
• Each conditional branch in the code component was executed.
• Paths with boundary and out-of-bounds input values were executed.
• The integrity of internal interfaces was verified.
• Architecture integrity was verified across a range of conditions.
• Database design and structure were also verified.
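As a concrete illustration of the coverage points above, here is a minimal, self-contained sketch in Java. The classifyLoad() method and its boundary at 100 are hypothetical stand-ins for a real web server code unit; the point is that the driver executes every statement, both branches, and the boundary and out-of-bounds cases.

```java
// Minimal white box coverage sketch: the driver below executes every statement
// and both branches of a hypothetical classifyLoad() method, including
// boundary and out-of-bounds inputs.
public class WhiteBoxCoverageDemo {

    // Hypothetical component under test: classifies a request rate.
    static String classifyLoad(int requestsPerSecond) {
        if (requestsPerSecond < 0) {
            throw new IllegalArgumentException("negative load");
        }
        if (requestsPerSecond <= 100) {   // boundary at 100
            return "NORMAL";
        }
        return "HIGH";
    }

    public static void main(String[] args) {
        // Each conditional branch executed at least once, on and around the boundary.
        check(classifyLoad(0).equals("NORMAL"),   "lower boundary");
        check(classifyLoad(100).equals("NORMAL"), "upper boundary");
        check(classifyLoad(101).equals("HIGH"),   "just past boundary");

        // Out-of-bounds input exercises the error path.
        boolean threw = false;
        try {
            classifyLoad(-1);
        } catch (IllegalArgumentException expected) {
            threw = true;
        }
        check(threw, "out-of-bounds input rejected");
        System.out.println("All white box checks passed.");
    }

    private static void check(boolean condition, String label) {
        if (!condition) {
            throw new AssertionError("Failed: " + label);
        }
    }
}
```

A real project would normally express these checks with a unit-testing framework such as JUnit; a plain main() driver is used here only to keep the sketch self-contained.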
White box testing verified that the software design is valid and that it was built according to the RAD design. It traced to the configuration management (CM)-controlled design and internal interface specifications, which have been identified as an integral part of the configuration control process.
Black box testing:
Black box testing was conducted on integrated, functional components whose design integrity has been verified through completion of traceable white box tests. As with white box testing, these components include software units, CSCs, or CSCIs. Black box testing traces to requirements focusing on system externals. It validates that the software meets requirements without regard to the paths of execution taken to meet each requirement. It is the type of test conducted on software that is an integration of code units.
The black box testing process includes:
• Validation of functional integrity in relation to external servlet input.
• Validation of all external interface conditions.
• Validation of the ability of the system, software, or hardware to recover from the effects of unexpected or anomalous external or environmental conditions.
• Validation of the system's ability to handle out-of-bound input, error recovery, communication, and stress conditions; try/catch blocks are provided for these actions (a small sketch of this guarding follows the list).
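The following sketch illustrates the kind of try/catch guarding of out-of-bound external input mentioned in the last point. The parsePageSize() helper and its limits are hypothetical; in the actual web server the raw value would come from the servlet request rather than a plain string.

```java
// Sketch of external-input guarding with try/catch. The handler and its
// bounds are illustrative assumptions, not the project's actual code.
public class InputGuardDemo {

    // Returns a page size between 1 and 500, falling back to a default on
    // missing, non-numeric, or out-of-bound input.
    static int parsePageSize(String rawValue) {
        final int DEFAULT = 25;
        if (rawValue == null) {
            return DEFAULT;              // missing input: recover with default
        }
        try {
            int value = Integer.parseInt(rawValue.trim());
            if (value < 1 || value > 500) {
                return DEFAULT;          // out-of-bound: recover with default
            }
            return value;
        } catch (NumberFormatException e) {
            return DEFAULT;              // anomalous input: recover with default
        }
    }

    public static void main(String[] args) {
        System.out.println(parsePageSize("50"));   // 50
        System.out.println(parsePageSize("-3"));   // 25 (out of bound)
        System.out.println(parsePageSize("abc"));  // 25 (not a number)
        System.out.println(parsePageSize(null));   // 25 (missing)
    }
}
```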
Black box tests on the web server validated that an integrated software configuration satisfies the requirements contained in a CM-controlled requirements specification.
Ideally, each black box test should be preceded by a white box test that stabilizes the design.
The levels of test include:
Level 0—These tests consist of a set of structured inspections tied to each product placed under configuration management. The purpose of Level 0 tests is to remove defects at the point where they occur, and before they affect any other product.
Level 1—These white box tests qualify the code against standards and the unit design specification. Level 1 tests trace to the Software Design File (SDF) and are usually executed using test harnesses or drivers (a minimal driver sketch follows these levels). This is the only test level that focuses on code.
Level 2—These white box tests integrate qualified CSCs into an executable CSCI configuration. Level 2 tests trace to the Software Design Document (SWDD). The focus of these tests is the inter-CSC interfaces.
Level 3—These black box tests execute integrated CSCIs to assure that requirements of the Software Requirements Specification (SRS) have been implemented and that the CSCI executes in an acceptable manner. The results of Level 3 tests are reviewed and approved by the acquirer of the product.
Level 4—These white box tests trace to the System/Subsystem Design Document (SSDD). Level 4 tests integrate qualified CSCIs into an executable system configuration by interfacing independent CSCIs and then integrating the executable software configuration with the target hardware.
Level 5—These black box tests qualify an executable system configuration to assure that the requirements of the system have been met and that the basic concept of the system has been satisfied. Level 5 tests trace to the System Segment Specification (SSS). This test level usually results in acceptance or at least approval of the system for customer-based testing.
Level 6—Level 6 tests integrate the qualified system into the operational environment.
Level 7—These independent black box tests trace to operational requirements and specifications.
Level 8—These black box tests are conducted by the installation team to assure the system works correctly when installed and performs correctly when connected to live site interfaces. Level 8 tests trace to installation manuals and use diagnostic hardware and software.
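Level 1 tests are usually run through a test harness or driver, as noted above. Below is a minimal driver sketch for a single hypothetical code unit (normalizePath()); it is only meant to show the shape of a Level 1 driver, not the project's actual SDF-traced tests.

```java
// Minimal Level 1-style test driver: exercises one code unit in isolation
// and reports pass/fail per case. The unit under test is hypothetical.
public class UnitTestDriver {

    static String normalizePath(String path) {
        if (path == null || path.isEmpty()) {
            return "/";
        }
        String result = path.replace("//", "/");
        return result.endsWith("/") ? result : result + "/";
    }

    public static void main(String[] args) {
        int failures = 0;
        failures += expect("null input",   normalizePath(null),    "/");
        failures += expect("empty input",  normalizePath(""),      "/");
        failures += expect("double slash", normalizePath("/a//b"), "/a/b/");
        failures += expect("trailing",     normalizePath("/a/b/"), "/a/b/");
        System.out.println(failures == 0 ? "PASS" : failures + " failure(s)");
    }

    static int expect(String caseName, String actual, String expected) {
        if (expected.equals(actual)) {
            return 0;
        }
        System.out.println("FAIL " + caseName + ": expected " + expected + " got " + actual);
        return 1;
    }
}
```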
BASIC TESTING CONCEPTS
Testing is no longer considered a stand-alone, end-of-the-process evolution to be completed simply as an acquisition milestone. Rather, it has become a highly integral process that complements and supports other program activities while offering a means to significantly reduce programmatic risks. Early defect identification is possible through comprehensive testing and monitoring. Effective solutions and mitigation strategies emerge from proactive program management practices once risks have been identified.
Examples of Architectural Styles / Patterns
There are many common ways of designing computer software modules and their communications, among them:
· Blackboard
· Client-server
· Distributed computing
· Event-driven architecture
· Implicit invocation
· Monolithic application
· Peer-to-peer
· Pipes and filters
· Plugin
· Representational State Transfer
· Structured (module-based but usually monolithic within modules)
· Software componentry (strictly module-based, usually object-oriented programming within modules, slightly less monolithic)
· Service-oriented architecture
· Search-oriented architecture
· Space-based architecture
· Shared-nothing architecture
· Three-tier model
Regression testing - types - uses
Common methods of regression testing include re-running previously run tests and checking whether previously fixed faults have re-emerged. Experience has shown that as software is developed, this kind of re-emergence of faults is quite common. Sometimes it occurs because a fix gets lost through poor revision control practices (or simple human error in revision control), but often a fix for a problem will be "fragile": it fixes the problem in the narrow case where it was first observed but not in more general cases which may arise over the lifetime of the software. Finally, it has often been the case that when some feature is redesigned, the same mistakes will be made in the redesign that were made in the original implementation of the feature.
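A typical way to keep a previously fixed fault from silently re-emerging is to pin it with a dedicated regression test that re-runs the original failing case plus a few more general ones. The sketch below assumes a hypothetical defect id (BUG-123) and a hypothetical isBlank() fix; both are illustrative only.

```java
// Regression sketch: pin a previously fixed fault so it is caught if it
// re-emerges. The bug id and the whitespace-handling detail are made up.
public class RegressionCheck {

    // Fix under guard: earlier versions (hypothetically) failed on inputs
    // that were only whitespace.
    static boolean isBlank(String s) {
        return s == null || s.trim().isEmpty();
    }

    public static void main(String[] args) {
        // Re-run the exact case that originally exposed the fault ("BUG-123").
        if (!isBlank("   ")) {
            throw new AssertionError("Regression: BUG-123 has re-emerged");
        }
        // Also cover the more general cases a narrow fix might miss.
        if (!isBlank("\t\n") || isBlank(" x ")) {
            throw new AssertionError("Regression: blank detection broken");
        }
        System.out.println("Regression checks passed.");
    }
}
```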
Types of regression
· Local - changes introduce new bugs.
· Unmasked - changes unmask previously existing bugs.
· Remote - Changing one part breaks another part of the program. For example, Module A writes to a database. Module B reads from the database. If changes to what Module A writes to the database break Module B, it is remote regression.
There's another way to classify regression.
· New feature regression - changes to code that is new to release 1.1 break other code that is new to release 1.1.
· Existing feature regression - changes to code that is new to release 1.1 break code that existed in release 1.0.
Mitigating regression risk
· Complete test suite repetition
· Regression test automation (GUI, API, CLI)
· Partial test repetition based on traceability and analysis of technical and business risks
· Customer or user testing
o Beta - early release to both potential and current customers
o Pilot - deploy to a subset of users
o Parallel - users use both old and new systems simultaneously
· Use larger releases. Testing new functions often covers existing functions. The more new features in a release, the more "accidental" regression testing.
· Emergency patches - these patches are released immediately, and will be included in future maintenance releases.
Uses
Regression testing can be used not only for testing the correctness of a program but also to track the quality of its output. For instance, in the design of a compiler, regression testing should track the code size, simulation time, and compilation time of the test suite cases.
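To track output quality rather than just correctness, a regression check can compare measured metrics against stored baselines with a tolerance. The sketch below uses made-up baseline numbers and a 5% threshold purely for illustration; a real build would measure the current values rather than hard-coding them.

```java
// Sketch of quality-tracking regression: compare build metrics against
// baselines with a tolerance. Metric names and numbers are illustrative.
import java.util.Map;

public class MetricRegressionCheck {

    public static void main(String[] args) {
        // Baseline values recorded from the previous accepted release.
        Map<String, Double> baseline = Map.of(
                "codeSizeBytes",    48_500.0,
                "compileTimeMs",     3_200.0,
                "simulationTimeMs",  9_800.0);

        // Current values would normally be measured by the build; hard-coded here.
        Map<String, Double> current = Map.of(
                "codeSizeBytes",    49_100.0,
                "compileTimeMs",     3_450.0,
                "simulationTimeMs",  9_750.0);

        double tolerance = 0.05; // flag anything more than 5% worse than baseline
        boolean regressed = false;
        for (String metric : baseline.keySet()) {
            double allowed = baseline.get(metric) * (1 + tolerance);
            if (current.get(metric) > allowed) {
                System.out.printf("Regression in %s: %.0f > allowed %.0f%n",
                        metric, current.get(metric), allowed);
                regressed = true;
            }
        }
        System.out.println(regressed ? "FAIL" : "Metrics within tolerance.");
    }
}
```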
Purpose Of Integration Testing
The purpose of integration testing is to verify that separately developed and unit-tested components work together correctly across their interfaces. Test cases are constructed to test that all components within assemblages interact correctly, for example across procedure calls or process activations; this is done after testing individual modules, i.e. unit testing.
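The sketch below shows the shape of such an integration test: two components that would each pass their own unit tests are wired together and exercised across their interface. OrderStore and OrderReport are hypothetical stand-ins, not modules from this project.

```java
// Integration-style sketch: two separately testable components are wired
// together and exercised across their interface end to end.
import java.util.ArrayList;
import java.util.List;

public class IntegrationDemo {

    // Component A: stores order amounts.
    static class OrderStore {
        private final List<Double> amounts = new ArrayList<>();
        void add(double amount) { amounts.add(amount); }
        List<Double> all() { return amounts; }
    }

    // Component B: reports over whatever A has stored.
    static class OrderReport {
        private final OrderStore store;
        OrderReport(OrderStore store) { this.store = store; }
        double total() {
            return store.all().stream().mapToDouble(Double::doubleValue).sum();
        }
    }

    public static void main(String[] args) {
        // Integration check: data written through A must be visible through B.
        OrderStore store = new OrderStore();
        OrderReport report = new OrderReport(store);
        store.add(10.0);
        store.add(2.5);
        if (report.total() != 12.5) {
            throw new AssertionError("A-to-B interface broken: " + report.total());
        }
        System.out.println("Integration check passed.");
    }
}
```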
Orthogonal Defect Classification (ODC)
Orthogonal Defect Classification (ODC) is a methodology used to classify software defects. When combined with a set of data analysis techniques designed to suit the software development process, ODC provides a powerful way to evaluate the development process and software product. Software systems continue to grow steadily in complexity and size. The business demands for shorter development cycles have forced software development organizations to struggle to find a compromise among functionality, time to market, and quality. Lack of skills, schedule pressures, limited resources, and the highly manual nature of software development have led to problems for large and small organizations alike. These problems include incomplete design, inefficient testing, poor quality, high development and maintenance costs, and poor customer satisfaction.
As a way to prevent defects from being delivered, or "escaping," to customers, companies are investing more resources in the testing of software. In addition to improving the many other aspects of testing (e.g., the skill level of testers, test automation, development of new tools, and the testing process), it is important to have a way to assess the current testing process for its strengths and weaknesses and to highlight the risks and exposures that exist. Although it is well documented that it is less expensive to find defects earlier in the process and certainly much more expensive to fix them once they are in the field,1 testers are not usually aware of what their specific risks and exposures are or how to strengthen testing to meet their quality goals.
This section discusses the use of Orthogonal Defect Classification (ODC),2 a defect analysis technique, to evaluate testing processes. Three case studies are presented.
ODC deployment process. The process for deploying ODC has evolved over the last 10 years. However, the following basic steps are critical in order for the ODC deployment to be successful:
* Management must make a commitment to the deployment of ODC and the implementation of actions resulting from the ODC assessments.
* The defect data must be classified by the technical teams and stored in an easily accessible database.
* The classified defects are then validated on a regular basis to ensure the consistency and correctness of the classification.
* Once validation has occurred, assessment of the data must be performed on a regular basis. Typically, the assessment is done by a technical person who is familiar with the project and has the interest and skills for analyzing data. A user-friendly tool for visualizing data is needed.
* Regular feedback of the validation and assessment results to the technical teams is important. It improves the quality of the classification. It also provides the teams with the necessary information so that they can determine the appropriate actions for improvement. This feedback is also important in obtaining the necessary commitment from the technical teams. Once they see the objective, quantified data, and the reasonable and feasible actions that result, commitment of the teams to the ODC process usually increases.
* Once the feedback is given to the teams, they can then identify and prioritize actions to be implemented.
When this ODC process has been integrated into the process of the organization, the full range of benefits can be realized. The development process and the resulting product can be monitored and improved on an ongoing basis so that product quality is built in from the earliest stages of development.
Classification and validation of defects. The classification of the defects occurs at two different points in time. When a defect is first detected, or submitted, the ODC submitter attributes of activity, trigger, and impact are classified.
* Activity refers to the actual process step (code inspection, function test, etc.) that was being performed at the time the defect was discovered.
* Trigger describes the environment or condition that had to exist to expose the defects.
* Impact refers to either the perceived or actual impact on the customer.
When the defect has been fixed, or responded to, the ODC responder attributes, which are target, defect type, qualifier, source, and age, can be classified.
* Target represents the high-level identity (design, code, etc.) of the entity that was fixed.
* Defect type refers to the specific nature of the defect fix.
* Qualifier specifies whether the fix that was made was due to missing, incorrect, or extraneous code or information.
* Source indicates whether the defect was found in code written in house, reused from a library, ported from one platform to another, or outsourced to a vendor.
* Age specifies whether the defect was found in new, old (base), rewritten, or refixed code. (A small data-structure sketch of these attributes follows this list.)
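As a rough illustration, the attributes above can be captured in a simple record like the sketch below. The enum values follow the categories named in the text; the example values assigned in main() (such as "workload/stress" and "checking") are only illustrative and not tied to any particular project.

```java
// Sketch of a record holding the ODC attributes described above. Qualifier,
// source, and age use the categories listed in the text; the other attributes
// are kept as free text to avoid hard-coding a particular taxonomy.
public class OdcDefect {

    enum Qualifier { MISSING, INCORRECT, EXTRANEOUS }
    enum Source    { IN_HOUSE, REUSED_LIBRARY, PORTED, OUTSOURCED }
    enum Age       { NEW, BASE, REWRITTEN, REFIXED }

    // Submitter attributes (classified when the defect is opened).
    String activity;   // e.g. "code inspection", "function test"
    String trigger;    // condition that exposed the defect
    String impact;     // perceived or actual customer impact

    // Responder attributes (classified when the defect is fixed).
    String target;     // e.g. "design", "code"
    String defectType; // specific nature of the fix
    Qualifier qualifier;
    Source source;
    Age age;

    public static void main(String[] args) {
        OdcDefect d = new OdcDefect();
        d.activity = "function test";
        d.trigger = "workload/stress";
        d.impact = "reliability";
        d.target = "code";
        d.defectType = "checking";
        d.qualifier = Qualifier.MISSING;
        d.source = Source.IN_HOUSE;
        d.age = Age.NEW;
        System.out.println("Classified defect: " + d.defectType + " / " + d.qualifier);
    }
}
```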
Typically, the ODC attributes are captured in the same tool that is used to collect other defect information with minor enhancements. Two methods are used to validate data. The individually classified defects can be reviewed for errors by a person with the appropriate skills. This may be needed only until the team members become comfortable with the classification and its use. It is also possible to use an aggregate analysis of data to help with validation. Although this method of validation is quicker, it does require skills beyond classification. In order to perform a validation using this method, the validator reviews the distribution of defect attributes. If there are internal inconsistencies in the information contained in the data or with the process used, it points to potential problems in the quality of the data, which can be addressed by a more detailed review of the subset of defects under question. Even in cases where there is a misunderstanding by a person in the classification step, it is typically limited to one or two specific aspects, which can be clarified easily. Once the team understands the basic concepts and their use, data quality is no longer a problem.
Data assessment. Once the data have been validated, they are ready for assessment.6,7 When doing an assessment, the concern is not with a single defect, as is done with causal analysis.8 Rather, trends and patterns in the aggregate data are studied. Data assessment of ODC-classified data is based on the relationships of the ODC attributes to one another and to non-ODC attributes such as component, severity, and defect open date. For example, to evaluate product stability, the relationships among the attributes of defect type, qualifier, open date, and severity might be considered. A trend of increasing "missing function" defect types or increasing high-severity defects may indicate that product stability is decreasing.
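A minimal sketch of this kind of aggregate assessment is shown below: instead of looking at defects one by one, classified defects are grouped and counted so trends (here, "missing function" defects per month) become visible. The defect records are made up for illustration.

```java
// Assessment sketch: aggregate classified defects to look for trends rather
// than analysing single defects. All defect records below are fabricated
// illustrations.
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class OdcAssessment {

    record Defect(String defectType, String qualifier, int severity, String openMonth) {}

    public static void main(String[] args) {
        List<Defect> defects = List.of(
                new Defect("function",  "missing",   2, "2009-01"),
                new Defect("function",  "missing",   1, "2009-02"),
                new Defect("checking",  "incorrect", 3, "2009-01"),
                new Defect("interface", "incorrect", 2, "2009-02"),
                new Defect("function",  "missing",   1, "2009-02"));

        // Count "missing function" defects per month: an increasing trend may
        // indicate that product stability is decreasing.
        Map<String, Integer> missingFunctionByMonth = new TreeMap<>();
        for (Defect d : defects) {
            if (d.defectType().equals("function") && d.qualifier().equals("missing")) {
                missingFunctionByMonth.merge(d.openMonth(), 1, Integer::sum);
            }
        }
        System.out.println("Missing-function defects by month: " + missingFunctionByMonth);
    }
}
```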
Test Cases for Compose Box in Mail
Functional Tests | System Tests (Load Tests)
Check whether clicking Compose Mail takes you to the "Compose Mail" page | ---
Check whether it has: a) To, Cc, and Bcc fields to enter email addresses; b) a Subject field to enter the subject of the mail; c) a text body space to enter the text | ---
Check whether: a) To, Cc, and Bcc accept text; b) Subject accepts text; c) the text body accepts text | a) The number of email addresses that can be entered in To, Cc, and Bcc; b) the maximum length of the subject; c) the maximum number of words that can be entered in the text space
Check whether you can delete, edit, cut, copy, and paste text in: a) To, Cc, and Bcc; b) Subject; c) the text body (where text can also be formatted) | ---
Check whether you can attach a file | a) The maximum size of the file that can be attached; b) the maximum number of files that can be attached
Check whether you can send, save, or discard the mail | ---
Performance Testing:
If sending and receiving mail are considered, we could test the performance of the email server as follows (a small timing sketch follows these scenarios):
1) If one user is connected, what is the time taken to receive a single mail?
2) If thousands of users are connected, what is the time taken to receive the same mail?
3) If thousands of users are connected, what is the time taken to receive a mail with a huge attachment?
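A rough sketch of how such timings could be taken is shown below. The fetchMail() call merely sleeps to stand in for a real round trip to the mail server; the thread-pool size and user counts are illustrative assumptions.

```java
// Performance-test sketch: measure how long a simulated mail fetch takes for
// one user versus many concurrent users. fetchMail() is a placeholder.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class MailLoadDemo {

    static void fetchMail() throws InterruptedException {
        Thread.sleep(50); // placeholder for the real request/response round trip
    }

    static long timeConcurrentUsers(int users) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(Math.min(users, 100));
        List<Callable<Void>> tasks = new ArrayList<>();
        for (int i = 0; i < users; i++) {
            tasks.add(() -> { fetchMail(); return null; });
        }
        long start = System.nanoTime();
        pool.invokeAll(tasks);           // waits for all simulated users to finish
        pool.shutdown();
        return (System.nanoTime() - start) / 1_000_000; // elapsed milliseconds
    }

    public static void main(String[] args) throws Exception {
        System.out.println("1 user:     " + timeConcurrentUsers(1) + " ms");
        System.out.println("1000 users: " + timeConcurrentUsers(1000) + " ms");
    }
}
```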
Usability Testing:
1) Check that if part of an email address is entered, the matching email addresses are displayed.
2) If the mail is sent without a subject or body text, a warning is displayed.
3) If To, Cc, or Bcc contains an address without an @, a warning is immediately displayed that the mail id is invalid.
4) Mails being composed should be automatically stored as drafts.
You can add some more test cases; a small validation sketch for checks 2 and 3 follows.
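A minimal sketch of checks 2 and 3 is shown below. The warning texts and the '@' rule are simplified assumptions, not the behaviour of any particular mail client or a full email-address validator.

```java
// Sketch of compose-box validation: warn on an address without '@' and on a
// missing subject or body. Rules and messages are simplified illustrations.
import java.util.ArrayList;
import java.util.List;

public class ComposeValidationDemo {

    static List<String> validate(String toAddress, String subject, String body) {
        List<String> warnings = new ArrayList<>();
        if (toAddress == null || !toAddress.contains("@")) {
            warnings.add("The mail id is invalid");
        }
        if (subject == null || subject.trim().isEmpty()) {
            warnings.add("Send this message without a subject?");
        }
        if (body == null || body.trim().isEmpty()) {
            warnings.add("Send this message without any text in the body?");
        }
        return warnings;
    }

    public static void main(String[] args) {
        System.out.println(validate("user.example.com", "", "hello"));   // two warnings
        System.out.println(validate("user@example.com", "Hi", "hello")); // no warnings
    }
}
```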
Test Cases For Login Window
To check whether the entered user name and password are valid or invalid.
Test Case: Authentication
Test data: User Name = COES and Password = COES
Step No | Steps | Data | Expected Results | Actual Results
1 | Enter user name and press the LOGIN button | User Name = COES | Should display warning message box "Please Enter User name and Password" |
2 | Enter password and press the LOGIN button | Password = COES | Should display warning message box "Please Enter User name and Password" |
3 | Enter user name and password and press the LOGIN button | User = COES and Password = XYZ | Should display warning message box "Please Enter User name and Password" |
4 | Enter user name and password and press the LOGIN button | User = XYZ and Password = COES | Should display warning message box "Please Enter User name and Password" |
5 | Enter user name and password and press the LOGIN button | User = XYZ and Password = XYZ | Should display warning message box "Please Enter User name and Password" |
6 | Enter user name and password and press the LOGIN button | User = "" and Password = "" | Should display warning message box "Please Enter User name and Password" |
7 | Enter user name and password and press the LOGIN button | User = COES and Password = COES | Should navigate to the CoesCategoryList.asp page |
8 | Enter user name and password and press the LOGIN button | User = ADMIN and Password = ADMIN | Should navigate to the Maintenance page |
A small driver sketch covering these cases follows the table.
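A small driver covering the table above could look like the sketch below. The validateLogin() method is hypothetical and simply mirrors the expected results in the table (COES/COES and ADMIN/ADMIN accepted, everything else warned); the real application logic may differ.

```java
// Driver sketch for the login test cases above. validateLogin() is a
// hypothetical stand-in that returns the target page or the warning text.
public class LoginTestDriver {

    static String validateLogin(String user, String password) {
        if ("COES".equals(user) && "COES".equals(password)) {
            return "CoesCategoryList.asp";
        }
        if ("ADMIN".equals(user) && "ADMIN".equals(password)) {
            return "Maintenance page";
        }
        return "Please Enter User name and Password";
    }

    public static void main(String[] args) {
        check(validateLogin("COES", null),     "Please Enter User name and Password"); // step 1
        check(validateLogin(null, "COES"),     "Please Enter User name and Password"); // step 2
        check(validateLogin("COES", "XYZ"),    "Please Enter User name and Password"); // step 3
        check(validateLogin("XYZ", "COES"),    "Please Enter User name and Password"); // step 4
        check(validateLogin("XYZ", "XYZ"),     "Please Enter User name and Password"); // step 5
        check(validateLogin("", ""),           "Please Enter User name and Password"); // step 6
        check(validateLogin("COES", "COES"),   "CoesCategoryList.asp");                // step 7
        check(validateLogin("ADMIN", "ADMIN"), "Maintenance page");                    // step 8
        System.out.println("Login test cases passed.");
    }

    static void check(String actual, String expected) {
        if (!expected.equals(actual)) {
            throw new AssertionError("Expected '" + expected + "' but got '" + actual + "'");
        }
    }
}
```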