Wednesday, January 21, 2009

TESTING STRATEGIES

Testing is the process of finding defects in relation to a set of predefined criteria. There are two forms of testing:
• White box testing
• Black box testing
An ideal test environment alternates white box and black box testing activities, first stabilizing the design, then demonstrating that it performs the required functionality in a reliable manner consistent with performance, user, and operational constraints.
White box testing is conducted on code components, which may be software units, computer software components (CSCs), or computer software configuration items (CSCIs).


White box testing:
White box testing of the "web server" was carried out with the following points in mind:
• Each statement in a code component was executed at least once.
• Each conditional branch in the code component was executed.
• Paths were executed with boundary and out-of-bounds input values.
• The integrity of internal interfaces was verified.
• Architecture integrity was verified across a range of conditions.
• Database design and structure were also verified.
White box testing has verified that the software design is valid and that it was built according to the RAD design. White box testing traces to the configuration management (CM)-controlled design and internal interface specifications. These specifications have been identified as an integral part of the configuration control process.
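
To make those criteria concrete, here is a minimal sketch (not from the original project) of a white box unit test in Java/JUnit 4 for a hypothetical RequestValidator class, written so that every statement, every conditional branch, and the boundary and out-of-bounds values are exercised:

import org.junit.Test;
import static org.junit.Assert.*;

// Hypothetical unit under test: accepts request sizes from 1 to 8192 bytes.
class RequestValidator {
    static boolean isValidSize(int bytes) {
        if (bytes < 1) {
            return false;          // branch 1: below the lower bound
        } else if (bytes > 8192) {
            return false;          // branch 2: above the upper bound
        }
        return true;               // branch 3: in range
    }
}

public class RequestValidatorWhiteBoxTest {
    @Test
    public void coversEveryBranchAndBoundary() {
        // Boundary and out-of-bounds inputs drive every statement and branch.
        assertFalse(RequestValidator.isValidSize(0));     // just below the lower bound
        assertTrue(RequestValidator.isValidSize(1));      // lower bound
        assertTrue(RequestValidator.isValidSize(8192));   // upper bound
        assertFalse(RequestValidator.isValidSize(8193));  // just above the upper bound
    }
}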

Black box testing:
Black box testing was conducted on integrated, functional components whose design integrity had been verified through completion of traceable white box tests. As with white box testing, these components may be software units, CSCs, or CSCIs. Black box testing traces to requirements, focusing on system externals. It validates that the software meets requirements without regard to the paths of execution taken to meet each requirement. It is the type of test conducted on software that is an integration of code units.
The black box testing process includes:
• Validation of functional integrity in relation to external servlet input.
• Validation of all external interface conditions.
• Validation of the ability of the system, software, or hardware to recover from the effects of unexpected or anomalous external or environmental conditions.
• Validation of the system's ability to address out-of-bound input, error recovery, communication, and stress conditions. Try/catch blocks are provided for these actions.
Black box testing of the "web server" has validated that the integrated software configuration satisfies the requirements contained in the CM-controlled requirements specification.
Ideally, each black box test should be preceded by a white box test that stabilizes the design.
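
As a rough illustration, assuming a hypothetical MailRequestHandler whose internals are hidden from the tester, a black box test in JUnit 4 would validate only the externally visible response to out-of-bound and anomalous input:

import org.junit.Test;
import static org.junit.Assert.*;

// Hypothetical request handler: the test treats it as a black box and
// checks only the externally visible response, never the internal paths.
class MailRequestHandler {
    String handle(String sizeParameter) {
        try {
            int size = Integer.parseInt(sizeParameter);
            if (size < 1 || size > 8192) {
                return "400 Bad Request";
            }
            return "200 OK";
        } catch (NumberFormatException e) {
            // Anomalous input is absorbed rather than propagated to the caller.
            return "400 Bad Request";
        }
    }
}

public class MailRequestHandlerBlackBoxTest {
    @Test
    public void anomalousExternalInputProducesAnErrorResponseNotACrash() {
        MailRequestHandler handler = new MailRequestHandler();
        assertEquals("400 Bad Request", handler.handle("not-a-number"));
        assertEquals("400 Bad Request", handler.handle("999999"));
        assertEquals("200 OK", handler.handle("1024"));
    }
}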

The levels of test include:

Level 0—These tests consist of a set of structured inspections tied to each product placed under configuration management. The purpose of Level 0 tests is to remove defects at the point where they occur, and before they affect any other product.

Level 1—These white box tests qualify the code against standards and unit design specification. Level 1 tests trace to the Software Design File (SDF) and are usually
executed using test harnesses or drivers. This is the only test level that focuses on code.

Level 2—These white box tests integrate qualified CSCs into an executable CSCI configuration. Level 2 tests trace to the Software Design Document (SWDD). The
focus of these tests is the inter-CSC interfaces.

Level 3—These black box tests execute integrated CSCIs to assure that the requirements of the Software Requirements Specification (SRS) have been implemented and that the CSCI executes in an acceptable manner. The results of Level 3 tests are reviewed and approved by the acquirer of the product.

Level 4—These white box tests trace to the System/Subsystem Design Document (SSDD). Level 4 tests integrate qualified CSCIs into an executable system configuration by interfacing independent CSCIs and then integrating the executable software configuration
with the target hardware.

Level 5—These black box tests qualify an executable system configuration to assure that the requirements of the system have been met and that the basic concept of the system has been satisfied. Level 5 tests trace to the System Segment Specification (SSS). This test level usually results in acceptance or at least approval of the system for customer-based testing.

Level 6—Level 6 tests integrate the qualified system into the operational environment.

Level 7—These independent black box tests trace to operational requirements and specifications.

Level 8—These black box tests are conducted by the installation team to assure the system works correctly when installed and performs correctly when connected
to live site interfaces. Level 8 tests trace to installation manuals and use diagnostic hardware and software.

BASIC TESTING CONCEPTS
Testing is no longer considered a stand-alone, end-of-the-process evolution to be completed simply as an acquisition milestone. Rather, it has become a highly
integral process that complements and supports other program activities while offering a means to significantly reduce programmatic risks. Early defect identification is possible through comprehensive testing and monitoring. Effective solutions and mitigation strategies emerge from proactive program management practices once risks have been identified.

Examples of Architectural Styles / Patterns

There are many common ways of designing computer software modules and their communications, among them:
· Blackboard
· Client-server
· Distributed computing
· Event Driven Architecture
· Implicit invocation
· Monolithic application
· Peer-to-peer
· Pipes and filters
· Plugin
· Representational State Transfer
· Structured (module-based but usually monolithic within modules)
· Software componentry (strictly module-based, usually object-oriented programming within modules, slightly less monolithic)
· Service-oriented architecture
· Search-oriented architecture
· Space based architecture
· Shared nothing architecture
· Three-tier model

Regression testing - types - uses

Regression testing is any type of software testing which seeks to uncover regression bugs. Regression bugs occur whenever software functionality that previously worked as desired stops working, or no longer works in the way that was previously planned. Typically, regression bugs occur as an unintended consequence of program changes.

Common methods of regression testing include re-running previously run tests and checking whether previously fixed faults have re-emerged. Experience has shown that as software is developed, this kind of re-emergence of faults is quite common. Sometimes it occurs because a fix gets lost through poor revision control practices (or simple human error in revision control), but often a fix for a problem will be "fragile" in that it fixes the problem in the narrow case where it was first observed but not in more general cases which may arise over the lifetime of the software. Finally, it has often been the case that when some feature is redesigned, the same mistakes will be made in the redesign that were made in the original implementation of the feature.

Types of regression
· Local - changes introduce new bugs.
· Unmasked - changes unmask previously existing bugs.
· Remote - Changing one part breaks another part of the program. For example, Module A writes to a database. Module B reads from the database. If changes to what Module A writes to the database break Module B, it is remote regression.

There's another way to classify regression.
· New feature regression - changes to code that is new to release 1.1 break other code that is new to release 1.1.
· Existing feature regression - changes to code that is new to release 1.1 break code that existed in release 1.0.

Mitigating regression risk
· Complete test suite repetition
· Regression test automation (GUI, API, CLI)
· Partial test repetition based on traceability and analysis of technical and business risks
· Customer or user testing
o Beta - early release to both potential and current customers
o Pilot - deploy to a subset of users
o Parallel - users use both old and new systems simultaneously
· Use larger releases. Testing new functions often covers existing functions. The more new features in a release, the more "accidental" regression testing.
· Emergency patches - these patches are released immediately, and will be included in future maintenance releases.

Uses
Regression testing can be used not only for testing the correctness of a program, but also to track the quality of its output. For instance, in the design of a compiler, regression testing should track the code size, simulation time, and compilation time of the test suite cases.
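
A minimal sketch of what such re-running looks like in practice, assuming JUnit 4 and a hypothetical discountedPrice function with two previously fixed (and entirely illustrative) bug reports pinned to tests:

import org.junit.Test;
import static org.junit.Assert.*;

// Hypothetical regression suite: each test is pinned to a previously fixed
// fault so that a re-emergence of the bug fails the build immediately.
public class DiscountCalculatorRegressionTest {

    // Hypothetical unit under test.
    static int discountedPrice(int price, int percent) {
        return price - (price * percent) / 100;
    }

    @Test
    public void bug1042_zeroPercentDiscountMustNotChangeThePrice() {
        // Originally failed in an earlier release; re-run on every build since the fix.
        assertEquals(500, discountedPrice(500, 0));
    }

    @Test
    public void bug1107_fullDiscountMustReduceThePriceToZero() {
        // Guards against a "fragile" fix that only handled the case first observed.
        assertEquals(0, discountedPrice(499, 100));
    }
}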

Purpose Of Integration Testing

The purpose of integration testing is to verify the functional, performance, and reliability requirements placed on major design items. These "design items", i.e. assemblages (or groups of units), are exercised through their interfaces using black box testing, with success and error cases being simulated via appropriate parameter and data inputs. Simulated usage of shared data areas and inter-process communication is tested, and individual subsystems are exercised through their input interfaces. Test cases are constructed to test that all components within assemblages interact correctly, for example across procedure calls or process activations. This is done after testing individual modules, i.e. unit testing.
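
A minimal integration test sketch, assuming JUnit 4 and two hypothetical units (OrderService and ReportService) that communicate through a shared data area and are exercised only through their interfaces:

import org.junit.Test;
import static org.junit.Assert.*;
import java.util.HashMap;
import java.util.Map;

// Hypothetical assemblage: OrderService writes to a shared store and
// ReportService reads from it; the test drives only the external interfaces.
class SharedStore {
    final Map<String, Integer> orders = new HashMap<>();
}

class OrderService {
    private final SharedStore store;
    OrderService(SharedStore store) { this.store = store; }
    void placeOrder(String id, int amount) { store.orders.put(id, amount); }
}

class ReportService {
    private final SharedStore store;
    ReportService(SharedStore store) { this.store = store; }
    int totalAmount() {
        return store.orders.values().stream().mapToInt(Integer::intValue).sum();
    }
}

public class OrderReportIntegrationTest {
    @Test
    public void dataWrittenByOneUnitIsVisibleThroughTheOtherUnitsInterface() {
        SharedStore store = new SharedStore();
        OrderService orders = new OrderService(store);
        ReportService reports = new ReportService(store);

        orders.placeOrder("A-1", 200);   // simulated input through the interface
        orders.placeOrder("A-2", 300);

        assertEquals(500, reports.totalAmount());
    }
}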

Orthogonal Defect Classification (ODC)

Improving software testing via ODC

Orthogonal Defect Classification (ODC) is a methodology used to classify software defects. When combined with a set of data analysis techniques designed to suit the software development process, ODC provides a powerful way to evaluate the development process and software product. Software systems continue to grow steadily in complexity and size. The business demands for shorter development cycles have forced software development organizations to struggle to find a compromise among functionality, time to market, and quality. Lack of skills, schedule pressures, limited resources, and the highly manual nature of software development have led to problems for both large and small organizations alike. These problems include incomplete design, inefficient testing, poor quality, high development and maintenance costs, and poor customer satisfaction. As a way to prevent defects from being delivered, or "escaping," to customers, companies are investing more resources in the testing of software. In addition to improving the many other aspects of testing (e.g., the skill level of testers, test automation, development of new tools, and the testing process), it is important to have a way to assess the current testing process for its strengths and weaknesses and to highlight the risks and exposures that exist. Although it is well documented that it is less expensive to find defects earlier in the process, and certainly much more expensive to fix them once they are in the field, testers are not usually aware of what their specific risks and exposures are or how to strengthen testing to meet their quality goals.

ODC, a defect analysis technique, can therefore be used to evaluate testing processes.

ODC deployment process. The process for deploying ODC has evolved over the last 10 years. However, the following basic steps are critical in order for the ODC deployment to be successful:
* Management must make a commitment to the deployment of ODC and the implementation of actions resulting from the ODC assessments.
* The defect data must be classified by the technical teams and stored in an easily accessible database.
* The classified defects are then validated on a regular basis to ensure the consistency and correctness of the classification.
* Once validation has occurred, assessment of the data must be performed on a regular basis. Typically, the assessment is done by a technical person who is familiar with the project and has the interest and skills for analyzing data. A user-friendly tool for visualizing data is needed.
* Regular feedback of the validation and assessment results to the technical teams is important. It improves the quality of the classification. It also provides the teams with the necessary information so that they can determine the appropriate actions for improvement. This feedback is also important in obtaining the necessary commitment from the technical teams. Once they see the objective, quantified data, and the reasonable and feasible actions that result, commitment of the teams to the ODC process usually increases.
* Once the feedback is given to the teams, they can then identify and prioritize actions to be implemented.

When this ODC process has been integrated into the process of the organization, the full range of benefits can be realized. The development process and the resulting product can be monitored and improved on an ongoing basis so that product quality is built in from the earliest stages of development.

Classification and validation of defects. The classification of the defects occurs at two different points in time. When a defect is first detected, or submitted, the ODC submitter attributes of activity, trigger, and impact are classified.
* Activity refers to the actual process step (code inspection, function test, etc.) that was being performed at the time the defect was discovered.
* Trigger describes the environment or condition that had to exist to expose the defect.
* Impact refers to either the perceived or actual impact on the customer.
When the defect has been fixed, or responded to, the ODC responder attributes, which are target, defect type, qualifier, source, and age, can be classified.
* Target represents the high-level identity (design, code, etc.) of the entity that was fixed.
* Defect type refers to the specific nature of the defect fix.
* Qualifier specifies whether the fix that was made was due to missing, incorrect, or extraneous code or information.
* Source indicates whether the defect was found in code written in house, reused from a library, ported from one platform to another, or outsourced to a vendor.
* Age specifies whether the defect was found in new, old (base), rewritten, or refixed code.

Typically, the ODC attributes are captured in the same tool that is used to collect other defect information with minor enhancements. Two methods are used to validate data. The individually classified defects can be reviewed for errors by a person with the appropriate skills. This may be needed only until the team members become comfortable with the classification and its use. It is also possible to use an aggregate analysis of data to help with validation. Although this method of validation is quicker, it does require skills beyond classification. In order to perform a validation using this method, the validator reviews the distribution of defect attributes. If there are internal inconsistencies in the information contained in the data or with the process used, it points to potential problems in the quality of the data, which can be addressed by a more detailed review of the subset of defects under question. Even in cases where there is a misunderstanding by a person in the classification step, it is typically limited to one or two specific aspects, which can be clarified easily. Once the team understands the basic concepts and their use, data quality is no longer a problem.

Data assessment. Once the data have been validated, they are ready for assessment. When doing an assessment, the concern is not with a single defect, as is done with causal analysis. Rather, trends and patterns in the aggregate data are studied. Data assessment of ODC-classified data is based on the relationships of the ODC attributes to one another and to non-ODC attributes such as component, severity, and defect open date. For example, to evaluate product stability, the relationships among the attributes of defect type, qualifier, open date, and severity of defects might be considered. A trend of increasing "missing function" defect types or increasing high-severity defects may indicate that product stability is decreasing.
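
One possible way to capture these attributes in code (a sketch only; the enum values below are illustrative examples drawn from the descriptions above, not the complete official ODC value sets) is a simple defect record:

import java.time.LocalDate;

enum Activity  { CODE_INSPECTION, UNIT_TEST, FUNCTION_TEST, SYSTEM_TEST }
enum Trigger   { BOUNDARY_CONDITION, WORKLOAD_STRESS, RECOVERY, INTERACTION }
enum Impact    { RELIABILITY, USABILITY, PERFORMANCE, CAPABILITY }
enum Target    { DESIGN, CODE, BUILD, DOCUMENTATION }
enum Qualifier { MISSING, INCORRECT, EXTRANEOUS }
enum Source    { IN_HOUSE, REUSED_LIBRARY, PORTED, OUTSOURCED }
enum Age       { NEW, BASE, REWRITTEN, REFIXED }

class OdcDefect {
    // Submitter attributes, classified when the defect is opened.
    Activity activity;
    Trigger trigger;
    Impact impact;
    // Responder attributes, classified when the defect is fixed.
    Target target;
    String defectType;      // specific nature of the fix, e.g. "function" or "interface"
    Qualifier qualifier;
    Source source;
    Age age;
    // Non-ODC attributes used alongside the classification during assessment.
    String component;
    int severity;
    LocalDate openDate;
}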

Test Cases for compose box In Mail

How to write a test case on compose box in mail?


Functional tests

Check whether:
a) Clicking Compose Mail takes you to the Compose Mail page.
b) The page has To, Cc, and Bcc fields to enter email addresses, a Subject field to enter the subject of the mail, and a text body (space to enter the text).
c) To, Cc, and Bcc accept text.
d) Subject accepts text.
e) The text body accepts text.
f) In To, Cc, and Bcc you can delete, edit, cut, copy, and paste text.
g) In Subject you can delete, edit, cut, copy, and paste text.
h) In the text body you can delete, edit, cut, copy, paste, and format text.
i) You can attach a file.
j) You can send, save, or discard the mail.

System tests (load tests)

Check:
a) The number of email addresses that can be entered in To, Cc, and Bcc.
b) The maximum length of the subject.
c) The maximum number of words that can be entered in the text body.
d) The maximum size of a file that can be attached.
e) The maximum number of files that can be attached.


Performance testing: If sending and receiving mail are considered, then we could test the performance of the email server as follows:
1) If one user is connected, what is the time taken to receive a single mail?
2) If thousands of users are connected, what is the time taken to receive the same mail?
3) If thousands of users are connected, what is the time taken to receive a mail with a huge attachment?

Usability testing:
1) Check that if part of an email address is entered, the matching email addresses are displayed.
2) If you try to send the mail without a subject or body text, a warning is displayed.
3) If To, Cc, or Bcc contains an address without an "@", a warning that the mail ID is invalid is displayed immediately.
4) Mails being composed should be automatically saved as drafts.
You can add some more test cases.
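
For the boundary checks listed under the load tests (for example, the maximum subject length), a small automated sketch is possible; the MailComposer class and the 255-character limit below are assumptions, not the real product's limits:

import org.junit.Test;
import static org.junit.Assert.*;

// Hypothetical composer model with an assumed 255-character subject limit;
// the limit and method names are placeholders for whatever the real
// requirements document specifies.
class MailComposer {
    static final int MAX_SUBJECT_LENGTH = 255;
    boolean isSubjectAccepted(String subject) {
        return subject.length() <= MAX_SUBJECT_LENGTH;
    }
}

public class ComposeSubjectBoundaryTest {
    private String subjectOfLength(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) sb.append('x');
        return sb.toString();
    }

    @Test
    public void subjectAtTheLimitIsAcceptedAndOneOverIsRejected() {
        MailComposer composer = new MailComposer();
        assertTrue(composer.isSubjectAccepted(subjectOfLength(255)));
        assertFalse(composer.isSubjectAccepted(subjectOfLength(256)));
    }
}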

Test Cases For Login Window

How to write test case of Login window ?

To check whether the entered user name and password are valid or invalid.
Test case: Authentication
Test data: User Name = COES and Password = COES

Step 1: Enter the user name only and press the LOGIN button. Data: User Name = COES. Expected result: the warning message "Please enter User Name and Password" is displayed.

Step 2: Enter the password only and press the LOGIN button. Data: Password = COES. Expected result: the warning message "Please enter User Name and Password" is displayed.

Step 3: Enter the user name and password and press the LOGIN button. Data: User Name = COES, Password = XYZ. Expected result: the warning message "Please enter User Name and Password" is displayed.

Step 4: Enter the user name and password and press the LOGIN button. Data: User Name = XYZ, Password = COES. Expected result: the warning message "Please enter User Name and Password" is displayed.

Step 5: Enter the user name and password and press the LOGIN button. Data: User Name = XYZ, Password = XYZ. Expected result: the warning message "Please enter User Name and Password" is displayed.

Step 6: Enter the user name and password and press the LOGIN button. Data: User Name = "", Password = "". Expected result: the warning message "Please enter User Name and Password" is displayed.

Step 7: Enter the user name and password and press the LOGIN button. Data: User Name = COES, Password = COES. Expected result: the application navigates to the CoesCategoryList.asp page.

Step 8: Enter the user name and password and press the LOGIN button. Data: User Name = ADMIN, Password = ADMIN. Expected result: the application navigates to the Maintenance page.

(Actual results are recorded against each step when the test is executed.)
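
The same steps can also be automated; the LoginService stub below and its return strings are hypothetical stand-ins for the real application, used only to show how the table translates into JUnit 4 assertions:

import org.junit.Test;
import static org.junit.Assert.*;

// Hypothetical authentication stub mirroring the table above: only the
// COES/COES and ADMIN/ADMIN pairs are valid; everything else is rejected.
class LoginService {
    String login(String user, String password) {
        if ("COES".equals(user) && "COES".equals(password)) {
            return "CoesCategoryList.asp";
        }
        if ("ADMIN".equals(user) && "ADMIN".equals(password)) {
            return "Maintenance";
        }
        return "Please enter User Name and Password";
    }
}

public class LoginWindowTest {
    private final LoginService service = new LoginService();

    @Test
    public void invalidOrIncompleteCredentialsShowTheWarningMessage() {
        assertEquals("Please enter User Name and Password", service.login("COES", ""));
        assertEquals("Please enter User Name and Password", service.login("", "COES"));
        assertEquals("Please enter User Name and Password", service.login("COES", "XYZ"));
        assertEquals("Please enter User Name and Password", service.login("XYZ", "COES"));
        assertEquals("Please enter User Name and Password", service.login("", ""));
    }

    @Test
    public void validCredentialsNavigateToTheExpectedPage() {
        assertEquals("CoesCategoryList.asp", service.login("COES", "COES"));
        assertEquals("Maintenance", service.login("ADMIN", "ADMIN"));
    }
}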

Integration Testcases

How to write Integration Testcases?

The design document will contain all the system components, so integration test cases can be written by referring to it, or by using the traceability matrix, which describes the mapping between the FRS and the test cases.

"Integration Test Cases are written based on logical design to ensure complete coverage of all logical design elements."

In integration testing you need to write test cases for the interfaces between the modules. For example, if you are writing test cases for the integration of three modules M1, M2, and M3, you first need test cases for the individual behaviour of M1, M2, and M3. After that you need to write test cases for their combinations: when M1 and M2 are integrated, to test how M1+M2 behaves and to verify the interface between them, and in the same way for the combinations M2+M3, M3+M1, and M1+M2+M3. When integrating different modules, the test cases should mainly verify the interface between the modules and verify whether the behaviour meets the requirements. Integration test cases follow the same format as normal test cases, but with some additional fields, such as the modules integrated.

Types Of Testing

What kinds of testing should be considered?

* Black box testing -
not based on any knowledge of internal design or code. Tests are based on requirements and functionality.

* White box testing - based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, conditions.

* unit testing - the most 'micro' scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.

* incremental integration testing - continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.

* integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

* functional testing - black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing.)

* system testing - black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.

* end-to-end testing - similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

* sanity testing or smoke testing - typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.

* regression testing - re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.

* acceptance testing - final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.

* load testing - testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.

* stress testing - term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.

* performance testing - term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.

* usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.

* install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.

* recovery testing - testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

* failover testing - typically used interchangeably with 'recovery testing'

* security testing - testing how well the system protects against unauthorized internal or external access, willful damage, etc; may require sophisticated testing techniques.

* compatibility testing - testing how well software performs in a particular hardware/software/operating system/network/etc. environment.

* exploratory testing - often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.

* ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.

* context-driven testing - testing driven by an understanding of the environment, culture, and intended use of software. For example, the testing approach for life-critical medical
equipment software would be completely different than that for a low-cost computer game.

* user acceptance testing - determining if software is satisfactory to an end-user or customer.

* comparison testing - comparing software weaknesses and strengths to competing products.

* alpha testing - testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.

* beta testing - testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.

* mutation testing - a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.

Test Cases For Withdraw Module In Banking

test case for withdraw module in banking proj...

Step 1: When the balance in the account is nil, try to withdraw some amount (amount > 0). The system should display the message "Insufficient funds in account".
Step 2: When the account has some balance, try to withdraw an amount greater than the balance. The system should display "Insufficient funds in account".
Step 3: When the account has some balance, enter an amount less than or equal to the balance. The correct amount should be withdrawn from the account.
Step 4: When the account has some balance, enter the amount as 0. The system should display a message that the withdrawal amount should be greater than 0 and should be in multiples of hundreds (this varies depending on the requirements document).
In the case where a minimum balance is mandatory in the account:
Step 5: When the account has a balance, try to withdraw the whole amount. The system should display the message "Minimum balance should be maintained".
Step 6: When the account balance equals the minimum balance, try to withdraw any amount. The system should display the message "Minimum balance should be maintained".
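
A sketch of these steps as automated JUnit 4 tests, assuming a hypothetical Account class, an assumed minimum balance of 500, and the messages quoted above:

import org.junit.Test;
import static org.junit.Assert.*;

// Hypothetical account model; the messages and amounts are placeholders
// for whatever the real requirements document specifies.
class Account {
    static final int MINIMUM_BALANCE = 500;
    private int balance;
    Account(int openingBalance) { this.balance = openingBalance; }

    String withdraw(int amount) {
        if (amount <= 0) return "Withdrawal amount should be > 0";
        if (amount > balance) return "Insufficient funds in account";
        if (balance - amount < MINIMUM_BALANCE) return "Minimum balance should be maintained";
        balance -= amount;
        return "Success";
    }
    int balance() { return balance; }
}

public class WithdrawModuleTest {
    @Test
    public void withdrawingFromAnEmptyAccountIsRejected() {
        assertEquals("Insufficient funds in account", new Account(0).withdraw(100));
    }

    @Test
    public void withdrawingMoreThanTheBalanceIsRejected() {
        assertEquals("Insufficient funds in account", new Account(1000).withdraw(2000));
    }

    @Test
    public void aValidWithdrawalDebitsTheCorrectAmount() {
        Account account = new Account(2000);
        assertEquals("Success", account.withdraw(1000));
        assertEquals(1000, account.balance());
    }

    @Test
    public void withdrawalsThatBreakTheMinimumBalanceAreRejected() {
        assertEquals("Minimum balance should be maintained", new Account(600).withdraw(200));
    }
}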

Design predicates And Types of Calls

Design predicates

Design predicates are a method, invented by Thomas McCabe, to quantify the complexity of the integration of two units of software. Each of the four types of design predicates has an associated integration complexity rating. For pieces of code that apply more than one design predicate, integration complexity ratings can be combined.
The sum of the integration complexity for a unit of code, plus one, is the maximum number of test cases necessary to exercise the integration fully, though a test engineer can typically reduce this by covering as many previously uncovered design predicates as possible with each new test. Also, some combinations of design predicates might be logically impossible.

Types of Calls

Unconditional Call
Unit A always calls unit B. This has an integration complexity of 0. For example:
unitA::functionA() {
unitB->functionB();
}

Conditional Call
Unit A may or may not call unit B. This integration has a complexity of 1, and needs two tests: one that calls B, and one that doesn't.
unitA::functionA() {
if (condition)
unitB->functionB();
}

Mutually Exclusive Conditional Call
This is like a programming language's switch statement. Unit A calls exactly one of several possible units. Integration complexity is n - 1, where n is the number of possible units to call.
unitA::functionA() {
switch (condition) {
case 1:
unitB->functionB();
break;
case 2:
unitC->functionC();
break;
...
default:
unitN->functionN();
break;
}
}

Iterative Call
In an iterative call, unit A calls unit B at least once, but maybe more. This integration has a complexity of 1. It also requires two tests: one that calls unit B once, and one test that calls it more than once.
unitA::functionA() {
do {
unitB->functionB();
} while (condition);
}

Combining Calls
Any particular integration can combine several types of calls. For example, unit A may or may not call unit B; and if it does, it can call it one or more times. This integration combines a conditional call, with its integration complexity of 1, and an iterative call, with its integration complexity of 1. The combined integration complexity totals 2.
unitA::functionA() {
if (someNumber > 0) {
for ( i = 0 ; i < someNumber ; i++ ) {
unitB->functionB();
}
}
}
Since the number of necessary tests is the total integration complexity plus one, this integration would require 3 tests. In one, where someNumber isn't greater than 0, unit B isn't called. In another, where someNumber is 1, unit B is called once. And in the final test, where someNumber is greater than 1, unit B is called more than once.
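
A sketch of those three tests in JUnit 4, using a simple call counter in place of unit B so the number of calls can be asserted (the class names mirror the pseudocode above):

import org.junit.Test;
import static org.junit.Assert.*;

// A simple counter stands in for unit B so that the number of calls is observable.
class UnitB {
    int calls = 0;
    void functionB() { calls++; }
}

class UnitA {
    private final UnitB unitB;
    UnitA(UnitB unitB) { this.unitB = unitB; }
    void functionA(int someNumber) {
        if (someNumber > 0) {
            for (int i = 0; i < someNumber; i++) {
                unitB.functionB();
            }
        }
    }
}

public class CombinedCallIntegrationTest {
    @Test
    public void unitBIsNotCalledWhenSomeNumberIsNotPositive() {
        UnitB b = new UnitB();
        new UnitA(b).functionA(0);
        assertEquals(0, b.calls);
    }

    @Test
    public void unitBIsCalledOnceWhenSomeNumberIsOne() {
        UnitB b = new UnitB();
        new UnitA(b).functionA(1);
        assertEquals(1, b.calls);
    }

    @Test
    public void unitBIsCalledRepeatedlyWhenSomeNumberIsGreaterThanOne() {
        UnitB b = new UnitB();
        new UnitA(b).functionA(3);
        assertEquals(3, b.calls);
    }
}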

Cyclomatic complexity

Cyclomatic complexity is a software metric (measurement). It was developed by Thomas McCabe and is used to measure the complexity of a program. It directly measures the number of linearly independent paths through a program's source code.

The concept, although not the method, is somewhat similar to that of general text complexity measured by the Flesch-Kincaid Readability Test.

Cyclomatic complexity is computed using a graph that describes the control flow of the program. The nodes of the graph correspond to the commands of a program. A directed edge connects two nodes if the second command might be executed immediately after the first command.

Definition

M = E − N + 2P
where
M = cyclomatic complexity
E = the number of edges of the graph
N = the number of nodes of the graph
P = the number of connected components.

"M" is alternatively defined to be one larger than the number of decision points (if/case-statements, while-statements, etc) in a module (function, procedure, chart

node, etc.), or more generally a system.
Separate subroutines are treated as being independent, disconnected components of the program's control flow graph.


Alternative definition
v(G) = e − n + p
G is a program's flowgraph
e is the number of edges (arcs) in the flowgraph
n is the number of nodes in the flowgraph
p is the number of connected components

Alternative way
There is another simple way to determine the cyclomatic number. This is done by counting the number of closed loops in the flow graph, and incrementing that number by one.
i.e.
M = Number of closed loops + 1
where
M = Cyclomatic number.

Implications for Software Testing
·M is a lower bound for the number of possible paths through the control flow graph.
·M is an upper bound for the number of test cases that are necessary to achieve a complete branch coverage.
For example, consider a program that consists of two sequential if-then-else statements.
if (c1) {
f1();
} else {
f2();
}
if (c2) {
f3();
} else {
f4();
}
· To achieve complete branch coverage, two test cases are sufficient here (one in which both conditions are true, covering f1 and f3, and one in which both are false, covering f2 and f4).
· For complete path coverage, four test cases are necessary, one for each combination of the two conditions.
· The cyclomatic number M is three: with the usual control flow graph for this fragment, E = 8 edges, N = 7 nodes, and P = 1 connected component, so M = 8 − 7 + 2 = 3 (equivalently, two decision points plus one). M falls in the range between these two values, as it does for any program.

Tuesday, January 20, 2009

The goals of Software Configuration Management (SCM )

The goals or purposes of SCM are generally:

·Configuration Identification- What code are we working with?
·Configuration Control- Controlling the release of a product and its changes.
·Status Accounting- Recording and reporting the status of components.
·Review- Ensuring completeness and consistency among components.
·Build Management- Managing the process and tools used for builds.
·Process Management- Ensuring adherence to the organization's development process.
·Environment Management- Managing the software and hardware that host our system.
·Teamwork- Facilitate team interactions related to the process.
·Defect Tracking- making sure every defect has traceability back to the source

Software Configuration Management (SCM)

Software Configuration Management (SCM) is part of configuration management (CM). Roger Pressman, in his book Software Engineering: A Practitioner's Approach, says that software configuration management (SCM) is a "set of activities designed to control change by identifying the work products that are likely to change, establishing relationships among them, defining mechanisms for managing different versions of these work products, controlling the changes imposed, and auditing and reporting on the changes made." In other words, "SCM is a methodology to control and manage a software development project."

SCM concerns itself with answering the question: somebody did something, how can one reproduce it? Often the problem involves not reproducing "it" identically, but with controlled, incremental changes. Answering the question will thus become a matter of comparing different results and of analysing their differences. Traditional CM typically focused on controlled creation of relatively simple products. Nowadays, implementers of SCM face the challenge of dealing with relatively minor increments under their own control, in the context of the complex system being developed.

The specific terminology of SCM includes:

·Source configuration management (Often used to indicate that a variety of artifacts may be managed and versioned, including software code, documents, design models, and even the directory structure itself.)
·Revision control (also known as version control or source control)
·Source code
·Change management
·Configuration item
·software configuration
·Change set
·Baseline (configuration management)

In particular, the former vendor, Atria (later Rational Software, now a part of IBM), used "SCM" to stand for "Software Configuration Management".
The analyst firm Gartner Inc. uses the term Software Change and Configuration Management (SCCM).

Software deployment Theory

Software deployment is all of the activities that make a software system available for use.
The general deployment process consists of several interrelated activities with possible transitions between them. These activities can occur at the producer site or at the consumer site or both. Because every software system is unique, the precise processes or procedures within each activity can hardly be defined. Therefore, "deployment" should be interpreted as a general process that has to be customized according to specific requirements or characteristics.

Deployment activities

Release
The release activity follows from the completed development process. It includes all the operations to prepare a system for assembly and transfer to the customer site. Therefore, it must determine the resources required to operate at the customer site and collect information for carrying out subsequent activities of deployment process.

Install
Installation is the initial insertion of software into a customer site. Currently, this activity is best supported by specialized tools. The two sub-activities are transfer and configure. The former moves the product from the producer site to the customer site, while the latter refers to all the configuration operations that make the system ready for customer users.

Activate
Activation is the activity of starting up the executable component of the software. For simple systems, it involves establishing some form of command for execution. For complex systems, it should make all the supporting systems ready to use.
In larger software deployments, the working copy of the software might be installed on a production server in a production environment. Other versions of the deployed software may be installed in a test environment, development environment and disaster recovery environment.

Deactivate
Deactivation is the inverse of activation, and refers to shutting down any executing components of a system. Deactivation is often required to perform other deployment activities, e.g., a software system may need to be deactivated before an update can be performed.

Adapt
The adaptation activity is also a process to modify a software system that has been previously installed. It differs from updating in that adaptations are initiated by local events such as changing the environment of customer site, while updating is mostly started from remote software producer.

Uninstall
Uninstallation is the inverse of installation. It is the removal of a system that is no longer required. It also involves some reconfiguration of other software systems in order to remove the uninstalled system's files and dependencies. This is not to be confused with the term "deinstall", which is not actually a word.

Retire
Ultimately, a software system is marked as obsolete and support by the producers is withdrawn. It is the end of the life cycle of a software product.

Types of Testing Documentation

Types of Documentation

Documentation is an important part of software engineering. Types of documentation include:
·Architecture/Design - Overview of software. Includes relations to an environment and construction principles to be used in design of software components.
·Technical - Documentation of code, algorithms, interfaces, and APIs.
·End User - Manuals for the end-user, system administrators and support staff.
·Marketing - Product briefs and promotional collateral.

Architecture/Design Documentation
Architecture documentation is a special breed of design document. In a way, architecture documents are the third derivative from the code (design documents being the second derivative, and code documents being the first). Very little in the architecture documents is specific to the code itself. These documents do not describe how to program a particular routine, or even why that particular routine exists in the form that it does, but instead merely lay out the general requirements that would motivate the existence of such a routine. A good architecture document is short on details but thick on explanation. It may suggest approaches for lower-level design, but leaves the actual exploration trade studies to other documents.

Technical Documentation
This is what most programmers mean when using the term software documentation. When creating software, code alone is insufficient. There must be some text along with it to describe various aspects of its intended operation. It is important for the code documents to be thorough, but not so verbose that it becomes difficult to maintain them.

Often, tools such as Doxygen, javadoc, ROBODoc, POD or TwinText can be used to auto-generate the code documents—that is, they extract the comments from the source code and create reference manuals in such forms as text or HTML files. Code documents are often organized into a reference guide style, allowing a programmer to quickly look up an arbitrary function or class.
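
For example, a javadoc-style comment block like the following (the MessageQueue class is illustrative only) is exactly what such tools extract into a reference manual:

import java.util.ArrayList;
import java.util.List;

/**
 * A small example of the documentation comments that javadoc-style tools
 * (javadoc, Doxygen, and similar) extract into HTML or text reference
 * manuals. The class itself is illustrative only.
 */
public class MessageQueue {

    private final List<String> messages = new ArrayList<>();

    /**
     * Adds a message to the end of the queue.
     *
     * @param message the text to enqueue; must not be null
     * @throws IllegalArgumentException if {@code message} is null
     */
    public void enqueue(String message) {
        if (message == null) {
            throw new IllegalArgumentException("message must not be null");
        }
        messages.add(message);
    }

    /**
     * @return the number of messages currently held in the queue
     */
    public int size() {
        return messages.size();
    }
}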

User Documentation
Unlike code documents, user documents are usually far divorced from the source code of the program, and instead simply describe how it is used.

In the case of a software library, the code documents and user documents could be effectively equivalent and are worth conjoining, but for a general application this is not often true. On the other hand, the Lisp machine grew out of a tradition in which every piece of code had an attached documentation string. In combination with
strong search capabilities (based on a Unix-like apropos command), and online sources, Lispm users could look up documentation and paste the associated function directly into their own code. This level of ease of use is unheard of in putatively more modern systems.

Typically, the user documentation describes each feature of the program, and assists the user in realising these features. A good user document can also go so far as to provide thorough troubleshooting assistance. It is very important for user documents not to be confusing, and for them to be up to date. User documents need not be organized in any particular way, but it is very important for them to have a thorough index. Consistency and simplicity are also very valuable. User documentation is considered to constitute a contract specifying what the software will do.

Marketing Documentation
For many applications it is necessary to have some promotional materials to encourage casual observers to spend more time learning about the product. This form of documentation has three purposes:
1. To excite the potential user about the product and instill in them a desire for becoming more involved with it.
2. To inform them about what exactly the product does, so that their expectations are in line with what they will be receiving.
3. To explain the position of this product with respect to other alternatives.

One good marketing technique is to provide clear and memorable catch phrases that exemplify the point we wish to convey, and also emphasize the interoperability of the program with anything else provided by the manufacturer.

Software Quality Assurance (SQA)

Software Quality Assurance (SQA) consists of the software engineering processes and methods used to ensure quality. SQA encompasses the entire software development process, which may include processes such as reviewing requirements documents, source code control, code reviews, change management, configuration management, release management and of course, software testing.

Software quality assurance is related to the practice of quality assurance in product manufacturing. There are, however, some notable differences between software and a manufactured product. These differences all stem from the fact that the manufactured product is physical and can be seen whereas the software product is not visible. Therefore its function, benefit and costs are not as easily measured. What's more, when a manufactured product rolls off the assembly line, it is essentially a complete, finished product, whereas software is never finished. Software lives, grows, evolves, and metamorphoses, unlike its tangible counterparts. Therefore, the processes and methods to manage, monitor, and measure its ongoing quality are as fluid and sometimes elusive as are the defects that they are meant to keep in check.

Software design vs. software implementation

Software design vs. software implementation

Software testers should not be limited to testing the software implementation; they should also test the software design. With this assumption, the role and involvement of testers will change dramatically. The test cycle will change too. To test software design, testers will review requirement and design specifications together with the designer and programmer. This will help to identify bugs earlier.

Exploratory vs. Scripted Testing

Exploratory vs. scripted

Exploratory testing means simultaneous test design and test execution with an emphasis on learning. Scripted testing means that learning and test design happen prior to test execution, and quite often the learning has to be done again during test execution. Exploratory testing is very common, but in most writing and training about testing it is barely mentioned and generally misunderstood. Some writers consider it a primary and essential practice. Structured exploratory testing is a compromise when the testers are familiar with the software. A vague test plan, known as a test charter, is written up, describing what functionalities need to be tested but not how, allowing the individual testers to choose the method and steps of testing.

There are two main disadvantages associated with a primarily exploratory testing approach. The first is that there is no opportunity to prevent defects, which can happen when the designing of tests in advance serves as a form of structured static testing that often reveals problems in system requirements and design. The second is that, even with test charters, demonstrating test coverage and achieving repeatability of tests using a purely exploratory testing approach is difficult. For this reason, a blended approach of scripted and exploratory testing is often used to reap the benefits while mitigating each approach's disadvantages.

Agile vs. traditional Testing

Agile vs. traditional

Starting around 1990, a new style of writing about testing began to challenge what had come before. The seminal work in this regard is widely considered to be Testing Computer Software, by Cem Kaner.[5] Instead of assuming that testers have full access to source code and complete specifications, these writers, including Kaner and James Bach, argued that testers must learn to work under conditions of uncertainty and constant change. Meanwhile, an opposing trend toward process "maturity" also gained ground, in the form of the Capability Maturity Model. The agile testing movement (which includes but is not limited to forms of testing practiced on agile development projects) has popularity mainly in commercial circles, whereas the CMM was embraced by government and military software providers.

However, saying that "maturity models" like CMM gained ground against or opposing Agile testing may not be right. Agile movement is a 'way of working', while CMM is a process improvement idea.

But another point of view must be considered: the operational culture of an organization. While it may be true that testers must have an ability to work in a world of uncertainty, it is also true that their flexibility must have direction. In many cases test cultures are self-directed and, as a result, fruitless; unproductive results can ensue. Furthermore, providing positive evidence of defects may either indicate that you have found the tip of a much larger problem, or that you have exhausted all possibilities. A framework is a test of testing. It provides a boundary that can measure (validate) the capacity of our work. Both sides have argued, and will continue to
argue, the virtues of their work. The proof, however, is in each and every assessment of delivery quality. It does little good to test systematically if you are too narrowly focused. On the other hand, finding a bunch of errors is not an indicator that agile methods were the driving force; you may simply have stumbled upon an obviously poor piece of work.

Code coverage - white Box Technique

Code coverage is inherently a white box testing activity. The target software is built with special options or libraries and/or run under a special environment such that every function that is exercised (executed) in the program(s) is mapped back to the function points in the source code. This process allows developers and quality assurance personnel to look for parts of a system that are rarely or never accessed under normal conditions (error handling and the like) and helps reassure test engineers that the most important conditions (function points) have been tested.

Test engineers can look at code coverage test results to help them devise test cases and input or configuration sets that will increase the code coverage over vital functions. Two common forms of code coverage used by testers are statement (or line) coverage and path (or edge) coverage. Line coverage reports on the execution footprint of testing in terms of which lines of code were executed to complete the test. Edge coverage reports which branches, or code decision points, were executed to complete the test. Both report a coverage metric, measured as a percentage.
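
A small sketch of the difference, assuming JUnit 4 and a hypothetical ShippingCalculator: the first test alone yields full line coverage but incomplete edge coverage, and the second test completes the branch coverage.

import org.junit.Test;
import static org.junit.Assert.*;

// Hypothetical unit used to illustrate the difference between line coverage
// and branch (edge) coverage as measured by a coverage tool.
class ShippingCalculator {
    static int shippingCost(int orderTotal) {
        int cost = 50;
        if (orderTotal >= 1000) {
            cost = 0;            // free shipping branch
        }
        return cost;
    }
}

public class ShippingCalculatorCoverageTest {
    @Test
    public void largeOrderExecutesEveryLineButNotEveryBranch() {
        // This single test touches every line (100% line coverage) because the
        // "if" body runs, yet the false branch of the condition is never taken.
        assertEquals(0, ShippingCalculator.shippingCost(1500));
    }

    @Test
    public void smallOrderCompletesBranchCoverage() {
        // Adding this test exercises the false branch as well, bringing edge
        // coverage to 100%.
        assertEquals(50, ShippingCalculator.shippingCost(200));
    }
}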

Generally code coverage tools and libraries exact a performance and/or memory or other resource cost which is unacceptable to normal operations of the software. Thus they are only used in the lab. As one might expect there are classes of software that cannot be feasibly subjected to these coverage tests, though a degree of coverage mapping can be approximated through analysis rather than direct testing.

There are also some sorts of defects which are affected by such tools. In particular some race conditions or similar real time sensitive operations can be masked when run under code coverage environments; and conversely some of these defects may become easier to find as a result of the additional overhead of the testing code.

Code coverage may be regarded as a more up-to-date incarnation of debugging, in that the automated tools used to achieve statement and path coverage are often referred to as "debugging utilities". These tools allow the program code under test to be observed on screen whilst the program is executing, and commands and keyboard function keys are available to allow the code to be "stepped" through literally line by line. Alternatively, it is possible to define pinpointed lines of code as "breakpoints", which will allow a large section of the code to be executed, then stopping at that point and displaying that part of the program on screen. Judging where to put breakpoints is based on a reasonable understanding of the program indicating that a particular defect is thought to exist around that point. The data values held in program variables can also be examined and in some instances (with care) altered to try out "what if" scenarios. Clearly, use of a debugging tool is more the domain of the software engineer at the unit test level, and it is more likely that the software tester will ask the software engineer to perform this. However, it is useful for the tester to understand the concept of a debugging tool.

A sample testing cycle

Testing Cycle With Simple explanation

Although testing varies between organizations, there is a cycle to testing:

1. Requirements Analysis: Testing should begin in the requirements phase of the software development life cycle. During the design phase, testers work with developers to determine what aspects of a design are testable and under what parameters those tests will work.

2. Test Planning: Test Strategy, Test Plan(s), Test Bed creation. A lot of activities will be carried out during testing, so a plan is needed.

3.Test Development: Test Procedures, Test Scenarios, Test Cases, Test Scripts to use in testing software.

4.Test Execution: Testers execute the software based on the plans and tests and report any errors found to the development team.

5.Test Reporting: Once testing is completed, testers generate metrics and make final reports on their test effort and whether or not the software tested is ready for release.

6.Retesting the Defects

Test cases, suites, scripts, and scenarios

A test case is a software testing document which consists of event, action, input, output, expected result, and actual result. Clinically defined (IEEE 829-1998), a test case is an input and an expected result. This can be as pragmatic as "for condition x your derived result is y", whereas other test cases describe the input scenario and the expected results in more detail.

The term test script is the combination of a test case, test procedure, and test data. Initially the term was derived from the product of work created by automated regression test tools. Today, test scripts can be manual, automated, or a combination of both.

The most common term for a collection of test cases is a test suite. The test suite often also contains more detailed instructions or goals for each collection of test cases. It definitely contains a section where the tester identifies the system configuration used during testing. A group of test cases may also contain prerequisite states or steps, and descriptions of the following tests.

Collections of test cases are sometimes incorrectly termed a test plan. They might correctly be called a test specification. If sequence is specified, it can be called a test script, scenario, or procedure.

Test Automation Evolution

Test automation is the use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions. Commonly, test automation involves automating a manual process already in place that uses a formalized testing process.

Over the past few years, tools that help programmers quickly create applications with graphical user interfaces have dramatically improved programmer productivity. This has increased the pressure on testers, who are often perceived as bottlenecks to the delivery of software products. Testers are being asked to test more and more code in less and less time. Test automation is one way to do this, as manual testing is time consuming. As and when different versions of software are released, the new features will have to be tested manually time and again. But, now there are tools available that help the testers in the automation of the GUI which reduce the test time as well as the cost; other test automation tools support execution of performance tests.

Many test automation tools provide record and playback features that allow users to interactively record user actions and replay them any number of times, comparing actual results to those expected. However, reliance on these features poses major reliability and maintainability problems. Most successful automators use a software engineering approach, and as such most serious test automation is undertaken by people with development experience.

A growing trend in software development is to use testing frameworks such as the xUnit frameworks (for example, JUnit and NUnit) which allow the code to conduct unit tests to determine whether various sections of the code are acting as expected in various circumstances. Test cases describe tests that need to be run on the program to verify that the program runs as expected. All three aspects of testing can be automated.
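
A minimal JUnit 4 sketch of this automated compare-actual-to-expected loop (the titleCase function and its expected values are illustrative, not from any particular product):

import org.junit.Test;
import static org.junit.Assert.*;

// A minimal xUnit-style test: the framework runs the method, compares actual
// outcomes to predicted outcomes, and reports any mismatch.
public class TitleCaseTest {

    static String titleCase(String word) {
        if (word == null || word.isEmpty()) return word;
        return Character.toUpperCase(word.charAt(0)) + word.substring(1).toLowerCase();
    }

    @Test
    public void actualOutcomesMatchPredictedOutcomesForEachInput() {
        String[][] cases = {
            {"hello", "Hello"},
            {"WORLD", "World"},
            {"x", "X"},
        };
        for (String[] c : cases) {
            assertEquals("input: " + c[0], c[1], titleCase(c[0]));
        }
    }
}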

Another important aspect of test automation is the idea of partial test automation, or automating parts but not all of the software testing process. If, for example, an oracle cannot reasonably be created, or if fully automated tests would be too difficult to maintain, then a software tools engineer can instead create testing tools to help human testers perform their jobs more efficiently. Testing tools can help automate tasks such as product installation, test data creation, GUI interaction, problem detection (consider parsing or polling agents equipped with oracles), defect logging, etc., without necessarily automating tests in an end-to-end fashion.

Test automation is expensive and it is an addition, not a replacement, to manual testing. It can be made cost-effective in the longer term though, especially in regression testing. One way to generate test cases automatically is model-based testing where a model of the system is used for test case generation, but research continues into a variety of methodologies for doing so.

· UnitTest++ (a C++ unit test framework)
· BuildBot
· Dogtail
· Fanfare Group
· Test Automation Framework
· HttpUnit
· JUnit
· Keyword-driven testing
· Model-based testing
· NUnit
· Parasoft
· PyUnit
· QuickTest Professional
· Software testing
· Unit test
· Microsoft Visual Test
· WET Web Tester
· WinRunner
· Ranorex
· JMeter
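
As a rough sketch of the model-based idea mentioned above (the state names and actions are invented for illustration and do not come from any particular tool), a tiny state model can be walked and each transition turned into a test step:

import java.util.LinkedHashMap;
import java.util.Map;

// Minimal model-based test generation sketch: a hypothetical two-state
// model of a login screen is walked, and every transition becomes a test step.
public class ModelBasedSketch {
    public static void main(String[] args) {
        // state -> (action -> expected next state)
        Map<String, Map<String, String>> model = new LinkedHashMap<>();

        Map<String, String> fromLoggedOut = new LinkedHashMap<>();
        fromLoggedOut.put("validLogin", "LoggedIn");
        fromLoggedOut.put("invalidLogin", "LoggedOut");
        model.put("LoggedOut", fromLoggedOut);

        Map<String, String> fromLoggedIn = new LinkedHashMap<>();
        fromLoggedIn.put("logout", "LoggedOut");
        model.put("LoggedIn", fromLoggedIn);

        int step = 1;
        for (Map.Entry<String, Map<String, String>> state : model.entrySet()) {
            for (Map.Entry<String, String> t : state.getValue().entrySet()) {
                System.out.printf("Test step %d: from state %s, perform %s, expect state %s%n",
                        step++, state.getKey(), t.getKey(), t.getValue());
            }
        }
    }
}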

System Testing And Types


System testing
System testing (testing the whole system) is carried out on the entire system against the Functional Requirement Specification(s) (FRS) and/or the System Requirement Specification (SRS). It is an investigatory testing phase in which the focus is almost destructive: the tester challenges not only the design but also the behaviour, and even the assumed expectations of the customer. It is also intended to test up to and beyond the bounds defined in the software/hardware requirements specification(s).
One could view system testing as the final destructive testing phase before user acceptance testing.

The following examples are different types of testing that should be considered during System testing:
· User interface testing
· Usability testing
· Performance testing
· Compatibility testing
· Error handling testing
· Load testing
· Volume testing
· Stress testing
· User help testing
· Security testing
· Scalability testing
· Capacity testing
· Sanity testing
· Smoke testing
· Exploratory testing
· Ad hoc testing
· Regression testing
· Reliability testing
· Recovery testing
· Installation testing
· Idempotency testing
· Maintenance testing
· Accessibility testing, including compliance with:
o Americans with Disabilities Act of 1990
o Section 508 Amendment to the Rehabilitation Act of 1973
o Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3C)
Although different testing organizations may prescribe different tests as part of system testing, this list serves as a general framework or foundation to begin with.

Test Cases For MS Word

Write test cases for copy & paste in MS Word

For negative testing of copy and paste, we check against the other commands of the package, for example:

1) Pressing Copy on selected content should not apply bold, italics, or other formatting.
2) Pressing Copy on highlighted content should not cut it.

For positive testing of Copy, we can design test cases such as:
1) The Copy icon and command should be disabled (greyed out) when no content is selected.
2) The Copy icon and command should be enabled immediately after any content is selected.
3) Pressing the hot key Ctrl+C without any selection should simply display the clipboard.
4) Copying again and again from different selections should store each item on the clipboard.
5) Copied content should retain its formatting.

For Paste:
1) The Paste icon and command should be disabled at the beginning.
2) They should become enabled just after something is copied.
3) Pasting via the hot key, menu command, or icon should insert the same content with its styles.
4) Pasting n times should insert the same (last copied) content each time.
5) Pasting should insert the copied content we select from the clipboard.
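
Some of the Copy/Paste expectations above can be partially automated outside MS Word itself. As a simplified sketch only, the Java program below uses the standard system clipboard to check that copied plain text comes back unchanged when read back; it stands in for, and does not drive, Word's own clipboard behaviour, and it needs a desktop (non-headless) environment to run.

import java.awt.Toolkit;
import java.awt.datatransfer.Clipboard;
import java.awt.datatransfer.DataFlavor;
import java.awt.datatransfer.StringSelection;

// Simplified check: text placed on the system clipboard ("copy")
// should come back unchanged when read ("paste").
public class ClipboardCopyPasteCheck {
    public static void main(String[] args) throws Exception {
        // Requires a desktop environment; throws HeadlessException otherwise.
        Clipboard clipboard = Toolkit.getDefaultToolkit().getSystemClipboard();

        String original = "Sample selected content";
        clipboard.setContents(new StringSelection(original), null);              // copy
        String pasted = (String) clipboard.getData(DataFlavor.stringFlavor);     // paste

        System.out.println(original.equals(pasted) ? "PASS: content preserved"
                                                   : "FAIL: content changed");
    }
}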

Test Cases For Pen

What can be the various test cases for a pen?

1) Check the pen's brand/company.
2) Check the pen type.
3) Check whether the pen cap is present or not.
4) Check whether the pen is filled with ink or not.
5) Check whether the pen writes or not.
6) Check the ink colour, i.e. black or blue.
7) Check the pen's colour.
8) Check whether the pen can write on all types of paper or not.
9) Check the ink capacity of the pen.
10) Check whether the pen body is made of fibre, plastic, or glass.
11) Check how long the pen will keep writing.
12) Check the type of pen (ink, ball, dot, etc.).
13) Check whether it can write bold or thin.
14) Check the grip of the pen in the hand (how comfortable is it to hold and write with?).
15) Check the type of ink required (ink, gel, fluid; is it refillable?).
16) Check whether it has a pocket clip for easier carrying.
17) Check whether the ink leaks in hot conditions or ceases to flow in cold conditions.
18) Drop the pen on the floor to check how brittle it is, i.e. whether it breaks or not.

The test cases can also be grouped into categories of a high standard:
1. Functional test cases (testing the functionality of the pen, e.g. whether it writes in a smooth flow).
2. User-interface test cases (look and feel of the pen, e.g. cap facility, easy to hold, attractive appearance).
3. Stress testing (temperature dependency, e.g. does it still work when the temperature is high or low).
4. Performance testing (e.g. how long it keeps writing).
If we categorise every aspect this way, we can write many more test cases.

Test Cases For Computer Keyboard

Write test cases for a computer keyboard.

User-interface cases:
Check the colour of the keyboard.
Check the colour of the letters on the keys.
Check the height and width of the keyboard.
The keyboard keys should follow the ANSI standard layout.
The keyboard should be platform independent.
Check that the keys map correctly to their commands.
Check the connector type (serial or parallel port).
Check whether the keyboard is detected by the BIOS setup.
Check that the keyboard can be plugged into PS/2 ports made by different manufacturers.
Check whether the keyboard works without any driver installed (assuming a normal keyboard).
Check the total number of keys on the keyboard.
Check the length of the keyboard cable.

Functional cases:
Check whether the light glows when Num Lock is on.
Check whether the light glows when Caps Lock is on.
Check whether the keyboard can be connected to the CPU using its cable.
Check the functionality of all the numeric keys (when pressed, that particular number should be displayed on the monitor).
Check the functionality of all the character keys (when pressed, that particular character should be displayed on the monitor).
Check the functionality of the Caps Lock, Shift, Ctrl, and Alt keys.
Check whether all the keys on the right side (the numeric keypad) are functioning well.
Check the functionality of the arrow keys.
Check the functionality of Ctrl + (any key) combinations.
Check the functionality of Shift + (any key) combinations.
Check the Print Screen key.
Check the keyboard colour, i.e. white or black.
There are many more cases like this.

Test Cases for Mobile Phone

Write test cases for a cell phone.

1) Check whether the battery is inserted into the mobile properly.
2) Check switching the mobile on and off.
3) Insert the SIM into the phone and check that it is recognised.
4) Add a user with a name and phone number to the address book.
5) Check an incoming call.
6) Check an outgoing call.
7) Send and receive messages on the mobile.
8) Check that all the number/character keys on the phone work correctly when pressed.
9) Remove the user from the phone book and check that the name and phone number are removed properly.
10) Check whether the network connection is working fine.
11) If it is GPRS enabled, check the connectivity.
12) Check whether menu items are displayed properly.
13) Click on all the settings in the mobile and verify whether they function properly.
14) Delete the user from the mobile and verify whether the user is deleted from the phonebook.
15) Verify whether it is user friendly by navigating the menu.

Test Cases For White Paper

Write test cases for white paper (e.g. A4 size).

1) Check the size of the page.
2) Check the quality of the paper by writing with different pens and pencils.
3) Check the use of whitener on the paper.
4) Check erasing.
5) Check the colour of the page.
6) Check the paper quality.
7) Check the paper thickness.
8) Check whether the A4 sheet is empty or not (written on or not).
9) Check whether it is folded or not.
10) Check the company (brand) of the A4 sheet.
Some pseudo code for the first four checks:

1) Check the size of the page:
   assert(page.width() == A4.WIDTH)
   assert(page.height() == A4.HEIGHT)
   assert(page.colour() == Color.WHITE)
   assert(page.density() == PAPER.GSM(80))
2) Check the quality of the paper using different pens and pencils:
   [try to write with a pen]
   assert(page.untorn())
   assert(page.contains(writing_from_pen))
3) Check the use of whitener on the paper:
   page.addWhitener(W)
   assert(page.whitened())
4) Check erasing:
   page.erase(writing_from_pen)
   assert(page.is_blank())
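
The same pseudo code could be turned into an xUnit-style test. In the sketch below, the Page class is a hypothetical stand-in (with assumed A4 dimensions of 210 x 297 mm and 80 gsm) so that the checks compile and run as JUnit tests:

import org.junit.Test;
import static org.junit.Assert.*;

public class A4PageTest {

    // Hypothetical stand-in for the physical page, used only so the checks run.
    static class Page {
        int width = 210, height = 297, density = 80;   // assumed A4 in mm, 80 gsm
        String colour = "white";
        boolean written = false;
        void write() { written = true; }
        void erase() { written = false; }
        boolean isBlank() { return !written; }
    }

    @Test
    public void pageMatchesA4SizeColourAndDensity() {
        Page page = new Page();
        assertEquals(210, page.width);
        assertEquals(297, page.height);
        assertEquals("white", page.colour);
        assertEquals(80, page.density);
    }

    @Test
    public void erasingLeavesThePageBlank() {
        Page page = new Page();
        page.write();
        page.erase();
        assertTrue(page.isBlank());
    }
}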

Test Cases For defined "data type"

Write test cases to test a newly defined "data type" designed as required for a client.
(Asked in an interview at the company "Oracle".)

In general, for a newly defined data type in any language or package:

1. First, check whether the newly defined data type is available in that particular environment.

2. Check whether the newly defined data type is supported by that particular environment.

3. Check whether the newly defined data type accepts the specified data.

4. Check the lower bound of the data type (each data type has some lower limit), and also check the lower limit - 1.

5. Check the upper bound of the data type (each data type has some upper limit too), and also check the upper limit + 1.

6. Also check some intermediate value. All of this is valid for scalar data types.
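
A minimal sketch of the boundary checks in points 4 to 6, assuming a hypothetical data type that accepts integers between a MIN of 1 and a MAX of 100 (not any real Oracle type):

import org.junit.Test;
import static org.junit.Assert.*;

// Hypothetical bounded type used only to illustrate boundary-value test cases.
public class BoundedValueTest {
    static final int MIN = 1, MAX = 100;

    // Stand-in for the new data type's validation rule.
    static boolean accepts(int value) {
        return value >= MIN && value <= MAX;
    }

    @Test public void lowerBoundAccepted()        { assertTrue(accepts(MIN)); }
    @Test public void belowLowerBoundRejected()   { assertFalse(accepts(MIN - 1)); }
    @Test public void upperBoundAccepted()        { assertTrue(accepts(MAX)); }
    @Test public void aboveUpperBoundRejected()   { assertFalse(accepts(MAX + 1)); }
    @Test public void intermediateValueAccepted() { assertTrue(accepts((MIN + MAX) / 2)); }
}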

TYPES OF TEST CASES

TYPES OF TEST CASES

Test cases are broadly divided into two types.

1. G.U.I Test Cases.
2. Functional test cases.

Functional test cases are further divided into two types.

1. Positive Test Cases.
2. Negative Test Cases.

GUIDELINES TO PREPARE GUI TEST CASES:

1. Check for the availability of all the objects.
2. Check the alignment of the objects, if the customer has specified alignment requirements.
3. Check for the consistency of all the objects.
4. Check for spelling and grammar.
Apart from these guidelines, anything we test without performing any action falls under GUI test cases.

GUIDELINES FOR DEVELOPING POSITIVE TEST CASES.

1. A test engineer must have positive mind setup.
2. A test engineer should consider the positive flow of the application.
3. A test engineer should use the valid input from the point of functionality.

GUIDELINES FOR DEVELOPING THE NEGATIVE TEST CASES:

1. A test engineer must have a negative mindset.
2. He should consider the negative flow of the application.
3. He should use at least one invalid input for a set of data.
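
As an illustration of positive versus negative test cases, the sketch below tests a hypothetical age-validation function with one valid input (the positive flow) and one invalid input (the negative flow):

import org.junit.Test;
import static org.junit.Assert.*;

// Hypothetical validation function used only to illustrate positive vs. negative cases.
public class AgeValidationTest {

    static boolean isValidAge(String input) {
        try {
            int age = Integer.parseInt(input);
            return age >= 0 && age <= 120;   // assumed valid range
        } catch (NumberFormatException e) {
            return false;                    // non-numeric input is rejected
        }
    }

    @Test
    public void positiveCase_validAgeIsAccepted() {
        assertTrue(isValidAge("30"));        // valid input, positive flow
    }

    @Test
    public void negativeCase_nonNumericInputIsRejected() {
        assertFalse(isValidAge("abc"));      // invalid input, negative flow
    }
}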

Traceability Matrix

Requirement No (In RD) | Requirement | Test Case No

What this Traceability Matrix provides you is the coverage of testing. Keep filling in the Traceability Matrix as you complete writing test cases for each requirement.
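
A simple way to keep such a matrix alongside the test code is a map from requirement number to covering test cases; the sketch below uses made-up requirement and test case IDs and flags any requirement with no coverage:

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of a requirement-to-test-case traceability matrix with made-up IDs.
public class TraceabilityMatrixSketch {
    public static void main(String[] args) {
        Map<String, List<String>> matrix = new LinkedHashMap<>();
        matrix.put("REQ-001", List.of("TC-01", "TC-02"));
        matrix.put("REQ-002", List.of("TC-03"));
        matrix.put("REQ-003", List.of());                 // not yet covered

        for (Map.Entry<String, List<String>> row : matrix.entrySet()) {
            String coverage = row.getValue().isEmpty()
                    ? "NOT COVERED"
                    : String.join(", ", row.getValue());
            System.out.println(row.getKey() + " -> " + coverage);
        }
    }
}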