Friday, January 16, 2009

Sample Testing Interview Questions

Testing Interview Questions

1. Difference between system testing and integration testing?

System testing is high-level testing; integration testing is lower-level testing. Integration testing is completed first, not system testing: upon completion of integration testing, system testing begins, not vice versa. For integration testing, test cases are developed with the express purpose of exercising the interfaces between the components. For system testing, on the other hand, the complete system is configured in a controlled environment, and test cases are developed to simulate real-life scenarios in a simulated real-life test environment. The purpose of integration testing is to ensure that distinct components of the application still work in accordance with customer requirements. The purpose of system testing, on the other hand, is to validate an application's accuracy and completeness in performing the functions as designed, and to test all functions of the system that are required in real life.
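
To make the contrast concrete, here is a minimal sketch using Python's built-in unittest module (the CartService and PricingService components are invented purely for illustration, not taken from any real project). The integration test exercises the interface between the two components; the system test simulates a real-life customer scenario against the assembled pieces:

import unittest

# Hypothetical components, invented for illustration only.
class PricingService:
    def price_of(self, sku):
        return {"APPLE": 0.50, "BREAD": 2.25}[sku]

class CartService:
    def __init__(self, pricing):
        self.pricing = pricing          # the interface under test
        self.items = []

    def add(self, sku):
        self.items.append(sku)

    def total(self):
        return sum(self.pricing.price_of(sku) for sku in self.items)

class IntegrationTest(unittest.TestCase):
    # Integration level: exercise the CartService -> PricingService interface.
    def test_cart_queries_pricing_service(self):
        cart = CartService(PricingService())
        cart.add("APPLE")
        cart.add("BREAD")
        self.assertEqual(cart.total(), 2.75)

class SystemTest(unittest.TestCase):
    # System level: simulate a real-life scenario end to end (in a real
    # project this would drive the fully configured, deployed system).
    def test_customer_buys_weekly_groceries(self):
        cart = CartService(PricingService())
        for sku in ["APPLE", "APPLE", "BREAD"]:
            cart.add(sku)
        self.assertEqual(cart.total(), 3.25)

if __name__ == "__main__":
    unittest.main()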

2. How do you conduct peer reviews?

The peer review, sometimes called a PDR, is a formal meeting, more formalized than a walk-through, and typically consists of 3-10 people, including a test lead, a task lead (the author of whatever is being reviewed), and a facilitator (to take notes). The subject of the PDR is typically a code block, release, feature, or document, e.g. a requirements document or test plan. The purpose of the PDR is to find problems and see what is missing, not to fix anything. The result of the meeting should be documented in a written report. Attendees should prepare by reading through the documents before the meeting starts; most problems are found during this preparation. Preparation for PDRs is difficult, but it is one of the most cost-effective methods of ensuring quality, since bug prevention is more cost-effective than bug detection.

3. What is disaster recovery testing?

Disaster recovery testing is testing how well the system recovers from disasters, crashes, hardware failures, or other catastrophic problems.

4. What is verification?

Verification ensures the product is designed to deliver all functionality to the customer. It typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications; this can be done with checklists, issues lists, walkthroughs, and inspection meetings.

Difference between verification and validation.

Verification takes place before validation, not vice versa. Verification evaluates documents, plans, code, requirements, and specifications; validation, on the other hand, evaluates the product itself. The inputs of verification are checklists, issues lists, walkthroughs, inspection meetings, and reviews; the input of validation is the actual testing of an actual product. The output of verification is a nearly perfect set of documents, plans, specifications, and requirements; the output of validation is a nearly perfect actual product.

What is V&V?

V&V is an acronym for verification and validation.

5. Difference between User documentation and User Manual.

User documentation: a document that describes the way a software product or system should be used to obtain the desired results.

User manual: a document that presents the information necessary to employ software or a system to obtain the desired results. Typically, what is described are system and component capabilities, limitations, options, permitted inputs, expected outputs, error messages, and special instructions.


When a distinction is made between those who operate a computer system and those who use it for its intended purpose, separate user documentation and user manuals are created: operators get user documentation, and users get user manuals.

6. What is integration testing?

Upon completion of unit testing, integration testing begins. Integration testing is black-box testing. The purpose of integration testing is to ensure that distinct components of the application still work in accordance with customer requirements. Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team. Integration testing is considered complete when actual results and expected results are either in line or the differences are explainable/acceptable based on client input.

What is incremental integration testing?

Incremental integration testing is continuous testing of an application as new functionality is added. This may require that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed. This type of testing may be performed by programmers, software engineers, or test engineers.
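
As an illustration of the test-driver idea (a minimal sketch; ReportModule and the stubbed data source are hypothetical names invented here), a stub lets one module be integrated and tested before the real data layer is finished:

import unittest

# Hypothetical module being integrated incrementally.
class ReportModule:
    def __init__(self, data_source):
        self.data_source = data_source

    def summary(self):
        rows = self.data_source.fetch_rows()
        return {"count": len(rows), "total": sum(r["amount"] for r in rows)}

# Stub standing in for a data layer that is not finished yet.
class StubDataSource:
    def fetch_rows(self):
        return [{"amount": 10.0}, {"amount": 5.5}]

class IncrementalIntegrationTest(unittest.TestCase):
    def test_report_module_against_stub(self):
        report = ReportModule(StubDataSource())
        self.assertEqual(report.summary(), {"count": 2, "total": 15.5})

if __name__ == "__main__":
    unittest.main()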

7. What is usability testing?

Usability testing is testing for 'user-friendliness'. Clearly this is subjective and depends on the targeted end user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and developers are usually not appropriate as usability testers.

8. What is regression testing?

The objective of regression testing is to ensure the software remains intact. A baseline set of data and scripts is maintained and executed to verify that changes introduced during the release have not "undone" any previous code. Expected results from the baseline are compared to the results of the software under test. All discrepancies are highlighted and accounted for before testing proceeds to the next level.

Put more simply, regression testing is running all test cases over again after any software change.

Note: because of the high probability that a bad outcome will result from any attempt to fix a bug (or to change a program in other ways), regression testing is necessary. It is not unusual for bugs to be introduced even when programmers only intend to change the documentation (the comments) in a program! Thus, regression testing should occur whenever any sort of change is made to the software.
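
A minimal sketch of the baseline idea, assuming a hypothetical function under test and invented baseline data (real suites maintain the baseline in data files and scripts): expected results from the baseline are compared to current results, and discrepancies are highlighted.

# Baseline inputs and expected results captured from a known-good release
# (invented here for illustration).
BASELINE = [
    {"a": 1, "b": 2, "expected": 3},
    {"a": -4, "b": 4, "expected": 0},
]

def run_software_under_test(case):
    # Stand-in for executing the real software under test on one input.
    return case["a"] + case["b"]

def run_regression(baseline):
    # Return every case whose actual result differs from the baseline.
    discrepancies = []
    for case in baseline:
        actual = run_software_under_test(case)
        if actual != case["expected"]:
            discrepancies.append((case, actual))
    return discrepancies

if __name__ == "__main__":
    for case, actual in run_regression(BASELINE):
        print(f"Discrepancy: {case} produced {actual}")
    print("Every discrepancy must be accounted for before testing proceeds.")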

9. What is unit testing?

Unit testing is the first level of dynamic testing and is first the responsibility of the developers and then that of the test engineers. Unit testing is considered complete when the expected test results are met or the differences are explainable/acceptable. In short: testing a piece, block, or unit of code, performed by the developer.
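
For example, a developer-level unit test exercises a single function in isolation. This is a minimal sketch; the is_leap_year function is invented for illustration:

import unittest

def is_leap_year(year):
    # Unit under test: the Gregorian leap-year rule.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTest(unittest.TestCase):
    def test_divisible_by_four(self):
        self.assertTrue(is_leap_year(2008))

    def test_century_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_four_hundred_is_leap(self):
        self.assertTrue(is_leap_year(2000))

if __name__ == "__main__":
    unittest.main()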

10. What is software testing?

Software testing is a process that identifies the correctness, completeness, and quality of software. Strictly speaking, though, testing cannot establish the correctness of software: it can find defects, but it cannot prove there are no defects. In short: testing the application with the intent of finding defects.

11. What is parallel/audit testing?

Parallel/audit testing is testing where the user reconciles the output of the new system to the output of the current system, to verify that the new system performs the operations correctly. In short: testing the newly developed system and comparing its results with those of the existing system.
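
A minimal sketch of the reconciliation step (both "systems" here are hypothetical stand-ins): the same inputs are fed to the current system and to the new system, and mismatches are reported so they can be explained or accepted:

# Hypothetical stand-ins: 'legacy_payroll' plays the current system,
# 'new_payroll' the newly developed one (which adds an overtime rule).
def legacy_payroll(hours, rate):
    return round(hours * rate, 2)

def new_payroll(hours, rate):
    overtime = max(hours - 40, 0)
    return round(min(hours, 40) * rate + overtime * rate * 1.5, 2)

def reconcile(inputs):
    # Run both systems on identical inputs and collect every mismatch.
    mismatches = []
    for hours, rate in inputs:
        old, new = legacy_payroll(hours, rate), new_payroll(hours, rate)
        if old != new:
            mismatches.append((hours, rate, old, new))
    return mismatches

if __name__ == "__main__":
    # The 45-hour case flags the intentional overtime change; differences
    # like this must be reconciled against the current system's output.
    for hours, rate, old, new in reconcile([(38, 20.0), (45, 20.0)]):
        print(f"hours={hours} rate={rate}: current={old}, new={new}")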

12. How does a client/server environment affect testing?

Client/server applications can be quite complex, due to the multiple dependencies among clients, data communications, hardware, and servers; thus, testing requirements can be extensive. When time is limited (as it usually is), the focus should be on integration and system testing. Additionally, load/stress/performance testing may be useful in determining client/server application limitations and capabilities. There are commercial tools to assist with such testing.
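
As a minimal load-test sketch (the URL is a placeholder for a test server, and real projects would typically use a dedicated load-testing tool), firing concurrent requests and summarizing latency gives a rough picture of the application's limits:

import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://localhost:8080/"  # placeholder; point at a test server, never production

def timed_request(_):
    # Issue one request and measure its wall-clock latency.
    start = time.perf_counter()
    with urlopen(URL, timeout=10) as resp:
        resp.read()
        status = resp.status
    return status, time.perf_counter() - start

if __name__ == "__main__":
    # 50 requests across 10 concurrent clients, then a latency summary.
    with ThreadPoolExecutor(max_workers=10) as pool:
        results = list(pool.map(timed_request, range(50)))
    latencies = sorted(t for _, t in results)
    ok = sum(1 for s, _ in results if s == 200)
    print(f"succeeded: {ok}/{len(results)}")
    print(f"median latency: {latencies[len(latencies)//2]:.3f}s, worst: {latencies[-1]:.3f}s")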

13. How can World Wide Web sites be tested?

Web sites are essentially client/server applications, with web servers and 'browser' clients. Consideration should be given to the interactions between HTML pages, TCP/IP communications, Internet connections, firewalls, applications that run in web pages (such as applets, JavaScript, and plug-in applications), and applications that run on the server side (such as CGI scripts, database interfaces, logging applications, dynamic page generators, ASP, etc.). Additionally, there are a wide variety of servers and browsers, various versions of each, small but sometimes significant differences between them, variations in connection speeds, rapidly changing technologies, and multiple standards and protocols. The end result is that testing for web sites can become a major ongoing effort. Other considerations might include:

- What are the expected loads on the server (e.g., number of hits per unit time), and what kind of performance is required under such loads (such as web server response time and database query response times)? What kinds of tools will be needed for performance testing (such as web load-testing tools, other tools already in house that can be adapted, web robot downloading tools, etc.)?

- Who is the target audience? What kind of browsers will they be using? What kind of connection speeds will they be using? Are they intra-organization (thus with likely high connection speeds and similar browsers) or Internet-wide (thus with a wide variety of connection speeds and browser types)?

- What kind of performance is expected on the client side (e.g., how fast should pages appear, how fast should animations, applets, etc. load and run)?

- Will downtime for server and content maintenance/upgrades be allowed? How much?

- What kinds of security (firewalls, encryption, passwords, etc.) will be required, and what is it expected to do? How can it be tested?

- How reliable are the site's Internet connections required to be? And how does that affect backup system or redundant connection requirements and testing?

- What processes will be required to manage updates to the web site's content, and what are the requirements for maintaining, tracking, and controlling page content, graphics, links, etc.?

- Which HTML specification will be adhered to? How strictly? What variations will be allowed for targeted browsers?

- Will there be any standards or requirements for page appearance and/or graphics throughout a site or parts of a site?

- How will internal and external links be validated and updated? How often? (A minimal link-checker sketch follows this list.)

- Can testing be done on the production system, or will a separate test system be required? How are browser caching, variations in browser option settings, dial-up connection variability, and real-world Internet 'traffic congestion' problems to be accounted for in testing?

- How extensive or customized are the server logging and reporting requirements; are they considered an integral part of the system and do they require testing?

- How are CGI programs, applets, JavaScript, ActiveX components, etc. to be maintained, tracked, controlled, and tested?

- Pages should be 3-5 screens max unless content is tightly focused on a single topic. If larger, provide internal links within the page.

- The page layouts and design elements should be consistent throughout a site, so that it's clear to the user that they're still within a site.

- Pages should be as browser-independent as possible, or pages should be provided or generated based on the browser-type.

- All pages should have links external to the page; there should be no dead-end pages.

- The page owner, revision date, and a link to a contact person or organization should be included on each page.
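
For the link-validation point above, here is a minimal link-checker sketch (the start URL is a placeholder, and a production crawler would also handle robots.txt, redirects, and retries):

from html.parser import HTMLParser
from urllib.error import HTTPError, URLError
from urllib.parse import urljoin
from urllib.request import Request, urlopen

START_URL = "http://localhost:8080/index.html"  # placeholder page to check

class LinkCollector(HTMLParser):
    # Gathers the href of every <a> tag on the page.
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def check_links(page_url):
    # Fetch the page, extract its links, and report the broken ones.
    with urlopen(page_url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    collector = LinkCollector()
    collector.feed(html)
    for href in collector.links:
        url = urljoin(page_url, href)  # resolve relative links
        if not url.startswith(("http://", "https://")):
            continue  # skip mailto:, javascript:, etc.
        try:
            with urlopen(Request(url, method="HEAD"), timeout=10) as r:
                if r.status >= 400:
                    print(f"BROKEN ({r.status}): {url}")
        except (HTTPError, URLError) as exc:
            print(f"BROKEN: {url} -> {exc}")

if __name__ == "__main__":
    check_links(START_URL)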
