CHAPTER-1
INTRODUCTION
In a software development project, faults can be introduced at any stage of development. For each phase, there are different techniques for detecting and eliminating the faults that originated in that phase. However, no technique is perfect, and it is expected that some of the errors of the earlier phases will finally manifest themselves in the code. This is particularly true because in the earlier phases most of the verification techniques are manual, since no executable code exists. Testing is the phase of the software development process in which the faults remaining from all the previous phases must be detected. Hence, testing plays a very critical role in quality assurance and in ensuring the reliability of software.
During testing, the program to be tested is executed with a set of test data, and the output of the program for the test cases is evaluated to determine whether the program has performed as expected. Because of this approach, dynamic testing can only ascertain the presence of errors in the program; the exact nature of the errors is not usually determined by testing. Testing forms the first step in locating the errors in a program.
Software testing is the process of testing the software product. Effective software testing contributes to the delivery of higher quality software products, more satisfied users, lower maintenance costs, and more accurate and reliable results. Ineffective testing leads to the opposite: low quality products, unhappy users, increased maintenance costs, and unreliable and inaccurate results. Hence, software testing is a necessary and important activity of the software development process. It is also very expensive, consuming one-third to one-half of the cost of a typical development project. The importance of software testing with respect to software quality cannot be overemphasized. To quote Deutsch:
“The development of software systems involves a series of production activities where opportunities for injection of human fallibilities are enormous, errors may begin to occur at the very inception of the process where the objectives …………..may be erroneously or imperfectly specified, as well as [in] later design and development stages ……… Because of human inability to perform and communicate with perfection, software development is accompanied by a quality assurance activity”
A realistic definition of testing by Meyer [1] is:
“Testing is the process to detect the defects and minimize the risk associated with residual defects”
Software testing has been a critical element of software quality assurance and represents the ultimate review of specification, design, and code generation. Testing a large system is a complex activity, and like any complex activity it has to be broken into smaller activities. For this reason, incremental testing is generally performed on a project, in which components and subsystems of the system are tested separately before being integrated to form the system for system testing.
Testing presents an interesting anomaly for the software engineer. During earlier software engineering activities, the engineer attempts to build software from an abstract concept into a tangible product. In testing, the engineer creates a series of test cases that are intended to “demolish” the software that has been built. In fact, testing is the one step in the software process that could be viewed (psychologically, at least) as destructive rather than constructive.
Software engineers are by nature constructive people. Testing requires that the developer discard preconceived notions of the “correctness” of the software just developed and overcome the conflict of interest that occurs when errors are uncovered. Beizer [2] describes this situation effectively when he states:
There is a myth that if we were really good at programming, there would be no bugs to catch. If only we could really concentrate, if only everyone used structured programming, top-down design, decision tables, if programs were written in SQUISH, if we had the right silver bullets, then there would be no bugs. So goes the myth. There are bugs, the myth says, because we are bad at what we do; and if we are bad at it, we should feel guilty about it. Therefore, testing and test case design is an admission of failure, which instills a goodly dose of guilt. And the tedium of testing is just punishment for our errors. Punishment for what? For being human? Guilt for what? For failing to achieve inhuman perfection? For not distinguishing between what another programmer thinks and what he says? For failing to be telepathic? For not solving human communications problems that have been kicked around … for forty centuries?
Should testing instil guilt? Is testing really destructive? The answer to these questions is “No!”
Error, Fault, and Failure
Sometimes error, fault, and failure are used interchangeably, and sometimes they refer to different concepts. Let us start by defining these concepts clearly. We follow the IEEE definitions [3] for these terms. The term error has been used in two different ways. It refers to the discrepancy between a computed, observed, or measured value and the true, specified, or theoretically correct value. That is, error refers to the difference between the actual output of the software and the correct output. Error has also been used to refer to a human action that results in software containing a defect or fault. This definition is quite general and encompasses all the phases.
A fault is a condition that causes a system to fail in performing its required function. A fault is the basic reason for software malfunction and is synonymous with the commonly used term bug. It should be noted that the only faults software has are “design faults”; there is no wear and tear in software.
A failure is the inability of a system or component to perform a required function according to its specification. A software failure occurs if the behaviour of the software differs from the specified behaviour. Failure may be caused by functional or performance reasons. A failure is produced only when there is a fault in the system. However, the presence of a fault does not guarantee a failure. In other words, faults have the potential to cause failures, and their presence is a necessary but not a sufficient condition for failure to occur.
Test Oracles
To test any program, we need a description of its expected behaviour and a method of determining whether the observed behaviour conforms to the expected behaviour. For this we need a test oracle.
Figure 1.1: Testing and Test Oracle
A test oracle is a mechanism, different from the program itself, that can be used to check the correctness of the program’s output for the test cases. Conceptually, we can consider testing as a process in which the test cases are given to both the test oracle and the program under test. The two outputs are then compared to determine whether the program behaved correctly for the test cases. This is shown in Figure 1.1.
Test oracles are a necessity for testing. Ideally, we would like an automated oracle that always gives a correct answer. However, the oracles are often human beings, who mostly compute by hand what the output of the program should be. As it is often extremely difficult to determine whether the behaviour conforms to the expected behaviour, our “human oracle” may make mistakes. As a result, when there is a discrepancy between the results of the program and the oracle, we have to verify the result produced by the oracle before declaring that there is a fault in the program. This is one of the reasons testing is so cumbersome and expensive. At this point it is important to discuss the difference between testing and debugging.
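The oracle arrangement of Figure 1.1 can be sketched in a few lines of code. This is a minimal illustration, not a prescribed implementation; the function names and the squaring example are purely hypothetical.

```python
# Sketch of the test oracle idea: the program under test and the oracle
# both receive the same test cases; a discrepancy flags a potential fault
# (which may lie in the program OR in the oracle itself).

def program_under_test(x):
    # hypothetical implementation being tested
    return x * x

def oracle(x):
    # independent means of computing the expected output
    # (in practice often a human, a reference implementation, or a formula)
    return x ** 2

def run_tests(test_cases):
    discrepancies = []
    for tc in test_cases:
        actual = program_under_test(tc)
        expected = oracle(tc)
        if actual != expected:
            discrepancies.append((tc, actual, expected))
    return discrepancies

print(run_tests([0, 1, -3, 7]))  # [] means no discrepancy was observed
```

Note that an empty result does not prove correctness; it only means no discrepancy was observed for these test cases.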
Testing versus Debugging
Testing and debugging are often lumped under the same heading, and it is no wonder that their roles are often confused: for some, the two words are synonymous; for others, the phrase “test and debug” is treated as a single word [4]. The purpose of testing is to show that a program has bugs. The purpose of debugging is to find the faults or misconceptions that led to the program’s failure and to design and implement the program changes that correct them. Debugging usually follows testing, but they differ as to goals, methods, and, most important, psychology:
1. Testing starts with known conditions, uses predefined procedures, and has predictable outcomes; only whether or not the program passes the test is unpredictable. Debugging starts from possibly unknown initial conditions, and the end cannot be predicted, except statistically.
2. Testing can and should be planned, designed, and scheduled. The procedures for, and duration of, debugging cannot be so constrained.
3. Testing is a demonstration of error or apparent correctness. Debugging is a deductive process.
4. Testing proves a programmer’s failure. Debugging is the programmer’s vindication.
5. Testing, as executed, should strive to be predictable, dull, constrained, rigid, and inhuman. Debugging demands intuitive leaps, conjectures, experimentation, and freedom.
6. Much of testing can be done without design knowledge; debugging is impossible without detailed design knowledge.
7. Testing can often be done by an outsider. Debugging must be done by an insider.
8. Much of test execution and design can be automated. Automated debugging is still a dream.
Although there is a robust theory of testing that establishes theoretical limits to what testing can and cannot do, debugging has only recently been attacked by theorists, and so far only rudimentary results are available.
1.1 Need of Study
Testing consumes at least half of the labour expended to produce a working program. Few programmers like testing, and even fewer like test design, especially if test design and testing take longer than program design and coding. This attitude is understandable. Software is ephemeral: we cannot point to something physical. Deep down, most of us do not believe in software, at least not the way we believe in hardware. If software is insubstantial, how much more insubstantial does software testing seem? There is not even some debugged code to point to when we are through with test design. The effort put into testing seems wasted if the tests do not reveal bugs.
Software testing is an activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. The purpose of testing can be quality assurance, verification and validation, or reliability estimation. If testing is done on a requirements basis, it keeps the testing effort on track and measures the application against the business users’ needs.
The survey of literature reveals that only a few attempts have been made in the direction of effective software testing techniques and their comparison. There exists a gap in research in this area (details of the survey of literature are given in Chapter 2). Keeping in view this gap in research and its importance, the following problem has been undertaken for the purpose of this study:
“SOFTWARE TESTING TECHNIQUES: A COMPARATIVE STUDY”
1.2 Objectives
The present study has been undertaken to study and compare different software testing techniques.
The specific objectives of the study were as follows:
• Comparative study of different software testing techniques.
• Effectiveness of and support provided by different testing techniques.
1.3 Scope
To study the different software testing techniques and to meet the objectives, this study was confined to software companies. Developers of software companies who work as software testers participated in the study.
1.4 Research Methodology
To study the various software testing techniques, a questionnaire was prepared to meet the objectives of the study. The data was collected through this questionnaire from 18 different software companies. The analysis will be done by applying various statistical tools.
1.5 Chapter Plan
The ensuing chapters will throw light on the survey of literature, the research methodology, and the comparison of testing techniques.
Chapter 1. Introduction
Chapter 2. Survey of Literature
Chapter 3. Research methodology
Chapter 4. Comparison of Testing Techniques
Chapter 5. Conclusion and Summary
Bibliography
Appendix
CHAPTER-2
SURVEY OF LITERATURE
Testing Techniques
One of the main purposes of software testing is to identify defects in the software. A defect can be defined as a variance from a requirement or user expectation. Based on this simple definition, it is very easy to categorize defects [4]. For example:
• If system is not functioning properly, it’s a functional defect.
• If system is not performing well, it’s a performance defect.
• If system is not usable, it’s a usability defect.
• If system is not secure, it’s a security defect.
• And so on.
Identifying these different defects requires different techniques and different types of test cases. Testing is divided into different types to reflect what kind of defects can be uncovered by those activities. This division also helps management in managing these activities effectively.
It is important to understand this categorization and the different types of testing. There are many ways in which software testing can be categorized. Some of them are described as follows:
2.1.1 Categorization of testing based on the Knowledge of System
2.1.1(a) Black Box Testing
This is probably what most of us practice, and it is the most widely used type of testing. It is very close to the customer experience. In this type of testing, the system is treated as a closed system, and the test engineer does not assume anything about how the system was created. Black box testing is shown in figure 2.1.
A test engineer performing black box testing must make sure not to make any assumptions about the system based on knowledge of its internals [5]. Assumptions created in our minds because of system knowledge could harm the testing effort and increase the chances of missing critical test cases [6].
The purpose of black box testing is to:
• Make sure that the system works in accordance with the system requirements.
• Make sure that the system meets user expectations.
Figure 2.1: Black box testing
In order to make sure that the purpose of black box testing is met, various techniques can be used for test data selection, such as:
• Boundary Value Analysis
• Equivalence Partitioning
• Cause-Effect Graphing
Boundary Value Analysis
Boundary value analysis is the technique of making sure that the behaviour of the system is predictable for the input and output boundary conditions. Boundary conditions are very important for testing because defects can be introduced at the boundaries very easily [7]. For example, suppose someone wants to write code for the following condition:
“Input should be greater than equal to 10 and less than 50”
Probably he will write something like:

if (input >= 10 AND input < 50) then
    do something
else
    do something else
So, according to this, input values from 10 to 49 are valid. But if a mistake is made in specifying the conditions, the following things can happen:
input > 10 AND input < 50 → input value 10 is invalid now
input <= 10 AND input < 50 → input value 9 is valid now
input >= 10 AND input <= 50 → input value 50 is valid now
input >= 10 AND input > 50 → input value 49 is invalid now
Because it is very easy to introduce defects at boundaries, boundary values are important. So, for the above example, at a minimum we should have the following test data for the boundaries:
9, 10, 11 and 48, 49, 50
In general: lower_boundary-1, lower_boundary, lower_boundary+1 and upper_boundary-1, upper_boundary, upper_boundary+1
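The boundary analysis above can be sketched as a small test loop. This is an illustrative sketch only; the function name `is_valid` is an assumption standing in for the code under test.

```python
# Sketch of boundary value analysis for the condition
# "input should be >= 10 and < 50".

def is_valid(value):
    # the implementation under test (here written correctly)
    return 10 <= value < 50

# Test data at and around each boundary, as listed above:
boundary_cases = {9: False, 10: True, 11: True,
                  48: True, 49: True, 50: False}

for value, expected in boundary_cases.items():
    assert is_valid(value) == expected, f"boundary defect at {value}"
print("all boundary cases pass")
```

Any of the four off-by-one mistakes listed above (for example, writing `input > 10` instead of `input >= 10`) would make at least one of these six assertions fail, which is exactly why the boundary values are the highest-yield test data.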
Equivalence Partitioning
Equivalence partitioning is a software testing technique for minimizing the number of permutations and combinations of input data. In equivalence partitioning, data is selected in such a way that it gives as many different outputs as possible with a minimal set of data. In figure 2.2, each equivalence partition is shown as an ellipse. Both valid and invalid inputs form partitions [8].
Figure 2.2: Equivalence Partitioning
Now, data from these classes can be representative of all the input values that our software expects. For equivalence classes, it can be assumed that the software will behave in exactly the same way for any data value from the same partition [9].
So essentially, there are two steps we need to follow if we want to use equivalence partitioning in our projects: identifying the equivalence classes or partitions, and selecting representative test data from each partition.
For example, consider a very simple function for awarding grades to students. This program follows this guideline to award grades:
Marks 00-39-----------Grade D
Marks 40-59-----------Grade C
Marks 60-70-----------Grade B
Marks 71-100----------Grade A
Based on the equivalence partitioning techniques, partitions for this program could be as follows
Marks between 0 and 39 – Valid Input
Marks between 40 and 59 – Valid Input
Marks between 60 and 70 – Valid Input
Marks between 71 and 100 – Valid Input
Marks less than 0 – Invalid Input
Marks more than 100 – Invalid Input
Non-numeric input – Invalid Input
From the example above, it is clear that the infinite possible test data (any value between 0 and 100, infinitely many values above 100 or below 0, and non-numeric input) has been divided into seven distinct classes. Even if we take only one data value from each partition, our coverage will be good.
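The grading example can be sketched in code, with one representative value drawn from each of the seven partitions. The function body is an assumed implementation of the grading guideline above, not the study's own code.

```python
# Sketch of equivalence partitioning for the grading example.

def grade(marks):
    if not isinstance(marks, (int, float)):
        return "Invalid"      # partition 7: non-numeric input
    if marks < 0 or marks > 100:
        return "Invalid"      # partitions 5 and 6: out-of-range input
    if marks <= 39:
        return "D"            # partition 1: 0-39
    if marks <= 59:
        return "C"            # partition 2: 40-59
    if marks <= 70:
        return "B"            # partition 3: 60-70
    return "A"                # partition 4: 71-100

# One representative value per partition:
representatives = {20: "D", 50: "C", 65: "B", 85: "A",
                   -5: "Invalid", 150: "Invalid", "abc": "Invalid"}
for value, expected in representatives.items():
    assert grade(value) == expected
print("one value per partition covers all seven classes")
```

Seven test values stand in for an unbounded input space; boundary value analysis would then add the values at the partition edges (39/40, 59/60, 70/71, 0, 100).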
Cause-Effect Graphing:
One weakness of the equivalence class partitioning and boundary value methods is that they consider each input separately; both concentrate on the conditions and data of one input. They do not consider combinations of input circumstances that may form interesting situations which should be tested. One way to exercise combinations of different input conditions is to consider all valid combinations of the equivalence classes of input conditions. This simple approach results in an unusually large number of test cases, many of which will not be useful for revealing any new errors. For example, if there are n different input conditions, such that any combination of the input conditions is valid, we will have 2^n test cases [10].
Cause-effect graphing [8] is a technique that aids in selecting, in a systematic way, a high-yield set of test cases. It has the beneficial side effect of pointing out incompleteness and ambiguities in the specifications. The following process is used to derive test cases:
1. The causes and effects in the specification are identified. A cause is a distinct input condition or an equivalence class of input conditions. An effect is an output condition or a system transformation. For instance, if a transaction to a program causes a master file to be updated, the alteration to the master file is a system transformation, while a confirmation message would be an output condition. Causes and effects are identified by reading the specification word by word and underlining words or phrases that describe causes and effects. Each cause and effect is assigned a unique number.
2. The semantic content of the specification is analysed and transformed into a Boolean graph linking the causes and effects. This is the cause-effect graph.
3. The graph is annotated with constraints describing combinations of causes and/or effects that are impossible because of syntactic or environmental constraints.
4. By methodically tracing state conditions in the graph, the graph is converted into a limited-entry decision table. Each column in the table represents a test case.
5. The columns in the decision table are converted into test cases. The basic notation for the graph is shown in figure 2.3.
Think of each node as having the value 0 or 1, where 0 represents the “absent” state and 1 represents the “present” state. The identity function states that if c1 is 1, e1 is 1; else e1 is 0. The NOT function states that if c1 is 1, e1 is 0; else e1 is 1. The OR function states that if c1 or c2 or c3 is 1, e1 is 1; else e1 is 0. The AND function states that if both c1 and c2 are 1, e1 is 1; else e1 is 0. The AND and OR functions are allowed to have any number of inputs.
Myers [10] explained this effectively with the following example: “The character in column 1 must be an A or a B. The character in column 2 must be a digit. In this situation, the file update is made. If the character in column 1 is incorrect, message x is issued. If the character in column 2 is not a digit, message y is issued”.
Figure 2.3: Basic cause effect graph symbols
The causes are
c1: character in column 1 is A
c2: character in column 1 is B
c3: character in column 2 is a digit
And the effects are
e1: update made
e2: message x is issued
e3: message y is issued
The cause effect-graph is shown in figure 2.4
Although the graph in figure 2.4 represents the specification, it does contain an impossible combination of causes: it is impossible for both causes c1 and c2 to be set to 1 simultaneously. In most programs, certain combinations of causes are impossible because of syntactic or environmental considerations.
Figure 2.4: Sample cause effect graph
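The Boolean relationships of the Myers example can be sketched directly as code. This is an illustrative rendering of the graph's logic under the assumption that the effects are computed exactly as Figure 2.4 shows; the function name and two-character record format are assumptions.

```python
# Sketch of the Myers cause-effect example as Boolean functions.
# c1: character in column 1 is 'A'; c2: column 1 is 'B'; c3: column 2 is a digit.

def effects(record):
    c1 = record[0] == 'A'
    c2 = record[0] == 'B'
    c3 = record[1].isdigit()
    e1 = (c1 or c2) and c3    # update made
    e2 = not (c1 or c2)       # message x issued (column 1 incorrect)
    e3 = not c3               # message y issued (column 2 not a digit)
    return e1, e2, e3

print(effects("A1"))  # (True, False, False): file update is made
print(effects("C1"))  # (False, True, False): message x is issued
print(effects("AX"))  # (False, False, True): message y is issued
```

Tracing such combinations through the graph is exactly what produces the columns of the limited-entry decision table in step 4 of the process above.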
2.1.1(b) White Box Testing
White box testing is very different in nature from black box testing. In black box testing, the focus of all activities is only on the functionality of the system, not on what is happening inside the system [11].
The purpose of white box testing is to make sure that:
• Functionality is proper.
In this approach, the test group must possess complete knowledge of the internal structure of the source code. For instance, if the first statement of the code is “if (x<=500)”, then we might want to test the program with a test value of 500. The knowledge of the internal structure of the source code can thus be used to find the number of test cases required to guarantee a given level of test coverage. It would not be advisable to release software containing untested statements, the consequences of which might be disastrous. This goal seems easy, but the simple objectives of white box testing are harder to achieve than they may appear at first glance. There are a number of methods associated with it.
• PATH TESTING
Path testing is the name given to a group of test techniques based on judiciously selecting a set of test paths through the program. If the set of paths is properly chosen, then we have achieved some measure of test thoroughness; for example, we can pick enough paths to ensure that every source statement is executed at least once. Path testing is most applicable to new software for module testing or unit testing. It requires complete knowledge of the program’s structure and is used by developers to unit test their own code. The effectiveness of path testing rapidly deteriorates as the size of the software under test increases [2].
This type of testing involves:
1) Generating a set of paths that will cover every branch in the program.
2) Finding a set of test cases that will execute every path in this set of program paths. The two steps are not necessarily executed in sequence. Path generation can be performed through static analysis of the control flow and can be automated. Typically, a program’s control flow is analysed using a graphical representation called a flow graph. A flow graph is a directed graph in which nodes are either entire statements or fragments of a statement, and edges represent the flow of control.
Graph Theory Concepts
The best-known form of white box testing is based on a construct known as a decision-to-decision path [13], which gives a graphical representation of the program’s control flow. The name refers to a sequence of statements that begins with the “outway” of a decision statement and ends with the “inway” of the next decision statement. There are no internal branches in such a sequence.
A program’s structure is conveniently analysed by means of a flow graph, also called a directed graph. A directed graph or digraph G = (V, E) consists of a set V of nodes or vertices, and a set E of directed edges or arcs, where a directed edge e = (T(e), H(e)) is an ordered pair of adjacent nodes, called the tail and head of e, respectively; we say that e leaves T(e) and enters H(e). If H(e) = T(e'), e and e' are called adjacent edges. For a node n in V, in-degree(n) is the number of arcs entering it and out-degree(n) is the number of arcs leaving it [12].
Cyclomatic Complexity:
This approach is used to find the number of independent paths through a program. It provides an upper bound for the number of tests that must be conducted to ensure that every statement is executed at least once and every condition is executed on both its true and false sides. An independent path is any path through the program that introduces at least one new set of processing statements or a new condition.
McCabe’s cyclomatic metric [13] of a graph G with n vertices, e edges, and p connected components is V(G) = e - n + 2p.
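The metric can be computed mechanically from an edge list. The sketch below is illustrative; the if-else flow graph used as input is a hypothetical example, not taken from the study.

```python
# Sketch: computing McCabe's V(G) = e - n + 2p for a flow graph
# given as a list of directed edges between numbered nodes.

def cyclomatic_complexity(edges, num_nodes, components=1):
    return len(edges) - num_nodes + 2 * components

# Flow graph of a single if-else construct:
#   node 1 (decision) -> 2 (then) and 3 (else); both rejoin at node 4
edges = [(1, 2), (1, 3), (2, 4), (3, 4)]
print(cyclomatic_complexity(edges, num_nodes=4))  # 2
```

Here V(G) = 4 - 4 + 2 = 2, matching the intuition that an if-else has two independent paths, so at most two test cases are needed to cover them.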
Graph Matrices
A graph matrix is a square matrix with one row and one column for every node in the graph. The size of the matrix (i.e., the number of rows and columns) is equal to the number of nodes in the flow graph [2]. In the graph matrix, there is a place to record every possible direct connection between any node and any other node.
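Building such a matrix is straightforward; the sketch below assumes nodes are numbered 0 to n-1 and uses the same hypothetical if-else flow graph as an input.

```python
# Sketch: building a graph matrix (adjacency matrix) from a flow graph.
# A 1 in row i, column j records a direct connection from node i to node j.

def graph_matrix(num_nodes, edges):
    m = [[0] * num_nodes for _ in range(num_nodes)]
    for tail, head in edges:
        m[tail][head] = 1
    return m

# If-else flow graph with nodes numbered 0..3:
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
for row in graph_matrix(4, edges):
    print(row)
```

Summing row i gives out-degree(i) and summing column j gives in-degree(j), which ties the graph matrix back to the digraph definitions given earlier.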
• Data Flow Testing
Data flow testing refers to forms of structural testing that focus on the points at which variables receive values and the points at which these values are used. It serves as a reality check on path testing, and some proponents see it as a form of path testing [5]. It uses the flow graph to explore the unreasonable things that can happen to data. A variable can be created, killed, and/or used. Uses occur in two distinct ways: in a calculation or as part of a control-flow predicate. The following symbols denote these possibilities [2]:
d: defined, created, initialised, etc.
k: killed, undefined, released
u: used for something
c: used in a calculation
p: used in a predicate
1. Defined: A variable is defined explicitly when it appears in a data declaration or implicitly when it appears on the left hand side of an assignment statement.
2. Killed or Undefined: A variable is killed or undefined when it is released or otherwise made unavailable.
3. Usage: A variable is used for computation (c) when it appears on the right-hand side of an assignment statement, as a pointer, as part of a pointer calculation, or when a file record is read or written, and so on.
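The d/k/u annotations can be illustrated with a small sketch. The code fragment in the comments and the `has_suspicious_pair` checker are illustrative assumptions; real data flow tools derive these traces from the flow graph rather than from hand-written lists.

```python
# Sketch: annotating one variable's life cycle with the d/k/c/p symbols,
# then scanning the trace for anomalous consecutive pairs.

# x = 10         -> d  (defined)
# if x > 5:      -> p  (used in a predicate)
#     y = x + 1  -> c  (used in a calculation)
# del x          -> k  (killed/released)
trace = ["d", "p", "c", "k"]

def has_suspicious_pair(trace):
    # e.g. use after kill (k followed by c or p), or
    # redefinition without an intervening use (d followed by d)
    suspicious = {("k", "c"), ("k", "p"), ("k", "u"), ("d", "d")}
    return any(pair in suspicious for pair in zip(trace, trace[1:]))

print(has_suspicious_pair(trace))            # False: d-p-c-k is a normal life cycle
print(has_suspicious_pair(["d", "k", "c"]))  # True: used in a calculation after kill
```

Scanning for such "unreasonable" pairs is the essence of data flow anomaly detection: the symbols give a compact vocabulary for what can legally happen to a variable between its definition and its release.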
• Mutation Testing
Mutation testing is a fault-based technique similar to fault seeding, except that mutations to program statements are made in order to determine properties of test cases. It is basically a fault simulation technique. In this technique, multiple copies of a program are made, and each copy is altered; an altered copy is called a mutant. Mutants are executed with test data to determine whether the test data is capable of detecting the change between the original program and the mutated program. A mutant that is detected by a test case is termed “killed”, and the goal of the mutation procedure is to find a set of test cases that are able to kill groups of mutant programs [16].
Mutants are produced by applying mutation operators. An operator is essentially a grammatical rule that changes a single expression into another expression.
When we mutate code, there needs to be a way of measuring the degree to which the code has been modified. For example, if the original expression is x+1 and the mutant of that expression is x+2, that is a smaller change to the original code than a mutant such as (c*22), where both the operand and the operator have been changed. We may have a ranking scheme, where a first-order mutant is a single change to an expression, a second-order mutant is a mutation to a first-order mutant, and so on. High-order mutants quickly become intractable, and thus in practice only low-order mutants are used.
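The x+1 example above can be sketched directly. This is a minimal illustration; in practice mutation tools generate mutants automatically by applying operators to the source, rather than writing them by hand.

```python
# Sketch of a first-order mutant: the original expression x + 1 is
# changed in a single place to x + 2.

def original(x):
    return x + 1

def mutant(x):
    return x + 2   # one change to one expression: a first-order mutant

def kills(test_value):
    # a test case "kills" the mutant if the two programs' outputs differ
    return original(test_value) != mutant(test_value)

print(kills(5))  # True: this test case detects (kills) the mutant
```

A test set that kills all (or most) generated mutants gives some confidence that it would also detect real faults of the same grammatical kind.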
2.1.1(c) Gray Box Testing:
The gray box testing technique is a combination of black box and white box testing. The intention of this testing is to find defects related to bad design or bad implementation of the system [17].
In gray box testing, the test engineer is equipped with knowledge of the system and designs test cases or test data based on that knowledge.
For example, in the absence of implementation details, a tester might test a web form with valid/invalid mail IDs and different fields of interest to make sure that the functionality is intact. But if he knows the implementation details, he knows that the system makes the following assumptions:
• Server will never get invalid mail ID.
• Server will never send mail to invalid ID.
• Server will never receive failure notification for this mail.
So, as part of gray box testing of the above example, he will include test data for clients where JavaScript is disabled. This could happen for any reason, and if it does, validation cannot happen at the client side. In this case, the assumptions made by the system are violated and:
• Server will get invalid mail ID.
• Server will send mail to invalid ID.
• Server will receive failure notification.
This illustrates the concept of gray box testing and how it can be used to create different test cases or data points based on the implementation details of the system.
2.1.2 Categorization of testing based on the Time
During testing, the major activities are centred on the examination and modification of source code. We proceed in levels, from individual modules to the entire software system. At one end, we attempt to test modules in all possible ways so as to detect errors. From there, we combine modules into aggregates and test their detailed structure and functions. At the end, we may ignore the internal structure of the software and concentrate on how it responds to the typical kinds of operations that will be requested by the user.
These phases of testing are usually referred to as unit testing, integration testing, system testing, and acceptance testing. As shown in figure 2.5, each type of testing tests a specific entity. Unit testing is done to test the source code.
Integration testing is done to test the design. System testing is done to test the SRS. And, finally, acceptance testing is done to test the client/user requirements.
2.1.2(a) Unit Testing:
Some people say that unit testing is done primarily by developers and that test engineers need not know about it. But this is not the case: unit testing is as important for test engineers as it is for developers [18].
The developers will probably write the unit test cases, but an understanding of the framework and of unit testing can certainly help everyone in designing the flow of unit test cases, generating test data, and making good reports. All these activities help in the long run, as product quality increases significantly because of the effort put into unit testing.
Figure 2.5: Categorization of testing based on time
So a test engineer should also learn about unit testing.
• Unit testing is the process of taking a module and running it in isolation from the rest of the software product, using prepared test cases and comparing the actual results with the results predicted by the specifications and design of the module. One purpose of testing is to find as many errors in the software as practical. There are many reasons for testing a module rather than the entire product [20]:
• The module is small enough that we can attempt to test it in some demonstratively exhaustive fashion.
• Confusing interaction of multiple errors in widely different parts of the software is eliminated.
There are problems associated with testing a module in isolation. How do we run a module without anything to called by it or, possibly, to output intermediate values obtained during execution? One approach is to construct an appropriate driver routine to call it and, simple stubs to be called by it, and to insert output statements in it.
Stubs serve to replace modules that are subordinate to the module to be tested. A stub or dummy subprogram uses the subordinate module’s interface, may do minimal data manipulation, prints verification of entry, and returns [21].
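As an illustration, consider a minimal sketch in Python (the module, driver and stub names here are invented for this example, not taken from any particular system). The stub stands in for a subordinate pricing module, and the driver feeds prepared test cases to the module under test:

```python
def lookup_base_price_stub(item_id):
    """Stub for the subordinate pricing module: prints verification
    of entry, does minimal data manipulation, and returns."""
    print(f"stub entered with item_id={item_id}")
    return 100.0  # fixed, predictable value

def compute_discount(item_id, percent, lookup=lookup_base_price_stub):
    """Module under test: the discount is computed from the base
    price supplied by the subordinate (stubbed) module."""
    base = lookup(item_id)
    return base * (1 - percent / 100.0)

def driver():
    """Driver routine: calls the module with prepared test cases and
    compares actual results against the results the design predicts."""
    test_cases = [(1, 10, 90.0), (2, 25, 75.0), (3, 0, 100.0)]
    for item_id, percent, expected in test_cases:
        actual = compute_discount(item_id, percent)
        assert actual == expected, f"case {item_id}: {actual} != {expected}"
    return "all unit test cases passed"

print(driver())
```

As integration proceeds, the stub would later be replaced by the real subordinate module, and the driver by the real calling module.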
2.1.2(b) Integration Testing
The objective of integration testing is to make sure that the interaction of two or more components produces results that satisfy the functional requirements. In integration testing, test cases are developed with the express purpose of exercising the interfaces between the components [19].
Integration testing can also be treated as testing the assumptions of fellow programmers. Assumptions are made about how data will be received from different components and how data must be passed to other components [15]. During unit testing, these assumptions are not tested; a purpose of integration testing is to make sure that they are valid. There are many reasons for integration to go wrong; it could be because of:
• Interface misuse - a calling component calls another component and makes an error in its use of the interface, probably by calling it or passing parameters in the wrong sequence.
• Interface misunderstanding - a calling component makes assumptions about the other component's behaviour which are incorrect.
Integration testing can be performed in different ways, based on where we start testing and in which direction we progress:
• Big Bang Integration Testing
• Top Down Integration Testing
• Bottom Up Integration Testing
• Hybrid Integration Testing
Big Bang Integration Testing:
In big bang integration testing, the individual modules of the program are not integrated until everything is ready. In this approach, the program is integrated without any incremental integration testing, and then run to ensure that all the components work properly [22].
In practice, this method often forces the programmer to re-separate parts of the program to find the cause of the errors, thereby effectively performing a full integration test, although in a manner that lacks the controlled approach of the other methods.
Disadvantages: there are many disadvantages of the big bang approach:
• Defects present at the interfaces of the components are identified at a very late stage.
• It is very difficult to isolate a defect once found, as it is hard to tell whether the defect is in a component or in an interface.
• There is a high probability of missing some critical defects, which might then surface in production.
• It is very difficult to make sure that all the cases for integration testing are covered.
Top-Down Integration Testing
Top-down integration testing is an incremental integration testing technique which begins by testing the top-level module and progressively adds lower-level modules one by one. Lower-level modules are normally simulated by stubs which mimic their functionality. As the lower-level code becomes available, the stubs are replaced with the actual components [23].
Top-down integration can be performed and tested in a breadth-first or depth-first manner.
Advantages:
• Drivers do not have to be written when top-down testing is used.
• It provides an early working version of the program, so design defects can be found and corrected early.
Disadvantages:
• Stubs have to be written with utmost care, as they simulate the setting of output parameters.
• It is difficult to have other people or third parties perform this testing; mostly the developers will have to spend time on it.
Bottom-Up Integration Testing
In bottom-up integration testing, the modules at the lowest level are developed first, and the other modules leading towards the 'main' program are integrated and tested one at a time. Bottom-up integration also uses test drivers to drive the lower-level modules and pass appropriate data to them. As the code for each other module becomes ready, the drivers are replaced with the actual modules [24].
In this approach, the lower-level modules are tested extensively, which makes sure that the most heavily used modules are tested properly.
Advantages:
• The behaviour of the interaction points is crystal clear, as components are added in a controlled manner and tested repetitively.
• Appropriate for applications where a bottom-up design methodology is used.
Disadvantages:
• Writing and maintaining test drivers or harnesses is more difficult than writing stubs.
• This approach is not suitable for software developed using a top-down approach.
Hybrid Integration Testing
Top-down and bottom-up testing each have their advantages and disadvantages. Top-down integration testing makes it easy to follow a top-down software development process, while in bottom-up testing the most heavily used code is tested repetitively [25].
The hybrid integration testing approach tries to leverage the benefits of both top-down and bottom-up testing.
While it is important to take advantage of both approaches using hybrid integration techniques, we need to make sure that we do so thoroughly. Otherwise it will be very difficult to identify which modules were tested top-down and which bottom-up, and it is possible to end up missing some modules altogether if proper caution is not exercised.
2.1.2(c) System Testing
System testing is probably the most important phase of the complete testing cycle. This phase is started after the completion of the other phases such as unit, component and integration testing. During the system phase, non-functional testing also comes into the picture: performance, load, stress and scalability testing are all performed in this phase [26].
Utmost care is exercised over defects uncovered during the system phase, and a proper impact analysis should be done before fixing a defect. Sometimes, if the business permits, defects are simply documented and mentioned as known limitations instead of being fixed.
The system testing phase also prepares the team for more user-centric testing, i.e. user acceptance testing.
Entry Criteria
• Unit, component and integration tests are complete.
• Defects identified during these test phases are resolved and closed by the QE team.
• The team has sufficient tools and resources to mimic the production environment.
Exit Criteria
• Test case execution reports show that the functional and non-functional requirements are met.
• Defects found during system testing are either fixed after a thorough impact analysis or documented as known limitations.
2.1.2(d) User Acceptance Testing
Acceptance testing is the formal testing conducted to determine whether a software system satisfies its acceptance criteria, and to enable the buyer to determine whether or not to accept the system.
Acceptance testing is designed to determine whether the software is fit for use. Apart from the functionality of the application, other factors related to the business environment also play an important role [27].
User acceptance testing is different from system testing. System testing is invariably performed by the development team, which includes developers and testers. User acceptance testing, on the other hand, should be carried out by the end users. This could take the form of:
• Alpha testing - where tests are conducted at the development site by the end users. The environment can be controlled somewhat in this case.
• Beta testing - where tests are conducted at the users' own site by the end users, in an environment that is not controlled by the development team.
In both cases, the testing might be assisted by software testers.
A well-defined acceptance plan helps the development/QE teams by defining the users' needs during software development. The acceptance test plan must be created or reviewed by the customer. The development team and the customer should work together to:
• Identify interim and final products for acceptance, the acceptance criteria and the schedule.
• Plan how and by whom each acceptance activity will be performed.
• Schedule adequate time for buyer’s staff to examine and review the product.
• Prepare the acceptance plan.
• Perform formal acceptance testing at delivery.
• Make a decision based on the results of acceptance testing.
Entry Criteria
• System testing is complete and the defects identified are either fixed or documented.
• Acceptance plan is prepared and resources have been identified.
• Test environment for the acceptance testing is available.
Exit Criteria
• Acceptance decision is made for the software.
• In case of any caveats, the development team is notified.
2.1.3 Categorization of testing based on the Purpose
This can be classified further into Functional Testing and Non Functional Testing.
2.1.3(a) Functional Testing
In functional testing, the focus of the testing activities is on the functional aspects of the system. Test cases are written to check the expected output. Functional testing is normally performed in all the test phases, from unit to system testing.
The following types of test activities are normally performed under functional testing.
Installation Testing
Installation testing is one of the most important parts of the testing activities. Installation is the user's first interaction with our product, and it is very important to make sure that the user does not have any trouble installing the software.
The type of installation testing done is affected by many factors, such as:
• Which platforms and operating systems do we support?
• How will we distribute the software?
Installation testing for different platforms
The process of installing our software can differ from platform to platform. It might need a GUI on Windows or a plain command line on UNIX boxes.
Usually an installer asks a series of questions, and the installation changes based on the user's responses. It is always a good idea to create a tree structure of the options available to the user and, if possible, cover all unique paths of installation.
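Such an option tree can be built and its unique paths enumerated mechanically. The sketch below (in Python; the installer questions and answers are hypothetical, invented for illustration) prints every root-to-leaf installation path, each of which becomes one installation test case:

```python
# Hypothetical installer option tree: each key is either a question
# the installer asks or one of its possible answers; an empty dict
# marks the end of an installation path.
install_tree = {
    "install_type": {
        "typical": {"desktop_shortcut": {"yes": {}, "no": {}}},
        "custom": {"component": {"core_only": {}, "core_plus_docs": {}}},
    }
}

def unique_paths(tree, prefix=()):
    """Depth-first walk; yields one tuple per root-to-leaf path."""
    if not tree:
        yield prefix
        return
    for option, subtree in tree.items():
        yield from unique_paths(subtree, prefix + (option,))

paths = list(unique_paths(install_tree))
for p in paths:
    print(" -> ".join(p))
# Each printed path is one unique installation to cover in testing.
```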
The person performing installation testing should certainly know what to expect after the installation is done. Tools that compare the file system, registry, DLLs etc. are very handy for making sure that the installation is proper.
Most installers support silent installation; this also needs thorough testing. The main thing to look at here is the configuration files used for the installation. Any change made in a configuration file should have the proper effect on the installation.
If the installation depends on other components, such as a database or a server, test cases should be written specifically to address this.
Installation testing based on distribution:
Apart from the sample cases covered above, special cases should be written to test how the software will be distributed.
If the software is distributed in physical CD format, the test activities should include the following:
• Test cases should be executed from ISO images if getting a physical CD is not possible.
• Test cases should be present to check the sequence of CDs used.
• Test cases should be present for the graceful handling of a corrupted CD or image.
If the software is distributed over the internet, test cases should be included for:
• Bad network speed and broken connection.
• Firewall and security-related issues.
• Size and approximate time taken to download.
• Concurrent installation/downloads.
Regression Testing
As someone has said, change is the only constant in this world. This holds true for software as well: existing software is either being changed or removed from the shelves. These changes could be for any reason: there could be critical defects which need to be fixed, or enhancements which have to happen in order to remain competitive [28].
Regression testing is done to ensure that enhancements, defect fixes or any other changes made to the software have not broken any existing functionality.
Regression testing is very important because, these days, most places use iterative development with shorter cycles, with some functionality added in every cycle. In this scenario, it makes sense to perform regression testing in every cycle to make sure that new features do not break any existing functionality.
During the regression cycle it becomes very important to select the proper test cases to get the maximum benefit. Test cases for regression testing should be selected based on the knowledge of:
• What defect fixes, enhancements or other changes have gone into the system?
• What is the impact of these changes on the other aspects of the system?
The focus of regression testing is always on the impact of changes to the system. In most organizations, the priority of a regression defect is very high; it is normally included in the exit criteria of the product that it should have zero regression defects.
Regression testing is a continuous process that happens after every release. Test cases are added to the regression suite after every release and executed repeatedly. Because the test cases of the regression suite are executed for every release, they are perfect candidates for automation.
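Because the suite is re-executed in full for every release, it maps naturally onto an automated test framework. The following sketch uses Python's standard unittest module; the function under test and its defect history are invented for illustration:

```python
import unittest

def apply_discount(price, percent):
    """Function under regression test; an earlier release had a
    defect for percent == 0, so that case stays in the suite."""
    return round(price * (1 - percent / 100.0), 2)

class RegressionSuite(unittest.TestCase):
    # Cases accumulate release after release; none are removed.
    def test_release_1_basic(self):
        self.assertEqual(apply_discount(200.0, 50), 100.0)

    def test_release_2_zero_percent_fix(self):
        # Added when the zero-discount defect was fixed.
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_release_3_rounding(self):
        self.assertEqual(apply_discount(10.0, 33), 6.7)

suite = unittest.TestLoader().loadTestsFromTestCase(RegressionSuite)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(f"ran {result.testsRun} regression cases, "
      f"{len(result.failures)} failures")
```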
Backward and Upgrade Testing
In any software product, new releases or new versions are inevitable. Organizations spend a lot of money and resources to improve their existing software. Continuous improvement is necessary for any software product to remain competitive in the market. On average, every piece of software is upgraded at least once a year.
This gives rise to the need to test a different aspect of software, known as backward and upgrade testing. Considerable effort is spent in making sure that the software can be upgraded without affecting users in any adverse way. With every new version of the product, one of the main criteria should be that whatever effort users have spent on the older version is not wasted.
Though backward and upgrade testing are different, they are very similar, as we will see in the following sections.
Backward Testing
Testing that ensures that a new version of the product continues to work with the assets created with the older product is known as backward compatibility testing. For example, consider a simple case of an Excel worksheet. Suppose a user has created a very complex Excel sheet to track project schedules, resources, expenses, future plans etc. If the user upgrades from Excel 2000 to Excel 2003 and some of the functionality stops working, the user will not be delighted.
Upgrade Testing
The scope of upgrade testing is a bit broader than that of backward compatibility testing. In upgrade testing, apart from making sure that assets created with older versions can be used properly, we also make sure that the user's learning is not challenged. We also make sure that the upgrade process is simple and that users do not have to invest a lot of time and resources to upgrade the product.
Accessibility Testing
Accessibility testing is the technique of making sure that our product is accessibility compliant. There can be many reasons why our product needs to be accessibility compliant.
Typical accessibility problems can be classified into the following four groups, each with different access difficulties and issues:
Visual Impairments
Such as blindness, low or restricted vision, or colour blindness. Users with visual impairments use assistive technology software that reads content aloud. Users with weak vision can also make text larger with browser settings or with the magnifier setting of the operating system.
Motor skills
Such as the inability to use a keyboard or mouse, or to make fine movements.
Hearing impairments
Such as reduced or total loss of hearing.
Cognitive abilities
Such as reading difficulties, dyslexia or memory loss.
For accessibility testing to succeed, the test team should plan a separate cycle for accessibility testing. Management should make sure that the team knows what to test and that all the tools needed to test accessibility are available to it.
Typical test cases for accessibility might look similar to the following examples.
• Make sure that all functions are available via the keyboard only (do not use the mouse).
• Make sure that information remains visible when the display setting is changed to a high-contrast mode.
• Make sure that screen-reading tools can read all the text available, and that every picture/image has corresponding alternate text associated with it.
Internationalization Testing
The world is flat. If you are reading this page, chances are that you are experiencing this as well. It is very difficult to survive in the current world if we sell our product in only one country or geographical region. Even if we sell all over the world, if our product is not available in the regional languages we might not be in a comfortable position.
Internationalization testing is the process which ensures that the product's functionality is not broken, and that all messages are properly externalized, when it is used in different languages and locales.
Internationalization testing is also called I18N testing, because there are 18 characters between I and N in internationalization.
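The naming convention is easy to verify in code, and the same scheme yields L10N for localization:

```python
def numeronym(word):
    """Abbreviate a word as its first letter, the count of the
    letters in between, and its last letter (e.g. I18N, L10N)."""
    return f"{word[0]}{len(word) - 2}{word[-1]}"

print(numeronym("internationalization"))  # i18n
print(numeronym("localization"))          # l10n
```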
Internationalization, globalization and localization are normally used together. Though the objective of these terms is the same, i.e. to make sure that the product is ready for the global market, they serve different purposes and have different meanings. We will explore these terms in more detail.
Globalization:
Globalization is the process of developing, manufacturing and marketing software products that are intended for worldwide distribution. An important feature of these products is that they support multiple languages and locales.
Globalization is achieved through internationalization and localization.
Internationalization:
In I18N testing, the first step is to identify all the textual information in the system. This includes all the text present on the application's GUI and any text/messages that the application produces, including error messages, warnings and help/documentation.
The main focus of I18N testing is not to find functional defects but to make sure that the product is ready for the global market. As with other non-functional testing, it is assumed that functional testing has been completed and that all functionality-related defects have been identified and removed.
Application Programming Interface (API) Testing
Before delving into the subject of API testing, we should understand what an API, or Application Programming Interface, is. An API is a collection of software functions and procedures, called API calls, that can be executed by other software applications.
API testing is mostly used for systems which have a collection of APIs that need to be tested. The system could be system software, application software or a library.
API testing is different from other testing types, as a GUI is rarely involved. Even though no GUI is involved, we still need to set up the initial environment, invoke the API with the required set of parameters, and then analyze the results.
The output of an API could be some data or a status, or the API might simply wait for some other call to complete in a synchronized environment. Most API test cases are based on the output; an API may:
• Return a value based on the input condition
This is relatively simple to test, as the input can be defined and the results can be validated against the expected return value. For example, it is very easy to write test cases for an int add (int a, int b) kind of API: we can pass different combinations of a and b and validate the results against known values.
• Not return anything
For cases like these, we will probably have some mechanism to check the behaviour of the API on the system. For example, if we need to write test cases for a delete (ListElement) function, we would probably validate the size of the list and the absence of the element from the list.
• Trigger some other API/event/interrupt
If the API triggers an event or raises an interrupt, then we need to listen for those events and interrupts. Our test suite should call the appropriate API, and the assertions should be on the interrupts and listeners.
• Update a data structure
This category is similar to the category of APIs which do not return anything. Updating a data structure will have some effect on the system, and that should be validated. If we have other means of accessing the data structure, they should be used to validate that the data structure has been updated.
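Three of these categories can be sketched in a few lines of Python; the add and delete APIs below are the hypothetical examples from the text, not any real library:

```python
# Category 1: API that returns a value based on the input condition.
def add(a, b):
    return a + b

assert add(2, 3) == 5       # validate against known results
assert add(-1, 1) == 0

# Categories 2 and 4: an API that returns nothing but updates a
# data structure; we validate through another means of access.
items = ["alpha", "beta", "gamma"]

def delete(element):
    items.remove(element)    # no return value

size_before = len(items)
delete("beta")
assert len(items) == size_before - 1   # list shrank by one
assert "beta" not in items             # element is absent
print("API test cases passed")
```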
2.1.3(b) Non Functional Testing
In non-functional testing, the focus of the testing activities is on the non-functional aspects of the system. Non-functional testing is normally carried out during the system testing phase only. Its focus is on the behaviour and user experience of the system. Non-functional testing can be divided into different types:
Performance Testing
In the current era, when we hardly have any stand-alone desktop applications, performance, load and stress testing become key to the success of our application. Performance testing comes under non-functional testing.
Performance testing is conducted after the completion of functional testing, normally during the system testing phase. The objective of performance testing is not to find functional defects in the system; it is assumed that functional defects have already been identified and removed.
Performance testing is usually conducted for web applications. The main objective of performance testing is to obtain information on the response time, throughput and utilization under a given load. In order to perform performance testing on a web application, we need to know at least these two things:
• The expected load, which could be in terms of concurrent users or HTTP connections.
• The acceptable response time.
During performance testing the whole system can be optimized at various levels:
• Application level
• Database level
• Operating system level
• Network level
Performance testing can be performed as a white-box or a black-box activity. In the white-box approach, the system is inspected and performance tuning is performed wherever possible to improve the performance of the system. In the black-box approach, the test engineer uses tools that simulate concurrent users/HTTP connections and measures the response times.
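In the black-box spirit, concurrent users can be simulated and the response time measured. The sketch below is a toy harness in Python: threads stand in for concurrent users, handle_request is a placeholder for a real HTTP call, and the acceptable response time is an assumed requirement:

```python
import threading, time

ACCEPTABLE_RESPONSE_TIME = 0.5   # seconds; assumed requirement

def handle_request():
    """Placeholder for one HTTP request to the system under test."""
    time.sleep(0.01)             # simulated server work

def simulated_user(response_times, lock):
    start = time.perf_counter()
    handle_request()
    elapsed = time.perf_counter() - start
    with lock:                   # the list is shared across threads
        response_times.append(elapsed)

response_times, lock = [], threading.Lock()
threads = [threading.Thread(target=simulated_user,
                            args=(response_times, lock))
           for _ in range(20)]   # 20 concurrent simulated users
for t in threads:
    t.start()
for t in threads:
    t.join()

worst = max(response_times)
print(f"20 requests, worst response time {worst:.3f}s")
assert worst <= ACCEPTABLE_RESPONSE_TIME, "response time requirement missed"
```

A real load test would replace handle_request with an actual HTTP request and report throughput and utilization as well.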
Usability Testing
Software usability testing is an example of non-functional testing. Usability testing evaluates how easy a system is to learn and use. There are enormous benefits to usability testing, but there is still not much awareness of the subject.
The benefits of usability testing can be summarised as:
• It is easier for the sales team to sell a highly usable product.
• Usable products are easy to learn and use.
• Support costs are lower for a usable product.
According to the ISO definition, usability is the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.
It is very important to understand following before starting any usability testing activities.
Specified users - who will be the targeted user population? A system usable for a businessman could be highly unusable for farmers. The targeted audience should be identified clearly.
Specified goals - the usability testing team should understand the primary goals of the system. A usable system will rarely have fancy functionality, as it might be irrelevant to 80% of the users.
Effectiveness and efficiency - these can be measured in terms of the accuracy and completeness with which users achieve the specified goals in a minimum amount of time.
Context of use - it is very important to understand the context in which the software will be used before usability testing. Usability testing of a video game will be different from that of sophisticated software used by a person in a space shuttle.
The main crux of usability testing is to make sure that a user can use the software with ease and can complete specified tasks effectively and efficiently. Usability testing can be divided into usability testing with users and without users.
Security Testing
Security testing has become very important in today's world because of the way computers and the internet have affected individuals and organizations. Today, it is very difficult to imagine the world without the internet and the latest communication systems. All these communication systems increase the efficiency of individuals and organisations many-fold [29].
The primary purpose of security testing is to identify vulnerabilities and subsequently repair them. Typically, security testing is conducted after the system has been developed, installed and is operational. Unlike other types of testing, network security testing is performed on the system on a periodic basis to make sure that all the vulnerabilities of the system are identified.
Network security testing can be further classified into the following types:
• Network scanning
• Vulnerability scanning
• Password cracking
• Log review
• File integrity checkers
• Virus detection
• War dialling
• Penetration testing
None of these tests provides a complete picture of network security. We will need to perform a combination of these techniques to ascertain the status of network security.
• Network Scanning
Network scanning involves using a port scanner to identify all hosts potentially connected to an organization's network, the network services operating on those hosts, such as the file transfer protocol (FTP) and hypertext transfer protocol (HTTP), and the specific applications running the identified services, such as Internet Information Server (IIS) and Apache for the HTTP service [30]. The result of the scan is a comprehensive list of all active hosts and services, printers, switches, and routers operating in the address space scanned by the port-scanning tool, i.e. any device that has a network address or is accessible to any other device.
The purposes of network port scanning are to:
• Check for unauthorized hosts connected to the organization’s network.
• Identify vulnerable services.
• Identify deviation from the allowed services defined in the organization’s security policy.
• Prepare for penetration testing.
• Assist in the configuration of the intrusion detection system (IDS).
• Collect forensic evidence.
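A minimal port scan needs nothing more than the standard socket library. The sketch below (Python; scanning the local machine only, since scanning hosts without authorization may be illegal) reports which of a handful of ports accept a TCP connection:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` on `host` that accept a TCP
    connection; closed or filtered ports are simply absent."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Scan a few well-known service ports on the local machine only.
print(scan_ports("127.0.0.1", [21, 22, 80, 443]))
```

Real scanners such as nmap add service and application fingerprinting on top of this basic connect test.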
• Vulnerability Scanning
Vulnerability scanners take the concept of a port scanner to the next level. Like a port scanner, a vulnerability scanner identifies hosts and open ports, but it also provides information on the associated vulnerabilities. Most vulnerability scanners also attempt to provide information on mitigating discovered vulnerabilities [31].
Vulnerability scanners can also help identify out-of-date software versions, applicable patches or system upgrades, and can validate compliance with, or deviation from, the organization's security policy. To accomplish this, vulnerability scanners identify the operating systems and major software applications running on the hosts and match them with known exposures. Scanners employ large databases of vulnerabilities to identify flaws associated with commonly used operating systems and applications.
• Password Cracking
Password cracking programs can be used to identify weak passwords. Password cracking verifies that users are employing sufficiently strong passwords. Passwords are generally stored and transmitted in an encrypted form called a hash. When a user logs on to a computer/system and enters a password, a hash is generated and compared to a stored hash. If the entered and stored hashes match, the user is authenticated [32].
An automated password cracker rapidly generates hashes until a match is found. The fastest method of generating hashes is a dictionary attack, which uses all the words in a dictionary or text file. Another method of cracking is called a hybrid attack, which builds on the dictionary method by adding numeric and symbolic characters to dictionary words. Depending on the password cracker being used, this type of attack tries a number of variations, such as common substitutions of numbers and symbols for letters.
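The dictionary and hybrid attacks reduce to a loop over candidate words: hash each one and compare against the stored hash. A minimal sketch (Python's hashlib with SHA-256; real password stores add per-user salts, which this toy omits, and the word list is invented):

```python
import hashlib

def sha256_hex(word):
    return hashlib.sha256(word.encode()).hexdigest()

def dictionary_attack(stored_hash, wordlist):
    """Try every dictionary word; a tiny hybrid step also tries
    each word with a single appended digit."""
    for word in wordlist:
        if sha256_hex(word) == stored_hash:
            return word
        for digit in "0123456789":           # hybrid variation
            candidate = word + digit
            if sha256_hex(candidate) == stored_hash:
                return candidate
    return None                              # password not in dictionary

wordlist = ["letmein", "password", "dragon"]
stored = sha256_hex("password1")             # weak password to recover
print(dictionary_attack(stored, wordlist))   # password1
```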
• Log Review
Various system logs can be used to identify deviations from the organization's security policy, including firewall logs, IDS logs, server logs, and any other logs that collect audit data on systems and networks. While not traditionally considered a testing activity, log review and analysis can provide a dynamic picture of ongoing system activities that can be compared with the intent and content of the security policy.
• File Integrity Checkers
A file integrity checker computes and stores a checksum for every guarded file and establishes a database of file checksums. It provides a tool for the system administrator to recognize changes to files, particularly unauthorized changes. Stored checksums should be recomputed regularly, testing the current value against the stored value, to identify any file modifications [33]. A file integrity checker capability is usually included with any commercial host-based intrusion detection system.
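The core of such a checker fits in a few lines: compute and store checksums, then recompute and compare. A minimal sketch (Python, SHA-256 checksums in an in-memory dict; a real tool would persist the database and protect it from tampering):

```python
import hashlib, os, tempfile

def checksum(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def build_database(paths):
    """Compute and store a checksum for every guarded file."""
    return {p: checksum(p) for p in paths}

def verify(database):
    """Recompute each checksum; return the files that changed."""
    return [p for p, stored in database.items() if checksum(p) != stored]

# Demonstration on a temporary file standing in for a guarded file.
fd, path = tempfile.mkstemp()
os.write(fd, b"original contents")
os.close(fd)

db = build_database([path])
print(verify(db))                 # [] - nothing modified yet

with open(path, "wb") as f:       # simulate an unauthorized change
    f.write(b"tampered contents")
print(verify(db))                 # [path] - modification detected
os.remove(path)
```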
• Virus Detection
A virus detector installed on the network infrastructure is usually installed on mail servers or in conjunction with firewalls at the network border of an organization. Server-based virus detection programs can detect viruses before they enter the network or before users download their e-mail [34]. A further consideration is that all virus detectors require frequent updating to remain effective; this is much easier to accomplish for server-based programs, due to their limited number relative to client hosts.
The other type of virus detection software is installed on end-user machines. This software detects malicious code in e-mails, floppies, hard disks, documents and the like, but only for the local host. It also sometimes detects malicious code from web sites. This type of virus detection program has less impact on network performance, but generally relies on end users to update their signatures, a practice that is not always reliable.
• War Dialling
In an otherwise well-configured network, unauthorized modems are an often overlooked vulnerability. These unauthorized modems provide a means to bypass most or all of the security measures in place. Several software packages are available that allow attackers and network administrators to dial large blocks of phone numbers in search of available modems [35]. This process is called war dialling. A computer with four modems can dial 10,000 numbers in a matter of days. Certain war diallers will even attempt some limited automatic hacking when a modem is discovered.
• Penetration Testing
Penetration testing is security testing in which evaluators attempt to circumvent the security features of a system based on their understanding of the system design and implementation. The purpose of penetration testing is to identify methods of gaining access to a system by using the common tools and techniques used by attackers [36]. Penetration testing should be performed only after careful consideration, notification, and planning.
2.1.4 Categorization of testing based on the Execution
Testing can also be categorized on the basis of execution. Execution could take the form of verification (static analysis) or validation (dynamic analysis). Verification and validation can be further categorized according to how they are done. Barry Boehm defines these terms based on the answers to the following questions:
Verification: are we building the product right?
Validation: are we building the right product?
2.1.4(a) Verification
In very simple terms, verification is the human examination or review of a work product. There are many forms of verification, ranging from informal to formal. Verification can be applied in various phases of the SDLC and can take the form of:
• Walkthroughs or buddy-checking
• Formal inspections or reviews
Inspections or reviews are more formal and are conducted with the help of some kind of checklist. The steps in an inspection or review are:
• Overview and scrutiny of the document.
• Application of a checklist specially prepared for the development plan, SRS, design and architecture.
• Noting observations: ok / not ok, with comments on mistakes or inadequacies.
• Repair and rework.
• Follow-up to ensure that observations are completely dealt with.
The difference between a walkthrough and an inspection is that the former is less formal and quicker, whereas an inspection is more formal, takes more time, and is far more systematic owing to the use of checklists. Both are costly, but the cost incurred is much lower than the cost of repair at a later stage in the development cycle.
2.1.4(b) Validation
Validation, or dynamic analysis, is the activity a tester performs most frequently. Whether one is doing black box testing, non-functional testing or any other type of testing, chances are that one is performing validation, i.e. dynamic analysis. There are many ways in which testing can be executed, for example:
• Manual Scripted Testing
• Exploratory Testing
• Automated Testing
• Model Based Testing
Manual Scripted Testing
Manual testing is the oldest and most rigorous type of software testing. In this type of testing, test cases are designed and reviewed by the team before being executed. There are many variations of this basic approach: test cases can be created at the basic functionality level or at the scenario level [37].
The value of scripted testing has been questioned by many experts in the field, who consider it a waste of resources in most situations. They claim that scripted manual testing closes the mind of the tester and inhibits creativity. This approach is also heavy on documentation: it requires a considerable amount of resources to create the test scripts in the first place, and the scripts often become outdated because of inevitable changes in the system.
This type of testing is mostly seen where a waterfall methodology, along with the V-model, is followed. The approach also suits projects whose requirements are frozen and unlikely to change, as well as situations where safety or regulatory demands require proof of testing in this particular form.
Exploratory Testing
Exploratory testing is defined by one of its most prominent proponents, James Bach, as simultaneous learning, test design and execution. Before 1990, and even now in some parts of the industry, exploratory testing was also known as ad-hoc testing [38].
The term was coined in the early 1990s by the context-driven testing school community. Dr. Kaner emphasized the thought process involved in unscripted testing in one of the best-known books on software testing, "Testing Computer Software".
According to many experts in the software testing field, exploratory testing is much more powerful than traditional scripted testing at finding important defects, and it involves less overhead than traditional manual scripted testing. With the continuously increasing adoption of agile methodologies, the adoption of exploratory testing is likely to increase in future.
One of the main problems with implementing exploratory testing practices is measurement. In many situations, management wants to check progress and needs some evidence of coverage and execution. To address this need, session-based test management can be used, which makes exploratory testing efforts easier to audit and measure.
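As a concrete sketch of how session-based test management might make exploratory work auditable, the small Python class below records one time-boxed session. All names here (the `TestSession` fields, the charter text, the bug ID) are illustrative assumptions, not taken from any particular tool:

```python
from dataclasses import dataclass, field

@dataclass
class TestSession:
    """One time-boxed exploratory testing session (session-based test management)."""
    charter: str                               # the mission statement for the session
    duration_min: int                          # the time box, e.g. 60-120 minutes
    areas_covered: list = field(default_factory=list)
    bugs_found: list = field(default_factory=list)

    def coverage_note(self) -> str:
        """A one-line summary a test lead could audit for coverage and progress."""
        return (f"{self.charter}: {len(self.areas_covered)} area(s), "
                f"{len(self.bugs_found)} bug(s) in {self.duration_min} min")

# Recording one hypothetical session:
session = TestSession(charter="Explore login error handling", duration_min=90)
session.areas_covered += ["empty password", "locked account"]
session.bugs_found.append("BUG-101: no lockout message shown")
print(session.coverage_note())
```

Aggregating such records across sessions is one way to give management the evidence of coverage that unscripted testing otherwise lacks.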
Automated Testing
Automated testing is the concept of automation applied to testing. Automation can be applied to various parts of the testing process, including test case management, defect management, reporting, test case execution and so on.
This section looks at automation only in the context of executing test cases with little or no human intervention.
Typical test automation comprises the following steps:
• Setting up the test preconditions.
• Executing the tests.
• Comparing results with the known oracles.
• Reporting results.
If planned properly, test automation can be one of the most important testing activities and can yield tremendous gains. There are many test automation tools available in the market from all the major tool vendors, and many more from the open-source community. Most of these tools focus on automated record-and-playback styles of testing.
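The four steps of typical test automation can be sketched as a minimal harness. The `run_automated_test` helper and the toy system under test are assumptions for illustration only:

```python
def run_automated_test(name, setup, execute, oracle):
    """Minimal automation harness following the four typical steps."""
    setup()                       # 1. set up the test preconditions
    actual = execute()            # 2. execute the test
    passed = (actual == oracle)   # 3. compare the result with the known oracle
    print(f"{name}: {'PASS' if passed else 'FAIL'}")  # 4. report the result
    return passed

# A hypothetical system under test: here just an arithmetic expression,
# with a shared `state` dict standing in for the test environment.
state = {}
ok = run_automated_test(
    name="addition",
    setup=lambda: state.clear(),
    execute=lambda: 2 + 3,
    oracle=5,
)
```

Real frameworks add scheduling, logging and richer oracles, but the setup/execute/compare/report skeleton is the same.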
Model Based Testing
It is true that in order to test any system effectively, we first need to understand what the implementation under test is supposed to do. As software grows ever more complex, testers can use models to understand the system and to support the test design process.
In model based testing, input and state combinations can be enumerated systematically. To understand model based testing, it is important to understand the concept of a model first. According to Binder (2000), every model has four main elements: subject, point of view/theory, representation and technique.
Subject - the core idea around which models are created. In testing, models of the system under test typically help us select effective test cases.
Point of View - a model must be based on a frame of reference and principles that can guide the identification of relevant issues and information. Software testing models typically express required behaviour and focus on aspects of the system suspected to be buggy. From the software testing point of view, models must establish the information necessary to produce and evaluate test cases.
Representation - a modelling technique must have a means of expressing a particular model. This may be a wire frame, a CAD model or anything else.
Technique - models can be developed using any technique, from UML notation for the formal ones to even simpler artefacts for general use.
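As a minimal illustration of how model based testing enumerates state and input combinations systematically, the sketch below uses a hypothetical two-state turnstile as the system model (the model and its transitions are assumptions for illustration):

```python
from itertools import product

# A toy behavioural model: a turnstile with two states.
# The transition table maps (state, input) -> expected next state.
MODEL = {
    ("locked", "coin"): "unlocked",
    ("locked", "push"): "locked",
    ("unlocked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
}

def generate_test_cases():
    """Systematically enumerate every (state, input) combination in the model,
    pairing each with the expected result derived from the transition table."""
    states, inputs = {"locked", "unlocked"}, {"coin", "push"}
    return [(s, i, MODEL[(s, i)]) for s, i in product(sorted(states), sorted(inputs))]

for start, action, expected in generate_test_cases():
    print(f"given {start}, apply {action} -> expect {expected}")
```

Each generated triple is an abstract test case; a test harness would drive the real implementation through the same inputs and compare its observed state against the model's prediction.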
Related Research in Testing
A few related research efforts undertaken in this area are summarized below:
1. C. Doungsa-ard et al., Member IEEE, in their study "AI Based Framework for Automatic Test Data Generation", observed that software testing is labour intensive. Gray-box testing is a combination of white-box and black-box testing. A framework for automatically producing test data by the gray-box method is proposed. The proposed framework is pluggable, i.e. it can host many test generation approaches. The first techniques selected for generating test data are randomized generation and a genetic algorithm [39].
2. Cyrille Artho and Armin Biere in their study "Advanced Unit Testing - How to Scale Up a Unit Test Framework" stated that unit testing is an effective way to find software faults. In the JNuke project, automated regression tests were combined to ensure high code quality. Automated support for log files made it possible to inspect the internal state of objects. These extensions allow a unit test framework to scale up to large-scale tests [40].
3. Pulei Xiong and Robert L. Probert, in their study "A Multi-Approach Testing Methodology in TTCN for Web Applications", stated that testing Web Applications (WAs) is challenging because it involves many test tools. They proposed a multi-approach testing methodology for WAs that specifies test suites in TTCN-3 at an abstract level. This approach increases the reusability of test cases (TCs) and raises the degree of test automation. The goal of proposing a multi-approach testing methodology is to build a comprehensive testing architecture [41].
4. B. Hailpern and P. Santhanam in their study "Software Debugging, Testing, and Verification" revealed that, in software organizations, higher customer expectations of quality have placed a major responsibility on the areas of software debugging, testing and verification [42]. There have been exciting improvements on all three fronts, but tools that incorporate the more advanced aspects of this technology are not yet ready for large-scale commercial use.
5. Bernhard Beckert and Christoph Gladisch in their study "White-Box Testing by Combining Deduction-Based Specification Extraction and Black-Box Testing" proposed using deductive program verification systems to generate specifications for a given program and then using those specifications as input for black-box testing tools. The program's structure is thus contained in the specification, giving a clear interface between program analysis on the one hand and test case generation on the other, which allows a wide range of tools to be combined [43].
6. Smirnov Sergey in his study "Software Security DVD011 - Software Testing: Black-Box Techniques" stated that software is used in every educational, business and financial organization; high-quality software is therefore needed, meaning software should be properly tested and verified before system integration time. Several black-box methods were considered with their strengths and weaknesses [44]. The potential of automated black-box techniques for better performance in testing reusable components was also studied.
7. Ian Sommerville in his study "Software Testing" drew the distinction between validation testing and defect testing, described the principles of system and component testing, discussed strategies for generating system test cases, and explained the essential characteristics of tools used for test automation. Testing shows the presence of faults in a system but cannot prove that no faults remain. Equivalence partitioning is a way of discovering test cases: all cases in a partition should behave in the same way [45]. Test automation reduces testing costs by supporting the test process.
8. James A. Whittaker in his study "Feature - What Is Software Testing? And Why Is It So Hard?" stated that software testing is arguably the least understood part of the development process. Through a four-phase approach, he showed why eliminating bugs is tricky and why testing is a constant trade-off. Companies face serious challenges in testing their products, and these challenges grow bigger as software grows more complex. The complex nature of testing should be recognized and taken seriously [46].
9. Anton Michlmayr et al. in their study "Specification-Based Unit Testing of Publish/Subscribe Applications" stated that testing remains a key verification method for software systems, in which the behaviour of a system is evaluated against its informal or formal specifications. An architecture-driven approach to software testing is considered, since difficulties in testing can be reduced by optimizing the test methodology to leverage the architecture of the application under test [47]. Attention is given to the design of their framework, illustrating how to accomplish unit testing of publish/subscribe applications against LTL specifications.
10. Mark Last et al. in their study "Effective Functional Testing with Genetic Algorithms" revealed that functional test cases are identified from the functional requirements of the tested system, which is viewed as a mathematical function mapping its inputs onto its outputs. An effective set of test cases is one that has a high probability of detecting faults present in a computer program [48]. A new computational-intelligence approach for generating effective test cases, based on a novel Fuzzy-Based Age Extension of Genetic Algorithms (FAexGA), is used. The basic idea is to eliminate "bad" test cases and multiply "good" test cases that have a high probability of producing an erroneous output.
11. Gregory M. Kapfhammer in his study "Software Testing" stated that the construction of a testing program requires a large intellectual effort. A model of execution-based software testing is explained [49]. Software testing is not a "silver bullet" that guarantees the production of high-quality applications; however, empirical investigations have shown that rigorous, consistent and intelligent application of testing techniques can improve software quality. The stages of test case specification, test case generation, test execution, test adequacy evaluation and regression testing play an important role in producing programs that meet their intended specification.
12. Ram Chillarege in his study "Software Testing Best Practices" presented a list of twenty-eight best practices that contribute to improved software testing; they do not necessarily relate to software test tools. The basic practices discussed include functional specification, review and inspection, formal entry and exit criteria, functional test variations, multi-platform testing, automated test execution and usability testing [50].
13. Nicolas Mayer et al. in their study "Towards a Risk-Based Security Requirements Engineering Framework" pointed out that Information Systems (IS), particularly e-business systems, are required to be more secure in order to resist the increasing number of attacks. Security is no longer just a desirable quality of IT systems but is required for compliance with international regulations. The Requirements Engineering (RE) community has started to make successful contributions in the domain of security engineering. This concerns the integration of RE techniques at the early stages of security engineering, as well as the iterative management of security requirements, owing to the intertwining between requirements and software architecture design. The aim of the paper is to show that using and adapting an appropriate set of existing tools and techniques from risk analysis methods improves the effectiveness of an iterative security engineering method starting at the earliest stage of IS development [52].
14. Premkumar T. Devanbu and Stuart Stubblebine in their study "Software Engineering for Security: a Roadmap" presented their perspective on the research issues that arise in the interactions between software engineering and security. They stress the fact that almost every software-controlled system faces threats from potential adversaries, from Internet-aware client applications running on PCs, to complex telecommunications and power systems accessible over the Internet, to commodity software with copy protection mechanisms. Software engineers must be cognizant of these threats and engineer systems with credible defences, while still delivering value to customers [53].
15. Lynette Sparrow et al. in their study "The Slow Wheels of Requirements Engineering Research: Responding to the Challenge of Societal Change" reviewed the state of RE research from 2001 to 2005. A taxonomy of the RE literature is presented, and a conceptual framework for understanding the current state of RE is described. Their analysis shows that during 2000-2005 there was only incremental development of RE research, without any radical theoretical contributions to its body of knowledge. The paper also poses a challenge for the research community to respond to the dramatic changes in the social and business world [54].
CHAPTER-3
RESEARCH METHODOLOGY
3.1 Universe and Sampling Technique
Software testing is the process of checking software to verify that it satisfies its requirements and to detect errors. Software testing is an empirical investigation conducted to provide stakeholders with information about the quality of the product or service under test, with respect to the context in which it is intended to operate. This includes, but is not limited to, the process of executing a program or application with the intent of finding software bugs. Statistics reveal that nearly 30-40% of the effort goes into testing, irrespective of the type of project, yet hardly any time is explicitly allocated for it in project plans. Software testing is a highly complex activity; it is even difficult to say when testing is complete.
To meet the objectives of the study, a questionnaire was prepared. The questionnaire was tested, its validity verified on a small sample, and the necessary changes were incorporated. The data was collected from software development companies: in all, 18 software development companies were selected on a random basis. The companies/institutes to which the questionnaire was distributed are as follows:
1) Cellebrum Technologies Ltd., Mohali
2) Altruist, Shimla
3) Nokia, Rewari
4) HCL, Noida
5) Pyro Networks, Hyderabad
6) KnoahSoft, Hyderabad
7) Satyam, Noida
8) Accenture, Noida
9) Reliance (ADA), Mumbai
10) TCS, Noida
11) Avaya, Pune
12) ADP, Hyderabad
13) AMDOCS, Pune
14) Wipro, Delhi
15) Colt, Noida
16) RSystems, Noida
17) CSE, Delhi
18) Ericsson, Mohali
In total, 50 software testers participated in the study. The percentage for each item was calculated to reach the inferences. The viewpoints of the participants were tabulated, and graphs were drawn to show the results.
CHAPTER-4
COMPARISON OF TESTING TECHNIQUES
4.1 ANALYSIS
Black Box Testing:
Most of the software testers preferred the equivalence class partitioning technique in black box testing. It is evident from Figure 4.1 that 38% of testers preferred equivalence class partitioning, 22% boundary value analysis, 22% cause-effect graphing and 18% decision table based testing. Hence, it may be concluded that most of the software testers favoured equivalence class partitioning, followed by boundary value analysis and cause-effect graphing.
A=Boundary Value Analysis B=Equivalence Class Partitioning C=Cause-Effect Graphing D=Decision Table Based Testing
Figure 4.1: Black Box Testing Techniques
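To make the most-preferred technique concrete, the sketch below applies equivalence class partitioning to a hypothetical `classify_age` function with a valid range of 18-60; both the function and the range are assumptions for illustration:

```python
def classify_age(age):
    """Hypothetical function under test: an age is 'valid' only in 18..60."""
    if not isinstance(age, int):
        return "invalid"
    return "valid" if 18 <= age <= 60 else "invalid"

# Equivalence class partitioning: the input domain is split into classes whose
# members should all behave the same way, so one representative per class suffices.
partitions = {
    "below range": 10,   # invalid class: age < 18
    "in range": 30,      # valid class: 18 <= age <= 60
    "above range": 70,   # invalid class: age > 60
}
for name, representative in partitions.items():
    print(name, "->", classify_age(representative))
```

Three test cases thus stand in for the whole integer domain, which is why respondents often rate the technique highly on cost effectiveness.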
White Box Testing:
Most of the software testers preferred data flow testing among the white box testing techniques. It is evident from Figure 4.2 that 76% of testers preferred data flow testing, 22% path testing and 2% mutation testing. Hence, it may be concluded that the majority of software testers favoured data flow testing.
A= Path Testing B=Data Flow Testing C=Mutation Testing
Figure 4.2: White Box Testing Techniques
Efficiency of Testing Techniques:
Most of the software testers preferred unit testing. It is evident from Figure 4.3 that 36% of testers preferred unit testing, 30% acceptance testing, 24% integration testing and 10% system testing. Hence, it may be concluded that most of the software testers favoured unit testing, followed by acceptance testing and integration testing.
Figure 4.3: Efficient Testing Techniques
Most of the software testers use top-down integration testing. It is evident from Figure 4.4 that 48% of testers preferred top-down testing, 36% bottom-up, 10% big-bang and 6% hybrid testing as an integration testing strategy. Hence, it may be concluded that most of the software testers favoured top-down testing, followed by bottom-up testing.
Figure 4.4: Integration Testing Techniques
Most of the software testers give the greatest importance to correctness among the testing factors. It is evident from Figure 4.5 that 50% of testers preferred correctness, 26% efficiency, 14% reliability, 6% flexibility and 4% reusability. Hence, it may be concluded that most of the software testers favoured correctness, followed by efficiency.
Figure 4.5: Importance of Testing Factors
Flexibility is the testing factor most often ignored by software testers. It is evident from Figure 4.6 that 36% of testers ignored flexibility, 20% reusability, 18% reliability, 14% correctness and 12% efficiency. Hence, it may be concluded that flexibility is the most ignored factor, followed by reusability.
Figure 4.6: Ignorance of Testing Factors
Most of the software testers value stability as a characteristic of testing. It is evident from Figure 4.7 that 48% of testers preferred stability, 16% simplicity, 12% robustness, 10% observability, 8% scalability, 4% controllability and 2% operability. Hence, it may be concluded that testers give priority to stability.
Figure 4.7: Characteristics of Testing
The most common reason for failure of a software system is design faults. It is evident from Figure 4.8 that 40% of testers attributed failures to design faults, 34% to software faults, 24% to testing faults, 2% to environmental faults, and none to documentation faults. Hence, design faults may be regarded as the leading cause of software system failure.
Figure 4.8: Failure of Software System
A= Testing Fault B=Software Fault C =Design Fault D= Environmental Fault E= Documentation Fault
It is evident from Figure 4.9 that 46% of testers spent 20-30% of their time on testing during software development, whereas 24% spent 10-20%, 18% spent 30-40% and 8% spent more than 40% of the total time. Only 4% of testers spent less than 10% of their time on testing during the entire development process.
A=Less than 10% B=10-20% C=20-30% D= 30-40% E= More than 40%
Figure 4.9: Time Devotion during Development of Software
From Figure 4.10, it is clear that 60% of testers believed that the testing phase affects software quality by 30-40%, 20% said more than 40%, and 12% said 10-20%; no tester said less than 10%. Hence, it may be concluded that the testing phase affects software quality by up to 30-40%.
A=Less than 10% B=10-20% C=20-30% D= 30-40% E= More than 40%
Figure 4.10: Effect of Testing on Quality
The views of testers were taken with regard to the efficiency of different software testing techniques on five main factors: flexibility, reliability, reusability, cost effectiveness and correctness. The analysis of the responses received from various testers on the different techniques is presented as bar charts in Figures 4.11 to 4.25. The testers were asked to rate the efficiency of each technique on a scale of 1-5, where 1 means 'Most Inefficient', 2 means 'Inefficient', 3 means 'Average', 4 means 'Efficient' and 5 means 'Most Efficient'.
Boundary Value Analysis:
The efficiency of boundary value analysis on the five factors is shown in Figure 4.11. 70% of testers rated its flexibility as average. For reliability, 48% rated it efficient and 40% average. 58% rated its reusability as efficient, most testers rated its cost effectiveness as efficient to most efficient, and its correctness was rated efficient.
Figure 4.11: Boundary Value Analysis
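For comparison with the survey responses above, a minimal sketch of how boundary value analysis selects test inputs: for an assumed 18-60 input domain (the range and helper name are illustrative), the technique picks values at and immediately adjacent to each boundary, plus a nominal mid-range value.

```python
def boundary_values(lo, hi):
    """Classic boundary value analysis test set for an integer range [lo, hi]:
    values just below, at and just above each boundary, plus a nominal value."""
    return [lo - 1, lo, lo + 1, (lo + hi) // 2, hi - 1, hi, hi + 1]

# For an input domain of 18..60, the BVA test inputs are:
print(boundary_values(18, 60))  # [17, 18, 19, 39, 59, 60, 61]
```

The small, mechanically derived test set is consistent with the cost-effectiveness ratings reported for this technique.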
Equivalence Class Partitioning:
The efficiency of equivalence class partitioning on the five factors is shown in Figure 4.12. 74% of testers rated its flexibility as average, 58% rated its reliability as average to efficient, 58% rated its reusability as average, 54% rated its cost effectiveness as efficient and 25% as average, and its correctness was rated more efficient.
Figure 4.12: Equivalence Class Partitioning
Cause-Effect Graphing Techniques:
The efficiency of cause-effect graphing on the five factors is shown in Figure 4.13. 50% of testers rated its flexibility as efficient and 30% as average, 70% rated its reliability as efficient, 46% rated its reusability as average and 34% as more efficient, 70% rated its cost effectiveness as efficient and 16% as average, and its correctness was rated efficient.
Figure 4.13: Cause-Effect Graphing Techniques
Decision Table Based Testing Techniques:
The efficiency of decision table based testing on the five factors is shown in Figure 4.14. 62% of testers rated its flexibility as average and 28% as efficient, 46% rated its reliability as efficient and 32% as average, 62% rated its reusability as average and 28% as efficient, 56% rated its cost effectiveness as efficient and 24% as average, and 52% rated its correctness as average.
Figure 4.14: Decision Table Based Testing.
Path Testing:
The efficiency of path testing on the five factors is shown in Figure 4.15. 62% of testers rated its flexibility as efficient and 32% as average, 50% rated its reliability as efficient and 45% as average, 58% rated its reusability as efficient and 32% as average, 60% rated its cost effectiveness as efficient and 30% as average, and 50% rated its correctness as efficient and 30% as average.
Figure 4.15: Path Testing
Data Flow Testing:
The efficiency of data flow testing on the five factors is shown in Figure 4.16. 62% of testers rated its flexibility as efficient and 32% as average, 56% rated its reliability as efficient and 30% as average, 56% rated its reusability as average and 38% as efficient, 50% rated its cost effectiveness as inefficient and 48% as average, and 52% rated its correctness as efficient and 26% as average.
Figure 4.16: Data Flow Testing
Mutation Testing:
The efficiency of mutation testing on the five factors is shown in Figure 4.17. 50% of testers rated its flexibility as efficient and 35% as average, 52% rated its reliability as efficient and 32% as average, 48% rated its reusability as efficient and 32% as average, 58% rated its cost effectiveness as efficient and 25% as average, and 60% rated its correctness as average.
Figure 4.17: Mutation Testing.
Unit Testing:
The efficiency of unit testing on the five factors is shown in Figure 4.18. 50% of testers rated its flexibility as average and 35% as efficient, 55% rated its reliability as efficient and 30% as average, 52% rated its reusability as average and 48% as efficient, 62% rated its cost effectiveness as average and 32% as efficient, and 58% rated its correctness as more efficient.
Figure 4.18: Unit Testing
Integration Testing:
The efficiency of integration testing on the five factors is shown in Figure 4.19. 62% of testers rated its flexibility as efficient and 30% as average, 50% rated its reliability as average and 38% as efficient, 50% rated its reusability as average and 42% as efficient, 52% rated its cost effectiveness as efficient and 35% as average, and 72% rated its correctness as efficient.
Figure 4.19: Integration Testing
System Testing
The efficiency of system testing on the five factors is shown in Figure 4.20. 36% of testers each rated its flexibility as average and as more efficient, 46% rated its reliability as average and 42% as efficient, 52% rated its reusability as more efficient and 36% as average, 74% rated its cost effectiveness as more efficient and 36% as average, and 62% rated its correctness as average.
Figure 4.20: System Testing
Acceptance Testing:
The efficiency of acceptance testing on the five factors is shown in Figure 4.21. 35% of testers each rated its flexibility as average and as more efficient, 45% rated its reliability as average and 42% as efficient, 52% rated its reusability as efficient and 35% as average, 75% rated its cost effectiveness as efficient, and 62% rated its correctness as average.
Figure 4.21: Acceptance Testing
Big-Bang Integration Testing:
The efficiency of big-bang integration testing on the five factors is shown in Figure 4.22. 46% of testers rated its flexibility as efficient and 50% as average, 48% each rated its reliability as inefficient and as average, 62% rated its reusability as inefficient and 30% as average, 72% rated its cost effectiveness as efficient and 22% as average, and 38% rated its correctness as more efficient and 30% as efficient.
Figure 4.22: Big-Bang Integration Testing
Top-Down Integration Testing:
The efficiency of top-down integration testing on the five factors is shown in Figure 4.23. 70% of testers rated its flexibility as efficient and 28% as average, 50% rated its reliability as average and 30% as efficient, 42% rated its reusability as average and 38% as efficient, 68% rated its cost effectiveness as efficient and 24% as average, and 52% rated its correctness as more efficient and 32% as efficient.
Figure 4.23: Top-Down Integration Testing
Bottom-Up Integration Testing:
The efficiency of bottom-up integration testing on the five factors is shown in Figure 4.24. 62% of testers rated its flexibility as efficient and 28% as average, 58% rated its reliability as efficient and 22% as more efficient, 50% rated its reusability as average and 42% as efficient, 72% rated its cost effectiveness as efficient and 24% as average, and 60% rated its correctness as average.
Figure 4.24: Bottom-Up Integration Testing
Hybrid Integration Testing:
The efficiency of hybrid integration testing on the five factors is shown in Figure 4.25. 56% of testers rated its flexibility as average and 28% as efficient, 66% rated its reliability as efficient and 24% as average, 46% rated its reusability as efficient and 30% as average, 52% rated its cost effectiveness as efficient and 20% as average, and 52% rated its correctness as efficient and 26% as average.
Figure 4.25: Hybrid Integration Testing
CHAPTER-5
SUMMARY AND CONCLUSION
Summary And Conclusion
During the software testing phase of the software development process, the faults remaining from all the previous phases are detected. Hence, testing plays a very critical role in quality assurance and in ensuring the reliability of software.
During testing, the program or software to be tested is executed with a set of test data, and the output of the program for the test cases is evaluated to determine whether the program has performed as expected. Dynamic testing can only ascertain the presence of errors in the program; the exact nature of the errors is not usually determined by testing. Testing forms the first step in determining the errors in a program. Effective software testing will contribute to the delivery of higher quality software products, more satisfied users, lower maintenance costs, and more accurate and reliable results.
The survey of the literature reveals that only a few attempts have been made at effective testing and at comparing the different existing techniques; a gap therefore exists in research in this area. Keeping in view this gap and its importance, the following problem was undertaken for the purpose of this study.
“SOFTWARE TESTING TECHNIQUES: A COMPARATIVE STUDY”
The specific objectives of the study were as follows:
• Comparative study of different Software testing techniques.
• Effectiveness and supports from different testing techniques.
To study the various software testing techniques, a questionnaire was prepared to meet the objectives of the study. The data was collected through this questionnaire from 18 different software companies.
Software testing has been categorized on the basis of knowledge of the system, time, purpose of testing, and execution. In black box testing the system is treated as a closed system, and the test engineer does not assume anything about how the system was created; while designing black box test data, one must take care not to make assumptions based on knowledge of the system's internals. White box testing is very different in nature from black box testing: in black box testing, the focus of all activities is only on the functionality of the system and not on what is happening inside it. Gray box testing is a combination of black box and white box testing; its intention is to find defects related to bad design or bad implementation of the system. In gray box testing, the test engineer is equipped with knowledge of the system and designs test cases or test data based on that knowledge. Unit testing is the process of taking a module and running it in isolation from the rest of the software product, using prepared test cases and comparing the actual results with those predicted by the specifications and design of the module. The objective of integration testing is to make sure that the interaction of two or more components produces results that satisfy the functional requirements. The system testing phase starts after the completion of the other phases such as unit, component and integration testing; during this phase, non-functional testing also comes into the picture, and performance, load, stress and scalability testing are all performed. User acceptance testing is different from system testing: system testing is invariably performed by the development team, which includes developers and testers, whereas user acceptance testing should be carried out by the end user.
Regression testing is done to ensure that enhancements, defect fixes, or any other changes made to the software have not broken any existing functionality.
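This can be illustrated with a small sketch: a previously reported defect is pinned by a test case so that later changes cannot silently reintroduce it. The `normalize_name` function and its cases are hypothetical.

```python
# Hypothetical regression check: an earlier version crashed on empty
# input; the fix is kept under test so no later change can break it.
def normalize_name(name):
    if not name:
        return ""
    return " ".join(part.capitalize() for part in name.split())

# Regression suite: re-run in full after every enhancement or defect fix.
regression_cases = [
    ("", ""),                         # the originally reported defect
    ("ada lovelace", "Ada Lovelace"),
    ("  alan   turing ", "Alan Turing"),
]

for raw, expected in regression_cases:
    assert normalize_name(raw) == expected
```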
From the analysis it is concluded that, among black box testing techniques, equivalence class partitioning is preferred over the others, such as boundary value analysis and cause-effect graphing. Among white box testing techniques, data flow testing is found more appropriate than path testing and mutation testing. Of unit, integration, system, and acceptance testing, unit testing is preferred most. Among integration testing techniques, top-down integration is found more appropriate than big-bang and bottom-up integration. The most important software testing factor is correctness, ahead of flexibility, reliability, cost effectiveness, and reusability. As for the factor that is vital but rarely attainable, stability ranks ahead of correctness, reliability, efficiency, and reusability. Among testing characteristics, stability is rated more important than operability, observability, robustness, scalability, and simplicity. Among the reasons for failure of a software system, design faults are found to be the most contributing factor. Most testers devote 20-30% of the total development time to testing.
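The preference for equivalence class partitioning can be made concrete with a small sketch; the eligibility rule and its partitions below are hypothetical, chosen only to show how one representative value per class replaces exhaustive input data.

```python
# Hypothetical example: a voter-eligibility check partitions its input
# domain into equivalence classes, so one representative value per
# class suffices instead of testing every possible age.
def is_eligible(age):
    if age < 0 or age > 130:
        raise ValueError("age out of range")
    return age >= 18

# One representative per valid equivalence class:
#   minor (0..17) and eligible (18..130).
representatives = {"minor": 10, "eligible": 40}

assert is_eligible(representatives["eligible"]) is True
assert is_eligible(representatives["minor"]) is False

# Boundary value analysis, by contrast, probes the class edges:
for age, expected in [(17, False), (18, True), (130, True)]:
    assert is_eligible(age) is expected
```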
In equivalence class partitioning, flexibility, reliability, reusability, cost effectiveness, and correctness are all rated average to efficient. In cause-effect graphing, reliability is rated efficient to average, reusability average to most efficient, cost effectiveness efficient, and correctness average to efficient. In decision table based testing, flexibility, reusability, and correctness are rated average to efficient. In path testing, flexibility, reliability, reusability, cost effectiveness, and correctness are rated efficient to average. In data flow testing, reliability, reusability, cost effectiveness, and correctness are rated average to efficient.
In mutation testing, flexibility, reliability, reusability, cost effectiveness, and correctness are rated efficient to average. In unit testing, correctness is rated most efficient. In integration testing, reusability and cost effectiveness are rated most efficient. In acceptance testing, cost effectiveness and correctness are rated efficient.
In big bang integration testing, cost effectiveness is rated efficient. In top-down integration testing, correctness is rated most efficient. In bottom-up integration testing, flexibility, reliability, and cost effectiveness are rated efficient to average. In hybrid integration testing, reliability, reusability, and correctness are rated efficient to average, and cost effectiveness efficient. For producing good quality software, the requirement analysis and specification phase is preferred over the design phase, the implementation and unit testing phase, and the integration and system testing phase.
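Top-down integration, the preferred mode, tests the highest-level module first while not-yet-integrated lower-level modules are replaced by stubs that return canned data. The following sketch uses hypothetical names to show the idea.

```python
# Hypothetical sketch of top-down integration testing.

def fetch_orders_stub(customer_id):
    # Stub standing in for the not-yet-integrated data-access layer;
    # it returns canned data so the top module can be tested now.
    return [("widget", 2), ("gadget", 1)]

def format_report(customer_id, fetch_orders):
    # Top-level module under test; its collaborator is injected so the
    # stub can be swapped for the real module once it is integrated.
    lines = [f"{name} x{qty}" for name, qty in fetch_orders(customer_id)]
    return "\n".join(lines)

report = format_report(42, fetch_orders_stub)
assert report == "widget x2\ngadget x1"
```

As integration proceeds downward, each stub is replaced by the real module and the same top-level tests are re-run.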
Hence we conclude that equivalence class partitioning, data flow testing, unit testing, and top-down integration testing are the techniques most used by testers.
BIBLIOGRAPHY
[1] Myers, G., The Art of Software Testing, Wiley, 1979.
[2] B. Beizer, “Software Testing Techniques”, Van Nostrand Reinhold, New York, 1990.
[3] IEEE, Software Engineering Standards, IEEE Press, 1987.
[4] www.testinggeek.com/testingtype.asp.
[5] P.C. Jorgensen, “Software Testing: A Craftsman's Approach”, CRC Press, USA, 1995.
[6] www.testinggeek.com/blackbox.asp
[7] www.testinggeek.com/boundary.asp
[8] www.testinggeek.com/equivalence.asp
[9] G. Myers, “The Art of Software Testing”, Wiley-Interscience, New York, 1979.
[10] P. Jalote, “An Integrated Approach to Software Engineering”, Narosa, Delhi, 1996.
[11] www.testinggeek.com/whitebox.asp
[12] W.R. Elmendorf, “Cause-Effect Graphs in Functional Testing”, IBM Systems Development Division, TR-00.2487, Poughkeepsie, NY, 1973.
[13] E.F. Miller, “Tutorial: Program Testing Techniques”, COMPSAC '77, IEEE Computer Society, 1977.
[14] A. Bertolino & M. Marré, “Automatic Generation of Path Covers Based on the Control Flow Analysis of Computer Programs”, IEEE Transactions on Software Engineering, Vol. 20, No. 12, Dec. 1994.
[15] T.J. McCabe, “A Complexity Measure”, IEEE Transactions on Software Engineering, SE-2(4), 308-320, December 1976.
[16] M.A. Friedman & Jeffrey M. Voas, “Software Assessment”, John Wiley & Sons, 1995.
[17] www.testinggeek.com/graybox.asp
[18] www.testinggeek.com/unit.asp
[19] www.testinggeek.com/integration.asp
[20] G.W. Jones, “Software Engineering”, John Wiley & Sons, 1990.
[21] R.S. Pressman, “Software Engineering: A Practitioner's Approach”, McGraw-Hill, New York, 1997.
[22] www.testinggeek.com/bigbang.asp
[23] www.testinggeek.com/topdown.asp
[24] www.testinggeek.com/bottomup.asp
[25] www.testinggeek.com/hybrid.asp
[26] www.testinggeek.com/system.asp
[27] www.testinggeek.com/uat.asp
[28] www.testinggeek.com/regression.asp
[29] www.testinggeek.com/security.asp
[30] www.testinggeek.com/secnetwrktesting.asp
[31] J.S. Collofello, “Introduction to Software Verification and Validation”, SEI-CM-13-1.1, Software Engineering Institute, Pittsburgh, PA, USA.
[32] www.testinggeek.com/secpasswdcrckng.asp
[33] www.testinggeek.com/secfileintegrity.asp
[34] www.testinggeek.com/secvirusdetector.asp
[35] www.testinggeek.com/secwardialling.asp
[36] www.testinggeek.com/secpenetration.asp
[37] www.buzzle.com/editorial/4-10-2005-68349.asp
[38] www.amazon.com/Software-Testing-Techniques-Boris-Beizer
[39] C. Doungsa-ard, K. Dahal, and A. Hossain, AI-Based Framework for Automatic Test Data Generation.
[40] Cyrille Artho, Armin Biere, Advanced Unit Testing: How to Scale up a Unit Test Framework.
[41] Pulei Xiong, Robert L. Probert, A Multi-Approach Testing Methodology in TTCN for Web Applications.
[42] B. Hailpern, P. Santhanam, Software Debugging, Testing, and Verification.
[43] Bernhard Beckert and Christoph Gladisch, White-box Testing by Combining Deduction-based Specification Extraction and Black-box Testing.
[44] Sergey Smirnov, Software Security DVD011: Software Testing: Black-box Techniques.
[45] Ian Sommerville, Software Testing, 2004.
[46] James A. Whittaker, What Is Software Testing? And Why Is It So Hard?
[47] Anton Michlmayr, Pascal Fenkam, Schahram Dustdar, Specification-Based Unit Testing of Publish/Subscribe Applications.
[48] Mark Last, Shay Eyal, and Abraham Kandel, Effective Functional Testing with Genetic Algorithms.
[49] Gregory M. Kapfhammer, Software Testing.
[50] Ram Chillarege, Software Testing Best Practices.
[51] J.S. Collofello, “Introduction to Software Verification and Validation”, SEI-CM-13-1.1, Software Engineering Institute, Pittsburgh, PA, USA.
[52] Nicolas Mayer, “Towards a Risk-Based Security Requirements Engineering Framework”.
[53] Premkumar T. Devanbu and Stuart Stubblebine, “Software Engineering for Security: a Roadmap”.
[54] Lynette Sparrow, “The Slow Wheels of Requirements Engineering Research: Responding to the Challenge of Societal Change”.
APPENDIX
A Software Testing Techniques Questionnaire
Q.1 Personal Information
(a) Department:
(b) Total Experience:……years.
(c) Experience in Present Organization: ……..years
(d) Experience in Present Department: ……..years
Q.2 Which technique do you prefer most for test data selection in black box testing? (Please tick the appropriate one)
a) Boundary Value Analysis b) Equivalence Class Partitioning
c) Cause-Effect Graphing d) Decision Table Based Testing
Q.3 Which technique do you prefer most for test data selection in white box testing? (Please tick the appropriate one)
a) Path Testing b) Data Flow Testing
c) Mutation Testing
Q.4 Which is the most efficient testing technique? (give precedence by numbering from 1 to 4 in the brackets in front of the options)
a) Unit Testing ( ) b) Integration Testing ( )
c) System Testing ( ) d) Acceptance Testing ( )
Q.5 Which integration testing do you prefer most? (Please tick the appropriate one)
a) Big Bang Integration Testing ( )
b) Top down Integration Testing ( )
c) Bottom up Integration Testing ( )
d) Hybrid Integration Testing ( )
Q.6 Which is the most important software testing factor? (give precedence by numbering from 1 to 5 in the brackets in front of the options)
a) Flexibility ( ) b) Reliability ( )
c) Reusability ( ) d) Efficiency ( )
e) Correctness ( )
Q.7 Which testing factor is most worth ignoring? (give precedence by numbering from 1 to 5 in the brackets in front of the options)
a) Flexibility ( ) b) Reliability ( )
c) Reusability ( ) d) Efficiency ( )
e) Correctness ( )
Q.8 Which testing factor do you think is vital but rarely attainable? (give precedence by numbering from 1 to 5 in the brackets in front of the options)
a) Flexibility ( ) b) Reliability ( )
c) Reusability ( ) d) Efficiency ( )
e) Correctness ( )
Q.9 Which characteristics of testing do you value most? (give precedence by numbering from 1 to 7 in the brackets in front of the options)
a) Operability ( ) b) Observability ( )
c) Robustness ( ) d) Scalability ( )
e) Controllability ( ) f) Simplicity ( )
g) Stability ( )
Q.10 Which of the following could be a reason for the failure of a software system? (Please tick the appropriate one)
a) Testing fault b) Software fault
c) Design fault d) Environmental fault
e) Documentation fault
Q.11 What percentage of time do you devote to testing during the development of a software product? (Please tick the appropriate one)
a) Less than 10% b) 10-20%
c) 20-30% d) 30-40%
e) More than 40%
Q.12 To what extent does the testing phase affect the quality of the software (percentage-wise)? (Please tick the appropriate one)
a) Less than 10% b) 10-20%
c) 20-30% d) 30-40%
e) More than 40%
Q.13 Kindly comment on the efficiency of the following testing techniques on the scale below (give a number from 1 to 5, taking 1 as the lowest score):
1 = Most Inefficient  2 = Inefficient  3 = Average  4 = Efficient  5 = Most Efficient
Testing Technique               Flexibility  Reliability  Reusability  Cost Effectiveness  Correctness
Boundary Value Analysis             ( )          ( )          ( )             ( )              ( )
Equivalence Class Partitioning      ( )          ( )          ( )             ( )              ( )
Cause-Effect Graphing               ( )          ( )          ( )             ( )              ( )
Decision Table Based Testing        ( )          ( )          ( )             ( )              ( )
Q.14 Kindly comment on the efficiency of the following testing techniques on the scale below (give a number from 1 to 5, taking 1 as the lowest score):
1 = Most Inefficient  2 = Inefficient  3 = Average  4 = Efficient  5 = Most Efficient
Testing Technique               Flexibility  Reliability  Reusability  Cost Effectiveness  Correctness
Path Testing                        ( )          ( )          ( )             ( )              ( )
Data Flow Testing                   ( )          ( )          ( )             ( )              ( )
Mutation Testing                    ( )          ( )          ( )             ( )              ( )
Q.15 Kindly comment on the efficiency of the following testing techniques on the scale below (give a number from 1 to 5, taking 1 as the lowest score):
1 = Most Inefficient  2 = Inefficient  3 = Average  4 = Efficient  5 = Most Efficient
Testing Technique               Flexibility  Reliability  Reusability  Cost Effectiveness  Correctness
Unit Testing                        ( )          ( )          ( )             ( )              ( )
Integration Testing                 ( )          ( )          ( )             ( )              ( )
System Testing                      ( )          ( )          ( )             ( )              ( )
Acceptance Testing                  ( )          ( )          ( )             ( )              ( )
Q.16 Kindly comment on the efficiency of the following integration testing modes on the scale below (give a number from 1 to 5, taking 1 as the lowest score):
1 = Most Inefficient  2 = Inefficient  3 = Average  4 = Efficient  5 = Most Efficient
Testing Technique               Flexibility  Reliability  Reusability  Cost Effectiveness  Correctness
Big Bang Integration Testing        ( )          ( )          ( )             ( )              ( )
Top Down Integration Testing        ( )          ( )          ( )             ( )              ( )
Bottom Up Integration Testing       ( )          ( )          ( )             ( )              ( )
Hybrid Integration Testing          ( )          ( )          ( )             ( )              ( )
Q.17 Which of the following statements shows that the software testing process can never be complete? (Please tick the appropriate one)
a) We can never be certain that the program is bug free.
b) We have no definite stopping point for testing, which makes it easier for some managers to argue for very little testing.
c) We have no easy answer for what testing tasks should always be required, because every task takes time that could be spent on other high importance tasks.
Q.18 In a testing project, the best definition of a testing strategy is (Please tick the appropriate one)
a) The plan for applying resources and selecting techniques to assure quality.
b) The guiding plan for finding bugs.
Q.19 Which phase of the software life cycle is most important for producing good quality software? (give precedence by numbering from 1 to 5 in the brackets in front of the options)
a) Requirement Analysis and Specification phase ( )
b) Design Phase ( )
c) Implementation and unit testing phase. ( )
d) Integration and System testing phase. ( )
e) Operation and Maintenance Phase. ( )
Q.20 Which significant purpose is served most by regression testing? (give precedence by numbering from 1 to 5 in the brackets in front of the options)
a) Locate errors in the modified program. ( )
b) Increase confidence in the correctness of the modified program. ( )
c) Preserve the quality and reliability of software. ( )
d) Ensure the software’s continued operation. ( )
e) Used to create test suites and test plans. ( )
Your name, please (Optional):