
Software Testing Glossary E

efficiency: The capability of the software product to provide appropriate performance, relative to the amount of resources used under stated conditions. [ISO 9126]

efficiency testing: The process of testing to determine the efficiency of a software product.

elementary comparison testing: A black box test design technique in which test cases are designed to execute combinations of inputs using the concept of condition determination coverage. [TMap]

emulator: A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system. [IEEE 610] See also simulator.

entry criteria: The set of generic and specific conditions for permitting a process to go forward with a defined task, e.g. test phase. The purpose of entry criteria is to prevent a task from starting which would entail more (wasted) effort compared to the effort needed to remove the failed entry criteria. [Gilb and Graham]

entry point: The first executable statement within a component.

equivalence class: See equivalence partition.

equivalence partition: A portion of an input or output domain for which the behavior of a component or system is assumed to be the same, based on the specification.

equivalence partition coverage: The percentage of equivalence partitions that have been exercised by a test suite.

equivalence partitioning: A black box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle test cases are designed to cover each partition at least once.
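A short sketch of the technique, using a hypothetical discount rule (the rule and its thresholds are invented for illustration): one representative value is chosen per partition, on the assumption that behavior is uniform within each partition.

```python
# Hypothetical system under test: orders under 100 get no discount,
# 100-499 get 5%, 500 and above get 10%.
def discount_rate(order_total):
    if order_total < 100:
        return 0.00
    if order_total < 500:
        return 0.05
    return 0.10

# One representative test case per equivalence partition, so each
# partition is covered at least once.
representatives = {
    "below 100":     (50, 0.00),
    "100 to 499":    (250, 0.05),
    "500 and above": (750, 0.10),
}

for name, (value, expected) in representatives.items():
    assert discount_rate(value) == expected, name
```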

error: A human action that produces an incorrect result. [After IEEE 610]

error guessing: A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them.
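In practice this often takes the form of a checklist of inputs that experience suggests commonly expose defects. The helper function and the input list below are invented purely to illustrate the idea.

```python
# Hypothetical component under test: normalizes a user-entered code.
def normalize_code(text):
    return text.strip().upper()

# Error-guessing inputs: empty, whitespace-only, embedded spaces,
# non-ASCII, and very long strings often expose defects.
guessed_inputs = ["", "   ", " a b ", "ümlaut", "x" * 10_000]

for candidate in guessed_inputs:
    result = normalize_code(candidate)
    assert result == result.strip(), candidate  # no stray whitespace survives
```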

error seeding: See fault seeding.

error seeding tool: See fault seeding tool.

error tolerance: The ability of a system or component to continue normal operation despite the presence of erroneous inputs. [After IEEE 610]

evaluation: See testing.

exception handling: Behavior of a component or system in response to erroneous input, from either a human user or from another component or system, or to an internal failure.
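A minimal sketch of a component handling both kinds of erroneous input named in the definition; the function and its messages are illustrative assumptions, not prescribed by the glossary.

```python
# Illustrative component: parses a TCP port, handling erroneous input
# by raising a clear, specific exception instead of failing obscurely.
def parse_port(text):
    """Return a TCP port number, or raise ValueError with a clear message."""
    try:
        port = int(text)              # erroneous input: non-numeric text
    except ValueError:
        raise ValueError(f"not a number: {text!r}") from None
    if not 0 < port < 65536:          # erroneous input: out-of-range value
        raise ValueError(f"port out of range: {port}")
    return port
```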

executable statement: A statement which, when compiled, is translated into object code, and which will be executed procedurally when the program is running and may perform an action on data.

exercised: A program element is said to be exercised by a test case when the input value causes the execution of that element, such as a statement, decision, or other structural element.

exhaustive testing: A test approach in which the test suite comprises all combinations of input values and preconditions.
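Exhaustive testing is rarely practical because the suite size is the product of the input domain sizes. The tiny domains below are invented to show the combinatorial growth.

```python
# Illustrative domains only: even three small parameters multiply quickly.
from itertools import product

domains = {
    "browser": ["Firefox", "Chrome", "Safari"],
    "locale": ["en", "de", "fr", "ja"],
    "logged_in": [True, False],
}

# Every combination of input values: 3 * 4 * 2 = 24 test cases.
all_cases = list(product(*domains.values()))
```

Adding a fourth parameter with ten values would already push this to 240 cases, which is why techniques such as equivalence partitioning are used to select a manageable subset instead.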

exit criteria: The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used to report against and to plan when to stop testing. [After Gilb and Graham]

exit point: The last executable statement within a component.

expected outcome: See expected result.

expected result: The behavior predicted by the specification, or another source, of the component or system under specified conditions.

experience-based technique: See experience-based test design technique.

experience-based test design technique: Procedure to derive and/or select test cases based on the tester’s experience, knowledge and intuition.

exploratory testing: An informal test design technique where the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests. [After Bach]