pair programming: A software development approach whereby lines of code (production and/or test) of a component are written by two programmers sitting at a single computer. This implicitly means ongoing real-time code reviews are performed.
pair testing: Two persons, e.g. two testers, a developer and a tester, or an end-user and a tester, working together to find defects. Typically, they share one computer and trade control of it while testing.
pairwise testing: A black box test design technique in which test cases are designed to execute all possible discrete combinations of each pair of input parameters. See also orthogonal array testing.
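As an illustrative sketch (the parameter names and values below are invented for this entry), the 3-parameter, 2-value case shows the idea: the full cartesian product needs 8 tests, but 4 well-chosen tests already cover every pair of values.

```python
from itertools import combinations, product

# Hypothetical parameters, 2 values each (not from any real test object).
params = {
    "browser": ["Chrome", "Firefox"],
    "os": ["Linux", "Windows"],
    "locale": ["en", "de"],
}

def uncovered_pairs(tests, params):
    """Return the pairwise value combinations not exercised by `tests`."""
    names = list(params)
    required = set()
    # Every pair of parameters x every combination of their values.
    for a, b in combinations(names, 2):
        for va, vb in product(params[a], params[b]):
            required.add(((a, va), (b, vb)))
    # Remove each pair that some test in the suite exercises.
    for t in tests:
        for a, b in combinations(names, 2):
            required.discard(((a, t[a]), (b, t[b])))
    return required

# Four tests (half the cartesian product) covering all 12 value pairs.
suite = [
    {"browser": "Chrome",  "os": "Linux",   "locale": "en"},
    {"browser": "Chrome",  "os": "Windows", "locale": "de"},
    {"browser": "Firefox", "os": "Linux",   "locale": "de"},
    {"browser": "Firefox", "os": "Windows", "locale": "en"},
]
print(uncovered_pairs(suite, params))  # set() -> every pair is covered
```

Dropping any one of the four tests leaves some pair uncovered, which is why this suite is minimal for pairwise coverage here.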
partition testing: See equivalence partitioning. [Beizer]
pass: A test is deemed to pass if its actual result matches its expected result.
pass/fail criteria: Decision rules used to determine whether a test item (function) or feature has passed or failed a test. [IEEE 829]
path: A sequence of events, e.g. executable statements, of a component or system from an entry point to an exit point.
path coverage: The percentage of paths that have been exercised by a test suite. 100% path coverage implies 100% LCSAJ coverage.
path sensitizing: Choosing a set of input values to force the execution of a given path.
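A minimal sketch of path sensitizing (the function and values are invented for this entry): with two decisions there are up to four paths, and inputs are chosen to force exactly one of them.

```python
def classify(x, y):
    # Two independent decisions -> up to four paths through the function.
    if x > 0:
        r = "pos"
    else:
        r = "nonpos"
    if y % 2 == 0:
        r += "-even"
    else:
        r += "-odd"
    return r

# Sensitizing the path "x > 0 branch, then odd branch":
# any positive x combined with any odd y forces that exact path.
print(classify(5, 3))  # "pos-odd"
```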
path testing: A white box test design technique in which test cases are designed to execute paths.
peer review: A review of a software work product by colleagues of the producer of the product for the purpose of identifying defects and improvements. Examples are inspection, technical review and walkthrough.
performance: The degree to which a system or component accomplishes its designated functions within given constraints regarding processing time and throughput rate. [After IEEE 610] See also efficiency.
performance indicator: A high level metric of effectiveness and/or efficiency used to guide and control progressive development, e.g. lead-time slip for software development. [CMMI]
performance profiling: Definition of user profiles in performance, load and/or stress testing. Profiles should reflect anticipated or actual usage based on an operational profile of a component or system, and hence the expected workload. See also load profile, operational profile.
performance testing: The process of testing to determine the performance of a software product. See also efficiency testing.
performance testing tool: A tool to support performance testing and that usually has two main facilities: load generation and test transaction measurement. Load generation can simulate either multiple users or high volumes of input data. During execution, response time measurements are taken from selected transactions and these are logged. Performance testing tools normally provide reports based on test logs and graphs of load against response times.
phase test plan: A test plan that typically addresses one test phase. See also test plan.
pointer: A data item that specifies the location of another data item; for example, a data item that specifies the address of the next employee record to be processed. [IEEE 610]
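Python has no raw pointers, but its standard ctypes module can illustrate the concept; the employee record below is a hypothetical stand-in for the record in the definition's example.

```python
import ctypes

# A minimal "employee record" as a C-style struct (hypothetical example).
class Employee(ctypes.Structure):
    _fields_ = [("emp_id", ctypes.c_int)]

rec = Employee(42)
p = ctypes.pointer(rec)    # p specifies the location of rec
p.contents.emp_id = 99     # dereference: modify rec through the pointer
print(rec.emp_id)          # 99 -- the change is visible via the original name
```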
portability: The ease with which the software product can be transferred from one hardware or software environment to another. [ISO 9126]
portability testing: The process of testing to determine the portability of a software product.
postcondition: Environmental and state conditions that must be fulfilled after the execution of a test or test procedure.
post-execution comparison: Comparison of actual and expected results, performed after the software has finished running.
precondition: Environmental and state conditions that must be fulfilled before the component or system can be executed with a particular test or test procedure.
predicted outcome: See expected result.
pretest: See intake test.
priority: The level of (business) importance assigned to an item, e.g. defect.
probe effect: The effect on the component or system by the measurement instrument when the component or system is being measured, e.g. by a performance testing tool or monitor. For example, performance may be slightly worse when performance testing tools are being used.
problem: See defect.
problem management: See defect management.
problem report: See defect report.
procedure testing: Testing aimed at ensuring that the component or system can operate in conjunction with new or existing users’ business procedures or operational procedures.
process: A set of interrelated activities, which transform inputs into outputs. [ISO 12207]
process cycle test: A black box test design technique in which test cases are designed to execute business procedures and processes. [TMap] See also procedure testing.
process improvement: A program of activities designed to improve the performance and maturity of the organization’s processes, and the result of such a program. [CMMI]
product risk: A risk directly related to the test object. See also risk.
production acceptance testing: See operational acceptance testing.
program instrumenter: See instrumenter.
program testing: See component testing.
project: A unique set of coordinated and controlled activities with start and finish dates undertaken to achieve an objective conforming to specific requirements, including the constraints of time, cost and resources. [ISO 9000]
project risk: A risk related to management and control of the (test) project, e.g. lack of staffing, strict deadlines, changing requirements, etc. See also risk.
project test plan: See master test plan.
pseudo-random: A series which appears to be random but is in fact generated according to some prearranged sequence.
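A minimal sketch, assuming a linear congruential generator as the prearranged rule: the series looks random, yet the same seed always reproduces it exactly, which is what makes pseudo-random input useful for repeatable tests.

```python
from itertools import islice

def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: deterministic, yet 'random-looking'."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

run1 = list(islice(lcg(seed=1), 5))
run2 = list(islice(lcg(seed=1), 5))
print(run1 == run2)  # True: same seed -> the same "pseudo-random" series
```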