Following is a brief list of terms commonly used in testing.
Test Plan
Documentation specifying the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, responsibilities, required resources, and any risks requiring contingency planning. See: test design, validation protocol.
Test Case
Documentation specifying inputs, predicted results, and a set of execution conditions for a test item.
Traceability Matrix
A matrix that records the relationship between two or more products; e.g., a matrix that records the relationship between the requirements and the design of a given software component. See: traceability, traceability analysis.
Testing
An activity in which a system or component is executed under specified conditions, the results are observed or recorded, and an evaluation is made of some aspect of the system or component.
Bug
A fault in a program which causes the program to perform in an unintended or unanticipated manner.
Acceptance Testing
Testing conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to determine whether or not to accept the system.
Alpha Testing
Acceptance testing performed by the customer in a controlled environment at the developer's site. The software is used by the customer in a setting approximating the target environment, with the developer observing and recording errors and usage problems.
Boundary Value Analysis
A selection technique in which test data are chosen to lie along "boundaries" of the input domain [or output range] classes, data structures, procedure parameters, etc. Choices often include maximum, minimum, and trivial values or parameters.
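As a minimal sketch of this technique (the function `classify_age` and its valid range 0–120 are assumptions made up for illustration, not taken from the definition above), test data are chosen at, just inside, and just outside each boundary of the input domain:

```python
def classify_age(age):
    # Hypothetical function under test: valid ages are 0..120 inclusive.
    if not 0 <= age <= 120:
        raise ValueError("age out of range")
    return "minor" if age < 18 else "adult"

# Boundary value analysis: pick values lying along each "boundary" of the
# input domain (0, 18, and 120 here), plus the values just outside them.
boundary_inputs = [-1, 0, 1, 17, 18, 19, 119, 120, 121]

for age in boundary_inputs:
    try:
        print(age, "->", classify_age(age))
    except ValueError as exc:
        print(age, "-> rejected:", exc)
```

Note that the interesting failures (an off-by-one in `age < 18`, or accepting 121) would surface only at these boundary values, which is why they are preferred over arbitrary mid-range inputs.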
Beta Testing
Acceptance testing performed by the customer in a live application of the software, at one or more end user sites, in an environment not controlled by the developer.
Compatibility Testing
The process of determining the ability of two or more systems to exchange information. In a situation where the developed software replaces an already working program, an investigation should be conducted to assess possible compatibility problems between the new software and other programs or systems.
Code Review
A meeting at which software code is presented to project personnel, managers, users, customers, or other interested parties for comment or approval.
Code Walkthrough
A manual testing [error detection] technique where program [source code] logic [structure] is traced manually [mentally] by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.
Crash
The sudden and complete failure of a computer system or component.
Criticality
The degree of impact that a requirement, module, error, fault, failure, or other item has on the development or operation of a system.
Exhaustive Testing
Executing the program with all possible combinations of values for program variables. This type of testing is feasible only for small, simple programs.
Functional Testing (Black-Box Testing)
(1) Testing that ignores the internal mechanism or structure of a system or component and focuses on the outputs generated in response to selected inputs and execution conditions. (2) Testing conducted to evaluate the compliance of a system or component with specified functional requirements and corresponding predicted results.
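As a small sketch, a black-box test exercises only the specified input/output behaviour and never inspects the implementation; here the standard-library `statistics.median` stands in for the system under test, with hand-picked inputs and predicted results:

```python
import statistics

# Black-box testing: the test knows only the specification
# (selected inputs -> predicted outputs), not the internal structure.
cases = [
    ([1], 1),            # single element
    ([1, 3, 2], 2),      # odd-length, unsorted input
    ([1, 2, 3, 4], 2.5), # even-length: mean of the two middle values
]

for data, expected in cases:
    assert statistics.median(data) == expected

print("all black-box cases passed")
```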
Integration Testing
An orderly progression of testing in which software elements, hardware elements, or both are combined and tested, to evaluate their interactions, until the entire system has been integrated.
Interface Testing
Testing conducted to evaluate whether systems or components pass data and control correctly to one another.
Performance Testing
Functional testing conducted to evaluate the compliance of a system or component with specified performance requirements.
Quality Assurance
(1) The planned systematic activities necessary to ensure that a component, module, or system conforms to established technical requirements. (2) All actions that are taken to ensure that a development organization delivers products that meet performance requirements and adhere to standards and procedures.
Quality Control
The operational techniques and procedures used to achieve quality requirements.
Regression Testing
Rerunning test cases which a program has previously executed correctly in order to detect errors spawned by changes or corrections made during software development and maintenance.
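A minimal sketch of the idea, assuming a hypothetical `slugify` function under maintenance: the previously passing cases are kept in a suite and rerun after every change or correction, so errors spawned by the change are caught immediately.

```python
def slugify(title):
    # Hypothetical function being maintained; each modification to it
    # must leave the regression suite below passing.
    return title.strip().lower().replace(" ", "-")

# Cases the program has previously executed correctly.
REGRESSION_SUITE = [
    ("Hello World", "hello-world"),
    ("  Trim Me  ", "trim-me"),
    ("already-slugged", "already-slugged"),
]

def run_regression_suite():
    """Rerun every stored case; return the list of failures."""
    return [(title, expected, slugify(title))
            for title, expected in REGRESSION_SUITE
            if slugify(title) != expected]

assert run_regression_suite() == []  # rerun after each change
```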
Review
A process or meeting during which a work product, or set of work products, is presented to project personnel, managers, users, customers, or other interested parties for comment or approval. Types include code review, design review, formal qualification review, requirements review, and test readiness review.
Risk
A measure of the probability and severity of undesired effects.
Stress Testing
Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements.
Structural Testing (White-Box Testing)
(1) Testing that takes into account the internal mechanism [structure] of a system or component. Types include branch testing, path testing, and statement testing. (2) Testing to ensure each program statement is made to execute during testing and that each program statement performs its intended function.
System Testing
The process of testing an integrated hardware and software system to verify that the system meets its specified requirements. Such testing may be conducted in both the development environment and the target environment.
Test Log
A chronological record of all relevant details about the execution of a test.
Test Report
A document describing the conduct and results of the testing carried out for a system or system component.
Unit Testing
Testing of a module for typographic, syntactic, and logical errors, for correct implementation of its design, and for satisfaction of its requirements; or testing conducted to verify the implementation of the design for one software element, e.g., a unit or module, or a collection of software elements.
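A unit-testing sketch using Python's standard `unittest` module, with a hypothetical `percentage` function standing in for the single software element being verified against its design and requirements:

```python
import unittest

def percentage(part, whole):
    # Hypothetical unit under test: express part of whole as a percentage.
    return 100.0 * part / whole

class PercentageTest(unittest.TestCase):
    # Unit testing: one software element is verified in isolation.
    def test_typical_value(self):
        self.assertEqual(percentage(1, 4), 25.0)

    def test_zero_part(self):
        self.assertEqual(percentage(0, 7), 0.0)

    def test_zero_whole_raises(self):
        with self.assertRaises(ZeroDivisionError):
            percentage(1, 0)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(PercentageTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

In a real project each module would ship with such a test class, typically discovered and run via `python -m unittest`.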
Usability Testing
Tests designed to evaluate the machine/user interface.
Validation
Establishing documented evidence which provides a high degree of assurance that a specific process will consistently produce a product meeting its predetermined specifications and quality attributes.
Validation, Verification and Testing
Used as an entity to define a procedure of review, analysis, and testing throughout the software life cycle to discover errors, determine functionality, and ensure the production of quality software.
Volume Testing
Testing designed to challenge a system's ability to manage the maximum amount of data over a period of time. This type of testing also evaluates a system's ability to handle overload situations in an orderly fashion.