Saturday, June 19, 2010

A Few Important Glossary Terms

Acceptance testing: Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.

Adaptability: The capability of the software product to be adapted for different specified environments without applying actions or means other than those provided for this purpose for the software considered.

Audit: An independent evaluation of software products or processes to ascertain compliance to standards, guidelines, specifications, and/or procedures based on objective criteria, including documents that specify:
(1) the form or content of the products to be produced
(2) the process by which the products shall be produced
(3) how compliance to standards or guidelines shall be measured.


Audit trail: A path by which the original input to a process (e.g. data) can be traced back through the  process, taking the process output as a starting point. This facilitates defect analysis and allows a process audit to be carried out.

Bespoke software: Software developed specifically for a set of users or customers. The opposite is off-the-shelf software.

Beta testing: Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the  user/customer needs and fits within the business processes. Beta testing is often employed as a form of external acceptance testing for off-the-shelf software in order to acquire feedback from the market.

Black-box test design technique: Procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.
 
Blocked test case: A test case that cannot be executed because the preconditions for its execution are not fulfilled.

Bottom-up testing: An incremental approach to integration testing where the lowest level components are tested first, and then used to facilitate the testing of higher level components. This process is repeated until the component at the top of the hierarchy is tested. See also integration testing.

Boundary value: An input value or output value which is on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge, for example the minimum or maximum value of a range.

Boundary value analysis: A black box test design technique in which test cases are designed based on boundary values. See also boundary value.
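
A small illustration (a hypothetical sketch: the is_valid_quantity function and the 1 to 100 range are made up for this example) of how test cases are derived from the boundaries of a valid range:

# Hypothetical system under test: an input field that accepts whole numbers
# from 1 to 100 inclusive.
def is_valid_quantity(value):
    return 1 <= value <= 100

# Boundary value test cases: the edges of the valid range and the values
# at the smallest incremental distance on either side of each edge.
boundary_cases = [
    (0, False),    # just below the lower boundary
    (1, True),     # lower boundary
    (2, True),     # just above the lower boundary
    (99, True),    # just below the upper boundary
    (100, True),   # upper boundary
    (101, False),  # just above the upper boundary
]

for value, expected in boundary_cases:
    assert is_valid_quantity(value) == expected, value
print("all boundary value tests passed")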


Boundary value coverage: The percentage of boundary values that have been exercised by a test suite.

Business process-based testing: An approach to testing in which test cases are designed based on descriptions and/or knowledge of business processes.

Capability Maturity Model (CMM): A five level staged framework that describes the key elements of an effective software process. The Capability Maturity Model covers best practices for planning, engineering and managing software development and maintenance.

Capability Maturity Model Integration (CMMI): A framework that describes the key elements of an effective product development and maintenance process. The Capability Maturity Model Integration covers best practices for planning, engineering and managing product development and maintenance.

Concurrency testing: Testing to determine how the occurrence of two or more activities within the same interval of time, achieved either by interleaving the activities or by simultaneous execution, is handled by the component or system.

Condition: A logical expression that can be evaluated as True or False, e.g. A>B. See also test condition.

Condition coverage: The percentage of condition outcomes that have been exercised by a test suite. 100% condition coverage requires each single condition in every decision statement to be tested as True and False.
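
A quick illustration (hypothetical code: the accept function and its two conditions are made up for this example) of why 100% condition coverage does not guarantee full decision coverage:

# Hypothetical decision built from two single conditions, (a > 0) and (b > 0).
def accept(a, b):
    condition_1 = a > 0
    condition_2 = b > 0
    if condition_1 and condition_2:   # the decision
        return "accepted"
    return "rejected"

# Test 1: condition_1 is True,  condition_2 is False -> decision is False
# Test 2: condition_1 is False, condition_2 is True  -> decision is False
for a, b in [(1, -1), (-1, 1)]:
    print(a, b, accept(a, b))

# Each single condition has now been evaluated to both True and False, so
# condition coverage is 100%, yet the decision never evaluated to True.
# An extra test such as (1, 1) would be needed for full decision coverage.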

Condition determination coverage: The percentage of all single condition outcomes that independently affect a decision outcome that have been exercised by a test case suite. 100% condition determination coverage implies 100% decision condition coverage.

Condition determination testing: A white box test design technique in which test cases are designed to execute single condition outcomes that independently affect a decision outcome.

Condition testing: A white box test design technique in which test cases are designed to execute condition outcomes.

Condition outcome: The evaluation of a condition to True or False.

Configuration management: A discipline applying technical and administrative direction and surveillance to: identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status, and verify compliance with specified requirements.

Configuration management tool: A tool that provides support for the identification and control of configuration items, their status over changes and versions, and the release of baselines consisting of configuration items.

Conversion testing: Testing of software used to convert data from existing systems for use in replacement systems.

Cost of quality: The total costs incurred on quality activities and issues and often split into prevention costs, appraisal costs, internal failure costs and external failure costs.

COTS: Acronym for Commercial Off-The-Shelf software.

Coverage: The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite.

Coverage analysis: Measurement of achieved coverage to a specified coverage item during test execution referring to predetermined criteria to determine whether additional testing is required and if so, which test cases are needed.


Cyclomatic complexity: The number of independent paths through a program. Cyclomatic complexity is defined as: L – N + 2P, where
- L = the number of edges/links in a graph
- N = the number of nodes in a graph
- P = the number of disconnected parts of the graph (e.g. a called graph and a subroutine) [After McCabe]
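
As a worked sketch (the graph below is a hypothetical control flow graph for a single if/else statement), the formula can be applied directly to an edge list:

# Hypothetical control flow graph of a single if/else statement.
# Nodes: D (decision), T (then branch), E (else branch), M (merge point).
edges = [("D", "T"), ("D", "E"), ("T", "M"), ("E", "M")]

nodes = {node for edge in edges for node in edge}
L = len(edges)   # number of edges/links in the graph
N = len(nodes)   # number of nodes in the graph
P = 1            # one connected part (no called subroutine graphs)

print(L - N + 2 * P)   # prints 2: two independent paths through the if/else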


Data driven testing: A scripting technique that stores test input and expected results in a table or spreadsheet, so that a single control script can execute all of the tests in the table. Data driven testing is often used to  support the application of test execution tools such as capture/playback tools.
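
A minimal sketch of the idea, assuming a hypothetical discount function as the test object and an in-memory CSV standing in for the spreadsheet:

import csv, io

# Hypothetical system under test: 10% discount for orders of 100 or more.
def discount(order_total):
    return order_total * 0.9 if order_total >= 100 else order_total

# The test inputs and expected results live in a table; here an in-memory
# CSV stands in for the spreadsheet.
table = io.StringIO("input,expected\n50,50\n100,90\n200,180\n")

# A single control script executes every row of the table.
for row in csv.DictReader(table):
    actual = discount(float(row["input"]))
    assert actual == float(row["expected"]), row
print("all data driven tests passed")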

Data flow testing: A white box test design technique in which test cases are designed to execute definition and use pairs of variables.

Decision: A program point at which the control flow has two or more alternative routes. A node with two or more links to separate branches.

Decision condition coverage: The percentage of all condition outcomes and decision outcomes that have been exercised by a test suite. 100% decision condition coverage implies both 100% condition coverage and 100% decision coverage.

Decision condition testing: A white box test design technique in which test cases are designed to execute condition outcomes and decision outcomes.

Decision coverage: The percentage of decision outcomes that have been exercised by a test suite. 100%  decision coverage implies both 100% branch coverage and 100% statement coverage.

Decision outcome: The result of a decision (which therefore determines the branches to be taken).

Decision table: A table showing combinations of inputs and/or stimuli (causes) with their associated outputs and/or actions (effects), which can be used to design test cases.

Decision table testing: A black box test design technique in which test cases are designed to execute the  combinations of inputs and/or stimuli (causes) shown in a decision table. [Veenendaal] See also decision table.
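
A minimal sketch, assuming a hypothetical loan-approval rule with two conditions; each row of the decision table becomes one test case:

# Hypothetical business rule: approve a loan only when the applicant is
# employed AND has a good credit score.
def approve_loan(employed, good_credit):
    return employed and good_credit

# Decision table: every combination of the causes with the expected effect;
# each row is executed as one test case.
decision_table = [
    # employed, good_credit, expected approval
    (True,  True,  True),
    (True,  False, False),
    (False, True,  False),
    (False, False, False),
]

for employed, good_credit, expected in decision_table:
    assert approve_loan(employed, good_credit) == expected
print("all decision table combinations tested")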

Decision testing: A white box test design technique in which test cases are designed to execute decision outcomes.

Defect: A flaw in a component or system that can cause the component or system to fail to perform its  required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.

Defect based test design technique: A procedure to derive and/or select test cases targeted at one or more defect categories, with tests being developed from what is known about the specific defect category. See also defect taxonomy.

Defect density: The number of defects identified in a component or system divided by the size of the component or system (expressed in standard measurement terms, e.g. lines-of-code, number of classes or function points).
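
For example (hypothetical figures), a component of 20,000 lines of code in which 50 defects were identified has a defect density of 50 / 20 = 2.5 defects per KLOC.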

Defect Detection Percentage (DDP): The number of defects found by a test phase, divided by the number found by that test phase and any other means afterwards.
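
For example (hypothetical figures), if system testing finds 90 defects and 10 further defects are found by later phases and in the field, the DDP of system testing is 90 / (90 + 10) = 90%.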

Defect management: The process of recognizing, investigating, taking action and disposing of defects. It  involves recording defects, classifying them and identifying the impact. [After IEEE 1044]


Defect management tool: A tool that facilitates the recording and status tracking of defects and changes. They often have workflow-oriented facilities to track and control the allocation, correction and re-testing of defects and provide reporting facilities. See also incident management tool.

Defect masking: An occurrence in which one defect prevents the detection of another. [After IEEE 610]

Defect report: A document reporting on any flaw in a component or system that can cause the component or system to fail to perform its required function. [After IEEE 829]

Defect taxonomy: A system of (hierarchical) categories designed to be a useful aid for reproducibly classifying defects.


Desk checking: Testing of software or specification by manual simulation of its execution.

Development testing: Formal or informal testing conducted during the implementation of a component or system, usually in the development environment by developers. [After IEEE 610]

Domain: The set from which valid input and/or output values can be selected.

Driver: A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system. [After TMap]

Dynamic analysis: The process of evaluating behavior, e.g. memory performance, CPU usage, of a system or component during execution. [After IEEE 610]


Elementary comparison testing: A black box test design technique in which test cases are designed to execute combinations of inputs using the concept of condition determination coverage. [TMap]

Entry criteria: The set of generic and specific conditions for permitting a process to go forward with a defined task, e.g. test phase. The purpose of entry criteria is to prevent a task from starting which would entail more (wasted) effort compared to the effort needed to remove the failed entry criteria. [Gilb and Graham]

Entry point: The first executable statement within a component.

Equivalence partition: A portion of an input or output domain for which the behavior of a component or system is assumed to be the same, based on the specification.

Equivalence partition coverage: The percentage of equivalence partitions that have been exercised by a test suite.

Equivalence partitioning: A black box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle test cases are designed to cover each partition at least once.
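
A minimal sketch, assuming a hypothetical eligibility check for ages 18 to 65; one representative value is chosen from each partition:

# Hypothetical system under test: an age field that accepts 18 to 65.
def is_eligible(age):
    return 18 <= age <= 65

# Three equivalence partitions, each represented by a single value.
partitions = [
    ("below valid range", 10, False),
    ("valid range",       30, True),
    ("above valid range", 70, False),
]

for name, representative, expected in partitions:
    assert is_eligible(representative) == expected, name
print("one representative per equivalence partition tested")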

Error: A human action that produces an incorrect result. [After IEEE 610]

Error guessing: A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them.

Exhaustive testing: A test approach in which the test suite comprises all combinations of input values and preconditions.

Exit criteria: The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a  process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used to report against and to plan when to stop testing. [After Gilb and Graham]


Fail: A test is deemed to fail if its actual result does not match its expected result.

Failure: Deviation of the component or system from its expected delivery, service or result. [After Fenton]

Failure mode: The physical or functional manifestation of a failure. For example, a system in failure mode may be characterized by slow operation, incorrect outputs, or complete termination of execution. [IEEE 610]

Failure Mode and Effect Analysis (FMEA): A systematic approach to risk identification and analysis of identifying possible modes of failure and attempting to prevent their occurrence. See also Failure Mode, Effect and Criticality Analysis (FMECA).

Failure Mode, Effect and Criticality Analysis (FMECA): An extension of FMEA, as in addition to the basic FMEA, it includes a criticality analysis, which is used to chart the probability of failure modes against the severity of their consequences. The result highlights failure modes with relatively high probability and severity of consequences, allowing remedial effort to be directed where it will produce the greatest value. See also Failure Mode and Effect Analysis (FMEA).

Failure rate: The ratio of the number of failures of a given category to a given unit of measure, e.g.  failures per unit of time, failures per number of transactions, failures per number of computer runs. [IEEE 610]

False-fail result: A test result in which a defect is reported although no such defect actually exists in the test object.


Fault seeding: The process of intentionally adding known defects to those already in the component or system for the purpose of monitoring the rate of detection and removal, and estimating the number of remaining defects. [IEEE 610]

Fault seeding tool: A tool for seeding (i.e. intentionally inserting) faults in a component or system.

Fault tolerance: The capability of the software product to maintain a specified level of performance in cases of software faults (defects) or of infringement of its specified interface. [ISO 9126]

Fault Tree Analysis (FTA): A technique used to analyze the causes of faults (defects). The technique visually models how logical relationships between failures, human errors, and external events can combine to cause specific faults.

Frozen test basis: A test basis document that can only be amended by a formal change control process.

Function Point Analysis (FPA): Method aiming to measure the size of the functionality of an information system. The measurement is independent of the technology. This measurement may be used as a basis for the measurement of productivity, the estimation of the needed resources, and project control.

Functional integration: An integration approach that combines the components or systems for the purpose of getting a basic functionality working early. See also integration testing.

Functional requirement: A requirement that specifies a function that a component or system must perform. [IEEE 610]

Functional test design technique: Procedure to derive and/or select test cases based on an analysis of the specification of the functionality of a component or system without reference to its internal structure.

Functional testing: Testing based on an analysis of the specification of the functionality of a component or system.

Horizontal traceability: The tracing of requirements for a test level through the layers of test documentation  (e.g. test plan, test design specification, test case specification and test procedure specification or test script).


Impact analysis: The assessment of change to the layers of development documentation, test documentation and components, in order to implement a given change to specified requirements.

Incident: Any event occurring that requires investigation. [After IEEE 1008]

Incident logging: Recording the details of any incident that occurred, e.g. during testing.

Incident management: The process of recognizing, investigating, taking action and disposing of incidents. It  involves logging incidents, classifying them and identifying the impact. [After IEEE 1044]


Incremental testing: Testing where components or systems are integrated and tested one or some at a time, until all the components or systems are integrated and tested.

Independence of testing: Separation of responsibilities, which encourages the accomplishment of objective testing. [After DO-178b]

Informal review: A review not based on a formal (documented) procedure.

Inspection: A type of peer review that relies on visual examination of documents to detect defects, e.g. violations of development standards and non-conformance to higher level documentation. The most formal review technique and therefore always based on a documented procedure. [After IEEE 610, IEEE 1028]

Iterative development model: A development life cycle where a project is broken into a usually large number of iterations. An iteration is a complete development loop resulting in a release (internal or external) of an executable product, a subset of the final product under development, which grows from iteration to iteration to become the final product.

Keyword driven testing: A scripting technique that uses data files to contain not only test data and expected results, but also keywords related to the application being tested. The keywords are interpreted by special supporting scripts that are called by the control script for the test.
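
A minimal sketch of the idea; the keywords, their arguments and the account functions below are all hypothetical, but real keyword driven frameworks dispatch in a similar way:

# Hypothetical supporting scripts for the application being tested.
def open_account(name):
    return {"name": name, "balance": 0}

def deposit(account, amount):
    account["balance"] += int(amount)

def check_balance(account, expected):
    assert account["balance"] == int(expected), account

# The "data file": each row is a keyword plus its test data.
test_steps = [
    ("open_account", "alice"),
    ("deposit", "100"),
    ("deposit", "50"),
    ("check_balance", "150"),
]

# Control script: interprets each keyword by calling its supporting script.
account = None
for keyword, argument in test_steps:
    if keyword == "open_account":
        account = open_account(argument)
    elif keyword == "deposit":
        deposit(account, argument)
    elif keyword == "check_balance":
        check_balance(account, argument)
print("keyword driven test passed")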

LCSAJ: A Linear Code Sequence And Jump, consisting of the following three items (conventionally identified by line numbers in a source code listing): the start of the linear sequence of executable statements, the end of the linear sequence, and the target line to which control flow is transferred at the end of the linear sequence.

LCSAJ coverage: The percentage of LCSAJs of a component that have been exercised by a test suite. 100% LCSAJ coverage implies 100% decision coverage.

LCSAJ testing: A white box test design technique in which test cases are designed to execute LCSAJs.

Management review: A systematic evaluation of software acquisition, supply, development, operation, or maintenance process, performed by or on behalf of management that monitors progress, determines the status of plans and schedules, confirms requirements and their system allocation, or evaluates the effectiveness of management approaches to achieve fitness for purpose. [After IEEE 610, IEEE 1028]

Mutation analysis: A method to determine test suite thoroughness by measuring the extent to which a test  suite can discriminate the program from slight variants (mutants) of the program.

Off-the-shelf software: A software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format.

Path coverage: The percentage of paths that have been exercised by a test suite. 100% path coverage implies 100% LCSAJ coverage.

Peer review: A review of a software work product by colleagues of the producer of the product for the purpose of identifying defects and improvements. Examples are inspection, technical review and walkthrough.

Performance: The degree to which a system or component accomplishes its designated functions  within  given constraints regarding processing time and throughput rate. [After IEEE 610]

Priority: The level of (business) importance assigned to an item, e.g. defect.

Probe effect: The effect on the component or system by the measurement instrument when the component or system is being measured, e.g. by a performance testing tool or monitor. For example performance may be slightly worse when performance testing tools are being used.

Project: A project is a unique set of coordinated and controlled activities with start and finish dates  undertaken  to achieve an objective conforming to specific requirements, including the constraints of time, cost and resources. [ISO 9000]

Project risk: A risk related to management and control of the (test) project, e.g. lack of staffing, strict deadlines, changing requirements, etc.

Pseudo-random: A series which appears to be random but is in fact generated according to some prearranged sequence.

Quality: The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations. [After IEEE 610]

Quality assurance: Part of quality management focused on providing confidence that quality requirements will be fulfilled. [ISO 9000]

Quality attribute: A feature or characteristic that affects an item’s quality. [IEEE 610]

Quality management: Coordinated activities to direct and control an organization with regard to quality. Direction and control with regard to quality generally includes the establishment of the quality policy and quality objectives, quality planning, quality control, quality assurance and quality improvement. [ISO 9000]

Regression testing: Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is  performed when the software or its environment is changed.

Release note: A document identifying test items, their configuration, current status and other delivery information delivered by development to testing, and possibly other stakeholders, at the start of a test execution phase. [After IEEE 829]

Requirement: A condition or capability needed by a user to solve a problem or achieve an objective that  must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document. [After IEEE 610]

Requirements-based testing: An approach to testing in which test cases are designed based on test  objectives and test conditions derived from requirements, e.g. tests that exercise specific functions or probe non-functional attributes such as reliability or usability.

Requirements management tool: A tool that supports the recording of requirements, requirements attributes (e.g. priority, knowledge responsible) and annotation, and facilitates traceability through layers of requirements and requirements change management. Some requirements management tools also provide facilities for static analysis, such as consistency checking and violations to pre-defined requirements rules.

Requirements phase: The period of time in the software life cycle during which the requirements for a software product are defined and documented. [IEEE 610]

Re-testing: Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.

Retrospective meeting: A meeting at the end of a project during which the project team members evaluate the project and learn lessons that can be applied to the next project.

Review: An evaluation of a product or project status to ascertain discrepancies from planned results and to recommend improvements. Examples include management review, informal review, technical review, inspection, and walkthrough. [After IEEE 1028]

Reviewer: The person involved in the review that identifies and describes anomalies in the product or project under review. Reviewers can be chosen to represent different viewpoints and roles in the review process.

Risk: A factor that could result in future negative consequences; usually expressed as impact and likelihood.

Risk analysis: The process of assessing identified risks to estimate their impact and probability of occurrence (likelihood).

Risk-based testing: An approach to testing to reduce the level of product risks and inform stakeholders on  their status, starting in the initial stages of a project. It involves the identification of product risks and their use in guiding the test process.

Risk control: The process through which decisions are reached and protective measures are implemented for reducing risks to, or maintaining risks within, specified levels.

Risk identification: The process of identifying risks using techniques such as brainstorming, checklists and failure history.

Risk level: The importance of a risk as defined by its characteristics impact and likelihood. The level of risk can be used to determine the intensity of testing to be performed. A risk level can be expressed either qualitatively (e.g. high, medium, low) or quantitatively.

Risk management: Systematic application of procedures and practices to the tasks of identifying, analyzing, prioritizing, and controlling risk.

Risk type: A specific category of risk related to the type of testing that can mitigate (control) that category. For example the risk of user-interactions being misunderstood can be mitigated by usability testing.

Root cause: A source of a defect such that if it is removed, the occurrence of the defect type is decreased or removed. [CMMI]

Root cause analysis: An analysis technique aimed at identifying the root causes of defects. By directing  corrective measures at root causes, it is hoped that the likelihood of defect recurrence will be minimized.

Safety: The capability of the software product to achieve acceptable levels of risk of harm to people, business, software, property or the environment in a specified context of use. [ISO9126]

Scalability: The capability of the software product to be upgraded to accommodate increased loads. [After Gerrard]

Scalability testing: Testing to determine the scalability of the software product.

Scribe: The person who records each defect mentioned and any suggestions for process improvement during a review meeting, on a logging form. The scribe has to ensure that the logging form is readable and understandable.

Security: Attributes of a software product that bear on its ability to prevent unauthorized access, whether accidental or deliberate, to programs and data. [ISO 9126]

Severity: The degree of impact that a defect has on the development or operation of a component or system. [After IEEE 610]

Site acceptance testing: Acceptance testing by users/customers at their site, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes, normally including hardware as well as software.

Smoke test: A subset of all defined/planned test cases that cover the main functionality of a component or system, to ascertain that the most crucial functions of a program work, but not bothering with finer details. A daily build and smoke test is among industry best practices.

Software life cycle: The period of time that begins when a software product is conceived and ends when the  software is no longer available for use. The software life cycle typically includes a concept phase,  requirements phase, design phase, implementation phase, test phase, installation and checkout phase, operation and maintenance phase, and sometimes, retirement phase. Note these phases may overlap or be performed iteratively.

Software quality: The totality of functionality and features of a software product that bear on its ability to satisfy stated or implied needs. [After ISO 9126]

Specification: A document that specifies, ideally in a complete, precise and verifiable manner, the requirements, design, behavior, or other characteristics of a component or system, and, often, the procedures for determining whether these provisions have been satisfied. [After IEEE 610]

State diagram: A diagram that depicts the states that a component or system can assume, and shows the events or circumstances that cause and/or result from a change from one state to another. [IEEE 610]

State table: A grid showing the resulting transitions for each state combined with each possible event, showing both valid and invalid transitions.

State transition: A transition between two states of a component or system.

State transition testing: A black box test design technique in which test cases are designed to execute valid and invalid state transitions. See also N-switch testing.
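
A minimal sketch, assuming a hypothetical order workflow; the state table drives tests for both valid and invalid transitions:

# Hypothetical state table: (state, event) -> next state; missing entries
# are invalid transitions.
state_table = {
    ("new", "pay"): "paid",
    ("paid", "ship"): "shipped",
    ("shipped", "deliver"): "delivered",
}

def next_state(state, event):
    # Assumed system under test: rejects invalid transitions.
    try:
        return state_table[(state, event)]
    except KeyError:
        raise ValueError(f"invalid transition: {state} -> {event}")

# Valid transition test: walk a sequence of valid events.
state = "new"
for event in ["pay", "ship", "deliver"]:
    state = next_state(state, event)
assert state == "delivered"

# Invalid transition test: shipping an unpaid order must be rejected.
try:
    next_state("new", "ship")
    raise AssertionError("invalid transition was accepted")
except ValueError:
    print("valid and invalid state transitions tested")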

Statement: An entity in a programming language, which is typically the smallest indivisible unit of execution.

Statement coverage: The percentage of executable statements that have been exercised by a test suite.

Statement testing: A white box test design technique in which test cases are designed to execute statements.

Static testing: Testing of a component or system at specification or implementation level without execution of that software, e.g. reviews or static code analysis.

Stress testing: A type of performance testing conducted to evaluate a system or component at or beyond the limits of its anticipated or specified work loads, or with reduced availability of resources such as access to memory or servers. [After IEEE 610]

Stub: A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component. [After IEEE 610]
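
A minimal sketch, assuming a hypothetical currency-conversion component that depends on an exchange-rate service; the stub replaces the called service with a canned answer:

# Hypothetical component under test: converts an amount using an
# exchange-rate service it depends on.
def convert_to_euro(amount, currency, service):
    return amount * service.rate(currency)

# Stub: a skeletal replacement for the real (unavailable or slow) service;
# it simply returns a canned rate.
class ExchangeRateServiceStub:
    def rate(self, currency):
        return 1.25

assert convert_to_euro(100, "USD", ExchangeRateServiceStub()) == 125.0
print("component tested in isolation using a stub")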

Suspension criteria: The criteria used to (temporarily) stop all or a portion of the testing activities on the test items. [After IEEE 829]

System of systems: Multiple heterogeneous, distributed systems that are embedded in networks at multiple levels and in multiple domains, interconnected, addressing large-scale inter-disciplinary common problems and purposes.

System integration testing: Testing the integration of systems and packages; testing interfaces to external organizations (e.g. Electronic Data Interchange, Internet).

Technical review: A peer group discussion activity that focuses on achieving consensus on the technical approach to be taken. [Gilb and Graham, IEEE 1028]

Test: A set of one or more test cases. [IEEE 829]

Test approach: The implementation of the test strategy for a specific project. It typically includes the  decisions made that follow based on the (test) project’s goal and the risk assessment carried out, starting points regarding the test process, the test design techniques to be applied, exit criteria and test types to be performed.

Test automation: The use of software to perform or support test activities, e.g. test management, test design, test execution and results checking.

Test basis: All documents from which the requirements of a component or system can be inferred. The  documentation on which the test cases are based. If a document can be amended only by way of formal amendment procedure, then the test basis is called a frozen test basis. [After TMap]

Test case: A set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement. [After IEEE 610]

Test Maturity Model (TMM): A five level staged framework for test process improvement, related to the  Capability Maturity Model (CMM), that describes the key elements of an effective test process.

Test Maturity Model Integrated (TMMi): A five level staged framework for test process improvement, related to the Capability Maturity Model Integration (CMMI), that describes the key elements of an effective test process.

Test oracle: A source to determine expected results to compare with the actual result of the software under test. An oracle may be the existing system (for a benchmark), a user manual, or an individual's specialized knowledge, but should not be the code. [After Adrion]
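
A minimal sketch, assuming a hypothetical fast_sort function as the software under test; Python's built-in sorted acts as the oracle, a trusted reference that supplies the expected results:

import random

# Hypothetical software under test: a hand-written insertion sort.
def fast_sort(values):
    result = []
    for v in values:
        i = 0
        while i < len(result) and result[i] < v:
            i += 1
        result.insert(i, v)
    return result

# Oracle: the built-in sorted() supplies the expected result for each input
# (note the oracle is not the code under test itself).
for _ in range(100):
    data = [random.randint(-50, 50) for _ in range(20)]
    assert fast_sort(data) == sorted(data), data
print("all results matched the oracle")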

Test plan: A document describing the scope, approach, resources and schedule of intended test activities. It identifies amongst others test items, the features to be tested, the testing tasks, who will do each task, degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used, and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process. [After IEEE 829]

Test planning: The activity of establishing or updating a test plan.

Test policy: A high level document describing the principles, approach and major objectives of the organization regarding testing.

Test Point Analysis (TPA): A formula based test estimation method based on function point analysis. [TMap]

Test procedure specification: A document specifying a sequence of actions for the execution of a test. Also known as test script or manual test script. [After IEEE 829]

Test process:  The fundamental test process comprises test planning and control, test analysis and design, test implementation and execution, evaluating exit criteria and reporting, and test closure activities.

Test Process Improvement (TPI): A continuous framework for test process improvement that describes the key elements of an effective test process, especially targeted at system testing and acceptance testing.

Test progress report: A document summarizing testing activities and results, produced at regular intervals,  to report progress of testing activities against a baseline (such as the original  test  plan)  and  to  communicate  risks  and  alternatives  requiring  a  decision  to management.

Test reproducibility: An attribute of a test indicating whether the same results are produced each time the test is executed.

Test schedule: A list of activities, tasks or events of the test process, identifying their intended start and finish dates and/or times, and interdependencies.

Test script: Commonly used to refer to a test procedure specification, especially an automated one.

Test session: An uninterrupted period of time spent in executing tests. In exploratory testing, each test session is focused on a charter, but testers can also explore new opportunities or issues during a session. The tester creates and executes test cases on the fly and records their progress.

Test specification: A document that consists of a test design specification, test case specification and/or test procedure specification.

Test strategy: A high-level description of the test levels to be performed and the testing within those levels for an organization or programme (one or more projects).

Test suite: A set of several test cases for a component or system under test, where the post condition of one test is often used as the precondition for the next one.

Top-down testing: An incremental approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested. See also integration testing.

Traceability: The ability to identify related items in documentation and software, such as requirements with associated tests. See also horizontal traceability, vertical traceability.

Usability: The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions. [ISO 9126]

V-model: A framework to describe the software development life cycle activities from requirements specification to maintenance. The V-model illustrates how testing activities can be integrated into each phase of the software development life cycle.

Validation: Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled. [ISO 9000]

Variable: An element of storage in a computer that is accessible by a software program by referring to it by a name.

Verification: Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled. [ISO 9000]

Vertical  traceability:  The  tracing  of  requirements  through  the  layers  of  development documentation to components.

Volume testing: Testing where the system is subjected to large volumes of data.

Walkthrough: A step-by-step presentation by the author of a document in order to gather information and to establish a common understanding of its content. [Freedman and Weinberg, IEEE 1028]

White-box test design technique: Procedure to derive and/or select test cases based on an analysis of the internal structure of a component or system.

White-box testing: Testing based on an analysis of the internal structure of the component or system.

Wide Band Delphi: An expert based test estimation technique that aims at making an accurate estimation using the collective wisdom of the team members.

Wild pointer: A pointer that references a location that is out of scope for that pointer or that does not exist.
