Software Testing Terminology | ISTQB Glossary

Software Testing Terminology Glossary or ISTQB Glossary: Welcome to another post in the manual testing tutorial series. In this post, we are going to learn about the different software testing terminology that is frequently used by QA testers.

We prepared this software testing glossary to help you become familiar with the words and phrases commonly used in testing and requirements work. If you find any software testing terminology missing from this glossary, or if you feel you know a better definition for a term than the one given here, please mention it in the comment section. We will review it and add the missing terminology to our glossary list.

Software Testing Terminology

We have tried to describe each software testing term in a straightforward way so that everyone can understand it easily.

If you’re a new software tester, you’ve probably been hearing lots of acronyms and jargon that were unfamiliar to you. And if you’re a hiring manager, you may be struggling to understand what some of these words mean.

Many of these are specific to software and IT, but most are used throughout the testing world. I hope that this will help demystify some of the terms you hear and give a better idea of what software testers actually do.

A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z

Because I got tired of googling glossary terms on software testing during my education, here’s a collection of some of the more common ones:

A

  • Acceptance criteria: The specific conditions which must be met by a product to satisfy the user’s or other stakeholder’s requirement.
  • Acceptance testing: Testing to verify that a delivered product or service meets the acceptance criteria (or satisfies the users) of an organization. The focus is on “customer” requirements, which may be defined by business analysts, managers, users, customers, and others.
  • Accessibility testing: Verifying that a product works for all audiences and excludes no one due to disability or hardware/software limitations.
  • Actual result: The actual outputs or data produced by a piece of software, be it the result of an expected operation or the effect of unexpected input.
  • Ad hoc testing: Informal testing done without planning, documentation, or any formalized way of doing things; it is performed “as you go”, relying on the tester’s intuition and experience rather than on prepared test cases.
  • Agile development: An iterative method of software development that emphasizes evolutionary, feedback-driven programming with rapid and continuous testing. This approach aims to make minor design improvements based on customer feedback as soon as possible so that major changes are made before the code has become overly complicated to change.
  • Alpha release: An early, limited release of a product, or the first stage of development at which it is put in front of users. The main purpose of an alpha release is to test core functionality and major design concepts with real users within a limited scope.
  • Alpha test phase: The period during software development just after unit testing, where major flaws have been eliminated from the system, but before all intended functionality has been implemented.
  • Alpha testing: An initial test phase that is limited in scope, time, and/or number of participants, and focuses primarily on internal functionality. It is usually conducted by developers or other members of the development team; people outside this group are typically involved later, in beta testing.
  • Ambiguous requirement: A requirement that has more than one possible interpretation; you have to figure out which one was intended, either by consulting the original author/stakeholder or by testing the feature.
  • Anomaly: Any deviation from the expected behavior of a program. Some anomalies indicate errors in the program, and others may be unexpected but correct behavior.
  • API (application programming interface): A set of routines, protocols, and tools for building application software.
  • Application crash: A program error that causes it to end abnormally.
  • As-Built: The final product produced by software developers; may be different from “as designed” because of bugs, changes to scope, schedule slippage, etc.
  • As Designed: The intended behavior of a program as described in its design documents; contrast with “as built”, which reflects what was actually delivered after development and testing.
  • Assertions: Statements placed in a program that check conditions which are expected to be true at all times; a failing assertion signals a defect (see the sketch after this list).
  • Assumption: A belief or condition upon which an argument, plan, or action is based.
  • Audit: An inspection or other systematic method of determining whether a system or process complies with its stated policies, standards, and procedures (for example, whether a system’s security matches its security policy).
  • Automated smoke test: A broad sanity test for a new build or release that is run automatically using an automated mechanism and provides a quick overview of the current level of code health.
  • Automated testing: Any tests which are performed by software tools as opposed to human testers (see also “Manual Testing“). Some manual intervention may still be needed, such as repairing a test after it fails, but the goal is to allow human testers to focus on higher-level tasks rather than low-level, repetitive details, which can be tedious and error-prone.
  • Audit trail: A chronological account of all the tests, changes, and bug fixes that have been applied to a program. This is useful in tracking backward through source code files or other versions so that the entire evolution of a program can be reconstructed.
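
To make the “Assertions” entry above concrete, here is a minimal Python sketch (the apply_discount function and its bounds are hypothetical, used purely for illustration): assertions are executable checks, not comments, and they fail loudly when a condition that should always hold is violated.

```python
def apply_discount(price: float, percent: float) -> float:
    # Assertion: the caller must pass a sensible percentage.
    assert 0 <= percent <= 100, "discount must be between 0 and 100"
    discounted = price * (1 - percent / 100)
    # Assertion: a discount must never increase the price.
    assert discounted <= price, "discounted price exceeds original price"
    return discounted

print(apply_discount(200.0, 25))   # 150.0
print(apply_discount(200.0, 150))  # raises AssertionError
```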

B

  • Beta test milestone: A release date on which software developers finish implementing changes to a program and are ready for external testing.
  • Beta testing: A test phase usually conducted by primary users, customers, or other interested parties.
  • Big-bang integration: An integration strategy in which all newly developed components are combined at one time and tested together, rather than being integrated and tested incrementally.
  • Black box testing: An approach that ignores the internal implementation details of a software product and instead tests the features from the outside as though they were “black boxes.”
  • Blocker: Any bug that prevents a program or its parts from working, either partially or entirely.
  • Bottom-up integration: An integration strategy in which the lowest-level components are tested first and then progressively combined and tested with higher-level components until the whole system is assembled.
  • Boundary value analysis: A test design technique in which the software is exercised at the boundaries, or extremes, of its input values to detect abnormal results (see the sketch after this list).
  • Bounding box testing: A form of black-box testing that addresses all possible boundary conditions of a program and verifies that they are handled correctly. This includes checking that data entered at the edges of the allowed ranges is accepted or rejected as specified, and that the results of calculations at those edges are passed on correctly to other programs.
  • Branch Coverage: The proportion of decision branches (for example, the true and false outcomes of each if statement) executed by the test suite. Coverage should be as high as practical.
  • Branch version: A new version of a program that is created from the production code which has been amended to fix a specific problem or include new features. This new version may be distributed only internally or externally on another release cycle, depending on its nature and importance. If it is sold as an upgrade to licensed users, it should be numbered like the original version.
  • Breakage: Any unexpected, improper changes made to a program by inadvertent human intervention.
  • BS 7925-1: A British Standard that defines a vocabulary of software testing terms.
  • BS 7925-2: A British Standard that specifies a process for software component testing, including test design techniques and measures.
  • Buddy system: A testing strategy in which two testers work together, taking turns at the computer and the documentation as they test the product together.
  • Bug: Any software defect or flaw in a computer program that causes it to operate incorrectly. A bug differs from errors in design because bugs usually result from mistakes in coding rather than faulty logic within the software’s architecture.
  • Bug Bash: An event held by software developers to solve as many bugs as possible in a given time frame. Usually comes with rewards for success!
  • Bug defer: Deferring the resolution of a bug until a future release of a program, either temporarily or permanently.
  • Bug leakage: A defect that escapes detection during a testing phase and is found in a later phase or by end users in production; in other words, a bug that “leaks” past the stage at which it should have been caught.
  • Bug tracking: The process of recording defects and other issues in a program and following them through to resolution, usually with the help of a tool such as Jira or Bugzilla.
  • Bug tracking system: A computer program in which defects or other issues in a program are identified and recorded. Also called issue tracking systems, defect tracking systems, or trouble ticketing systems.
  • Bug triage: A structured process for assessing and prioritizing defects, usually performed by Quality Assurance specialists during software development.
  • Build: A collection of software modules, program files, and documentation derived from the source code developed by a specific development team to test or verify it at some point during its life cycle. In addition to compiled binary code, this could also include other deliverables such as white papers, design documents, or test plans.
  • Build automation: A software tool that automates the process of compiling, assembling, and linking computer programs and libraries.
  • Burn-in period: A time during the development of a computer program when potential errors are exposed by operating the program continuously over an extended period.
  • Bug Life Cycle: The set of stages a defect passes through, starting when it is first identified and ending when it has been resolved and closed (for example: new, assigned, fixed, retested, closed).
  • Bug life cycle process: A set of tasks that must be completed to resolve or close a defect in the software.
  • Bug release: A release of software that ships with known bugs, usually because they are of low severity or priority and are documented in the release notes.
  • Bug report: A document describing one or more defects found in the software, usually by a tester or end-user through defect reports and/or testing. Also called “defect report” or “problem report”.
  • Bug scrubbing: A software development technique that involves checking collected bugs for duplicates, or non-valid bugs, and resolving these before any new ones are entered.
  • Bug Triage Meeting: An event during software development led by a QA manager or test lead. This meeting is used to prioritize new defects found in production or testing environments. This meeting is also used to re-prioritize closed defects and perform a bug scrubbing process. The purpose of the bug triage meeting is to organize bug defects in order of importance so the most severe bugs can be fixed first.
  • Bug triage phase: The period of time during the software development life cycle that is devoted to bug triage.
  • Build verification test: A software test that is designed to ensure that a properly built program will execute without unexpected results, and may be used as a final check before the program is distributed.
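
As a follow-up to the “Boundary value analysis” entry above, here is a minimal sketch using Python’s unittest (the is_eligible rule of ages 18–65 is a hypothetical example): the test exercises values at and just beyond each boundary, where off-by-one defects tend to hide.

```python
import unittest

def is_eligible(age: int) -> bool:
    # Hypothetical rule under test: ages 18 to 65 inclusive are eligible.
    return 18 <= age <= 65

class BoundaryValueTests(unittest.TestCase):
    def test_boundary_values(self):
        self.assertFalse(is_eligible(17))  # just below the lower bound
        self.assertTrue(is_eligible(18))   # lower bound
        self.assertTrue(is_eligible(65))   # upper bound
        self.assertFalse(is_eligible(66))  # just above the upper bound

if __name__ == "__main__":
    unittest.main()
```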

C

  • Capture/playback tool: A tool that can record user actions and replay them in a specified environment, usually for regression testing or functional/acceptance testing.
  • CAST: Computer-Aided Software Testing; the general use of software tools to support testing activities.
  • Categorizing bugs: Classifying defects according to their general nature and impact, e.g. critical, major, minor. Critical bugs are those that have a direct, severe impact on the operation of the software.
  • Change control board (CCB): A group of individuals responsible for approving changes to the requirements, design, or code.
  • Change management: The part of software development that is concerned with the management of changes to the requirements, design, code, and documentation. Change control boards or change advisory boards are normally responsible for making judgments on proposed changes.
  • Change request: A formal request for a change to the software, which is reviewed and then approved or rejected by the sponsor or change control board.
  • Checking: The process of verifying, using a systematic procedure, whether or not an element of a design or product complies with its specifications and is fit for purpose.
  • Checklist: A list of features and functions which must be tested before the software is accepted. Checklists are usually derived from a user requirements document or an older version of the software specification.
  • Checkpoint: Points in the project where a snapshot of the software can be taken for future evaluation. Checkpoints are usually scheduled to evaluate whether certain targets, such as quality or productivity goals have been met.
  • Clear box testing: Another name for white-box (glass-box) testing, in which the tester uses knowledge of the internal structure of the code to design and run tests.
  • Client: An end-user of a system/application under development.
  • Code Coverage: Measures the amount of code exercised by a test suite (see the sketch after this list).
  • Code Freeze: The point at which code changes stop being merged into the code base for a particular release. After the freeze, typically only critical bug fixes are accepted, which stabilizes the release branch before the new version ships.
  • Code injection: The ability to embed a test or breakpoint into the software which is executed during testing.
  • Code reviews: An informal meeting at which the design and code of the software are inspected by a programmer and/or other peers. The purpose of this meeting is to examine code quality, detect errors, and eliminate or reduce the number of defects before the software is released.
  • Code standard: The set of rules or guidelines that a programmer must adhere to when programming.
  • Code walkthrough: A software development technique in which programmers meet to examine code line by line, discuss the rules and their application, and resolve any issues regarding coding standards. This meeting is usually held among programmers to discuss the functionality of certain parts of code.
  • Cohesion: The degree to which the elements within a software component belong together. Software components are more cohesive if they work towards a common goal. Components with low cohesion are more difficult to understand.
  • Comparison testing: Testing that compares a software product’s strengths and weaknesses with those of competing products, or compares the outputs of two versions of the same product.
  • Compatibility testing: The process of ensuring that software will work with other systems or a specific platform. Compatibility testing is usually conducted either manually by performing tests on the software using different computer platforms, or automatically by simulating and running tests in various environments.
  • Compilation: The process of taking source code and compiling it into executable machine language.
  • Compile-time: Time measured from the beginning to the end of a build.
  • Completion Criteria: The conditions that must be met before a particular phase, activity, or type of testing can be considered complete. Completion criteria are usually documented in some way.
  • Complexity: The inherent difficulty of an application, system, or problem. Complexity is distinct from other factors such as usability or performance. When a software project approaches the limits of complexity without adding more resources to help manage it, it can start to produce errors or undesirable results. Complexity is a characteristic of the problem to be solved and not just the software that is trying to solve it.
  • Compliance: The degree to which software testing complies with the standards of the industry.
  • Component: A component is usually self-contained and sufficiently independent of other components that it can be developed, tested, and maintained separately from the other parts of a system or application. Components are meant to decrease complexity by allowing functionality to be separated into logical units. A component may be a full-fledged service, or it can be an abstract concept, such as a set of functions that share some common structure and behavior.
  • Component integration testing: A type of testing which determines whether all components interact correctly when used together during normal operations. This type of testing is usually performed late in the development process, after all components have been successfully coded and tested individually, but before full system testing.
  • Component testing: The process of testing the individual components behind a software system to ensure that any faults in one component do not affect the others. Component tests are small-scale tests that are usually run as the first tests in a testing cycle.
  • Concurrency Testing: The process of testing software operating on multi-user systems that are open to and actively receiving requests from users or other systems at any time, including concurrent running applications. Concurrency Testing is used to ensure that the application can handle multiple users accessing and using different parts of the system so that the response time of each user remains constant.
  • Condition coverage: A measure of the degree to which test cases exercise the individual Boolean conditions within a program’s decision points; ideally, every condition is evaluated to both true and false by at least one test case.
  • Conditional Sign-off: The approval of a software release that is conditional upon receipt and acceptance of additional deliverables.
  • Configuration Item (CI): A CI, which may be hardware or software, is the smallest deployable unit from one baseline to another. It should not be an aggregation of other CIs but rather a single element. For instance, in the case of Software, it could be an individual file or an executable.
  • Configuration management: The part of software development that is concerned with keeping track of which version(s) of the requirements, design, and code are in use at any point in time.
  • Configuration management testing: The process of making sure that new releases don’t adversely affect the operation of any other software already in place.
  • Configuration testing: Testing conducted to check that a software product works correctly across different hardware and software configurations, or to confirm that the configuration of the product has not been modified inadvertently since it was last tested. The result of such testing is often documented in a Configuration Baseline Report (CBR).
  • Context-driven testing: A testing approach in which techniques and practices are chosen to suit the specific context of the project, relying on domain knowledge, heuristics, and the tester’s judgment to derive test cases.
  • Contingency Plan: A plan to be applied if the main plans fail unexpectedly. Such a plan includes alternate ways and means of achieving the same objectives.
  • Continuous integration (CI): A system used to automate the process of building, testing, and releasing an application. Existing development tools such as CruiseControl build a project’s source code and execute its tests whenever a change has been committed. The most significant advantage of Continuous Integration is the ability to identify integration problems early, which reduces the overall cost of fixing them.
  • Control-flow graph (CFG): A diagrammatic representation of the program’s flow of control, with nodes representing decision points and arcs representing the possible sequences through those decision points. Control-flow graphs are commonly derived from the extracted flow charts created during structured programming development.
  • Cornerstone: A test case that is used as a template for test case development in other areas of testing, such as boundary value analysis or error seeding.
  • Corrective actions: The action to be taken when a bug is found during testing.
  • Cost-benefit analysis: The process of determining whether the cost of a testing effort will equal or exceed its benefits.
  • Coverage: The degree to which a test (or set of tests) exercises the code being tested. There are several methods of measuring this, such as statement coverage, branch coverage, and condition coverage.
  • Crash testing: Testing aimed at determining and exploiting the failure modes of an application’s processes, invoking unhandled exceptions or otherwise causing the program to crash, as a way of detecting potential failures early.
  • Criteria: A set of rules applied to the testing results obtained using one technique, to determine whether or not they conform to the objectives established for a given test.
  • Criticality: The importance of a requirement (or requirement set) with respect to meeting the objectives set forth for a testing project. Criticality is commonly expressed as high, medium, or low.
  • Criticality of testing: The importance of a test case with respect to meeting the objectives set forth for a testing project. Criticality is commonly expressed as high, medium, or low.
  • Critical-path testing: An informal term for the process of determining which tests should be performed first. When used without qualifiers, it refers to those tests whose execution must precede “all” other tests (i.e., all others that are currently planned). The same technique can also be used in a more general context to determine a series of tests whose execution must precede “all” others, including both planned and unplanned testing.
  • Cross-platform compatibility: The ability of a software product, usually an operating system, to be used on more than one hardware platform.
  • Cross-browser testing: The process of testing a website from one or more browser platforms.
  • Cross-platform software: Software that can be run on multiple operating systems.
  • Crowd testing: An approach in which testing is carried out by a large, distributed community of external testers, usually coordinated through an online platform. The crowd tests the product on their own devices and in their own environments, much as ordinary users would, and reports defects in exchange for rewards or payment per valid bug.
  • Customer Acceptance Testing (CAT): Testing conducted by the end-users/customers that determines whether they accept the software as meeting their needs and satisfying the stated requirements. CAT focuses on how well the users can work with the software. Note that CAT should not be confused with beta testing done by external testers and customers.
  • Cycle time: The duration of time it takes to complete each iteration or sprint in an Agile project, usually measured in days or weeks. This is a key metric because shorter cycle times mean more opportunities for teams to inspect and adapt.
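
To illustrate the “Code Coverage” and “Branch Coverage” entries, here is a minimal pytest-style sketch (the classify function is hypothetical): the two tests together execute both branches of the if statement, giving full branch coverage of this function. A coverage tool such as coverage.py (for example run as `pytest --cov`, assuming the pytest-cov plugin is installed) reports which statements and branches the suite actually exercised.

```python
def classify(n: int) -> str:
    if n < 0:
        return "negative"      # branch 1
    return "non-negative"      # branch 2

def test_negative_branch():
    assert classify(-1) == "negative"

def test_non_negative_branch():
    assert classify(0) == "non-negative"
```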

D

  • Daily build: A daily version of a software program that is made available to the development team for internal use.
  • Deadlock: A situation in which two or more threads or processes are each waiting indefinitely for a resource held by the other, so that none of them can proceed.
  • Debugging: A software development activity whose purpose is to remove or correct errors in a program.
  • Decision table: A concise tabulation of the combinations of conditions (inputs) and the actions (outputs) that should result from each combination; it is used as a basis for designing test cases.
  • Decision Table testing: A test case design technique in which test cases are derived from the rows of a decision table, so that every combination of conditions and its expected action is exercised (see the sketch after this list).
  • Defect: An error or flaw in an existing product that causes it to fail to perform its intended function, produce incorrect results, or otherwise behave unexpectedly.
  • Defect report: A document that records and communicates information about a product or system deficiency, sometimes also defining the nature and cause of the problem.
  • Defect tracking system: A software tool used to track defects throughout their life cycle in such a way that they can be easily retrieved for future reference.
  • Deliverable: An objective artifact produced during an activity or phase in the software development process. For example, a test case is a deliverable from a test case design activity.
  • Delphi Technique: A technique for collecting the opinions of a team or group of experts through iterative, usually anonymous rounds of questionnaires (often via an online, electronic forum), with feedback between rounds until a consensus emerges.
  • Dense testing: Determining whether all boundary values have been tested.
  • Dependency: Any reference within one product to another for its proper execution and/or successful completion. In software, it usually refers to a requirement upon another module or program that must be satisfied before the given module or program can function correctly.
  • Design for testability: A systematic approach to designing products and components so that they are easy to test.
  • Desk checking: Manually working through code or test documentation at one’s desk, reading it and tracing its logic by hand without executing the program, in order to find errors. It is often combined with peer review against a documented procedure or standard.
  • Difficult test case: A test case that is difficult to design and execute.
  • Difficulty: The extent to which a test case design technique requires skill and effort relative to other comparable techniques.
  • Document review: A review technique that provides for the systematic examination of a document against its requirements or some other objective standard. Each requirement is reviewed by one or more reviewers who consider it from two perspectives: (1) Did the author correctly understand and apply the requirements? (2) Was the document written in accordance with procedures, standards, style guides, etc.?
  • Domain Expert: An individual who is knowledgeable and experienced in a particular application area. Such individuals may provide information on the specific requirements for a given component or system; they may also be asked to participate in the testing process, either by serving as product testers themselves or by providing written feedback on test design techniques and results.
  • Downtime: The period of time when a computer or computer system is not operating correctly.
  • Driver: A test component that controls the behavior of the test object, providing input data and/or other stimuli to the controlled object, such as a program module or hardware component under test.
  • Dry-run analysis: Working through a test procedure or program logic manually, without actually executing it, to identify risks and confirm that the steps behave as intended before the real run.
  • Dynamic testing: An approach to software testing in which the program is actually executed. Tests supply inputs or data, which may be known (controlled) or generated, and the resulting behavior and outputs are observed and checked.
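
To make “Decision table testing” concrete, here is a minimal sketch using Python’s unittest (the free-shipping rule and the free_shipping function are hypothetical): each row of the decision table, a combination of conditions plus the expected action, becomes one checked case.

```python
import unittest

def free_shipping(is_member: bool, total: float) -> bool:
    # Hypothetical rule: members always ship free; others need a total of 50 or more.
    return is_member or total >= 50

# Decision table rows: (is_member, order_total, expected_free_shipping)
DECISION_TABLE = [
    (True,  10.0, True),
    (True,  60.0, True),
    (False, 10.0, False),
    (False, 60.0, True),
]

class DecisionTableTests(unittest.TestCase):
    def test_every_rule(self):
        for is_member, total, expected in DECISION_TABLE:
            with self.subTest(is_member=is_member, total=total):
                self.assertEqual(free_shipping(is_member, total), expected)

if __name__ == "__main__":
    unittest.main()
```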

E

  • Edge Case: A test case that is designed to exercise code handling the exceptional or “fringe” conditions of a system.
  • EEC Software Testing Standards Committee: EEC-CSTC. A committee of the European Economic Community (EEC), active in 1985–88, to promote the development and application of software engineering technology in EEC countries. The committee adopted the IEEE Standard 829 as a starting point for defining useful test documentation formats but viewed this format strictly as a guideline rather than a prescription for producing effective test documentation.
  • Effort (Test): The amount of work required to perform some action; the level of effort needed for a given level of performance may be expressed in terms of time, personnel, and other resources necessary to accomplish the task.
  • Emulator: A hardware or software system that duplicates the functionality of another system.
  • End-to-end test: A test to verify the operation of a complete application system by simulating the real-world environment in which it will (ideally) operate.
  • Entry criteria: The conditions an item or activity must satisfy before it can be accepted for further consideration or before a test phase can begin (“to pass the sniff test”).
  • Entry-point testing: A test case design technique that involves identifying and defining all the points at which a program will be presented with data to detect incorrect responses.
  • Equivalence partitioning (EP): Partitioning the input domain of a function, component, or system under test into classes of inputs that are expected to be processed in the same way (equivalence classes), so that only one representative of each class needs to be tested (see the sketch after this list).
  • Equivalence Regression Testing: A test design technique that relies on a certain property of program inputs, called an equivalence relation. An input is determined to be equivalent to another if substituting one for the other would yield the same results. Equivalence partitioning uses this relation as a basis for choosing test data, selecting the boundary values of equivalence classes.
  • Error: An action or process that produces an effect different from the one expected.
  • Error description: A description of the behavior produced by an incorrect input or execution condition, often in terms of the expected result.
  • Error guessing: A test design technique in which the tester manually searches for error-causing inputs, using knowledge of the program and heuristics to surmise likely candidates.
  • Error seeding: In program testing, the deliberate introduction of predetermined errors to be discovered during software tests.
  • Escape analysis: A Control flow and Data flow testing technique that uses path coverage metrics to measure the completeness of test cases.
  • Estimate: The degree or amount of anything abstracted or inferred; a guess, usually based on incomplete data.
  • Execution path: The particular sequence or series of steps followed by a computer program from start to finish as it runs.
  • Exhaustive testing: A test strategy that involves executing every possible path through a program or system to verify correct operation.
  • Exit criteria: A set of conditions that must be fulfilled before a test case is considered complete.
  • Expected result: The result that is expected under the normal conditions of the test case. Also called ‘expected value’ or ‘desired value’.
  • Experimental testing: A test design technique that involves executing a program under controlled conditions and monitoring its behavior and performance.
  • Expertise: A form of knowledge that has been acquired, built up, or learned over time through experience and training.
  • Explicit testing: Testing that relies on formal procedures for the setup, control, and execution of tests.
  • Exploratory testing: An approach to software testing in which test design and test execution happen at the same time. The tester investigates the application manually, using their knowledge, skills, and experience to decide where the most promising areas for testing are, and lets the result of each test guide the next. It can be thought of as structured ad-hoc testing: unscripted, but still purposeful and accountable.
  • External supplier:  A supplier of services or products that is not part of the organization being served.
  • External testing: Testing performed by parties not normally involved in the production of a software product.
  • Extreme programming: A software development methodology that advocates frequent releases in short development cycles (time-boxing), close contact with customers, and planning game.
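
As a follow-up to the “Equivalence partitioning” entry, here is a minimal pytest-style sketch (the password-length rule and is_valid_password are hypothetical): the input domain splits into three equivalence classes (too short, valid length, too long), and one representative value is tested from each class instead of every possible string.

```python
def is_valid_password(password: str) -> bool:
    # Hypothetical rule under test: valid passwords are 8 to 20 characters long.
    return 8 <= len(password) <= 20

def test_too_short_partition():
    assert not is_valid_password("a" * 3)    # representative of "too short"

def test_valid_length_partition():
    assert is_valid_password("a" * 12)       # representative of "valid length"

def test_too_long_partition():
    assert not is_valid_password("a" * 30)   # representative of "too long"
```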

F

  • Factory acceptance test: A test performed by the supplier that demonstrates the readiness of a system or subsystem to enter full-scale production use.
  • Failover: A mechanism that uses two or more systems to provide fault tolerance in the event of a system failure. In many cases, one system is active, and the backup is inactive and idle until needed.
  • Failure: A condition or event that causes a program to terminate abnormally or produce incorrect results.
  • False-negative: A report that a condition (such as a defect) is not present when it actually exists. False-negative errors are sometimes called ‘missed errors’.
  • False-positive: A report that a condition is present when it actually does not exist. This type of error may result from an incorrect configuration or testing environment, or from a false assumption made by the program itself in interpreting its results. Error messages about incorrectly formatted input data, for example, may be false positives if the program contains incorrect assumptions about the format and nature of its inputs.
  • Fault: A discrepancy (incorrectness or incompleteness) between a planned or expected condition and an actual occurrence, such as failure of equipment, resources, or people to perform as intended.
  • Fault Injection: The process of intentionally introducing faults (errors) into a computer program to test the robustness of the application (see the sketch after this list).
  • Feature: A distinct capability of a software product that provides value in the context of a given business process.
  • Feature test: A type of black-box test design technique used to control software changes by examining how they will affect existing features and other behaviors.
  • Flexibility: The ability of a software product to adapt to potential changes in business requirements.
  • Formal review: A review that follows a defined process, with documented procedures, defined roles, and recorded results, performed by a group of people who are independent of the author of the document or program under review.
  • Functional integration: The process of combining two or more separate software modules into one or more new, integrated capabilities.
  • Functional testing: Testing that verifies and validates the functional behavior of an application against its requirements.
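
To illustrate the “Fault Injection” entry, here is a minimal sketch using Python’s unittest and unittest.mock (the fetch_config and load_settings functions are hypothetical): the test deliberately makes a dependency fail in order to check that the caller handles the fault gracefully.

```python
import unittest
from unittest import mock

def fetch_config() -> dict:
    # In a real system this might read a file or call a remote service.
    return {"retries": 3}

def load_settings() -> dict:
    try:
        return fetch_config()
    except OSError:
        # Fall back to safe defaults when the configuration source fails.
        return {"retries": 1}

class FaultInjectionTest(unittest.TestCase):
    def test_falls_back_when_config_source_fails(self):
        # Inject the fault: make fetch_config raise as if the disk had failed.
        with mock.patch(f"{__name__}.fetch_config", side_effect=OSError("disk error")):
            self.assertEqual(load_settings(), {"retries": 1})

if __name__ == "__main__":
    unittest.main()
```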

G

  • Gamma testing: A late stage of testing carried out on a feature-complete release candidate, checking the specified requirements just before release.
  • Gherkin: A structured, plain-text specification language (Given/When/Then steps) for describing the expected behavior of software, commonly used with behavior-driven development tools.
  • Glass box testing: Testing that examines program internal structures and processes to detect errors and determine when, where, why, and how they occurred.
  • Goal: A description of the expected outcome(s) when the program under test is executed.
  • Graphical User Interface: An interface that uses graphics to communicate instructions and information to the computer user.
  • Gray-box testing: A combination of white-box and black-box test design techniques that enables a tester to determine the internal structures, processing, inputs, and outputs of a program while examining how the inputs affect the outputs.
  • GUI Testing: Testing that verifies the functionality of a Graphical User Interface.

H

  • Hardening: The process of finding and removing as many errors and weaknesses as possible to make a product more robust, usually carried out when moving from an alpha to a beta product release level (or in preparation for customer acceptance testing).
  • Heuristics: Rules of thumb which the tester applies to finding errors in software. These rules have been derived by experience and may often be very effective, but they are not guaranteed to uncover every defect that might exist.
  • Hierarchy testing: A type of test design technique used with object-oriented applications whereby program units or modules from one level of the hierarchy are tested individually before the levels are combined.
  • Hotfix: A software change that is applied (often at the user site) to address a problem in an operational program.
  • Human error: A fault or failure resulting from an incorrect application of information, lack of appropriate knowledge, training, and skill on the part of personnel; misuse of equipment; improper installation, operation, and maintenance; carelessness or negligence.
  • Hybrid test design technique: A black box test design technique that uses structured and ad hoc techniques.

I

  • IEEE 829-1998, Standard for Software Test Documentation: A publication of the Institute of Electrical and Electronics Engineers (IEEE) that gives guidance on creating a software test plan and test design documentation.
  • Impact analysis: A process used to gauge the effect of a change or a defect on the rest of the system. The objective is to determine how serious the issue is, what else might be affected, and whether it should be considered a show-stopper that blocks further testing.
  • Impact testing: The process of starting a program or its environment under normal conditions and then subjecting it to abnormal (stress) conditions to determine what errors will be revealed. Impact tests are usually performed at the end of each phase of testing.
  • Impedance matching: In electronics, matching the impedance of one element in a circuit or system to that of another to permit resonance and minimize wastage. In testing, a similar matching role is sometimes played by test scenarios, which sit between the development team and quality assurance personnel during the iterative process by which both parties discover errors in requirements, design, coding, and testing.
  • Inbound inspection: Inspecting a program newly received from outside your organization to find major (possibly showstopper) bugs before it is processed any further.
  • Incident: Any event occurring during testing or operation that requires investigation; it may or may not turn out to be caused by a defect, and it may require corrective action or be resolved as a result of identifying and correcting other incidents.
  • Incident report: A document submitted to management for each failure that occurs during testing. The incident report should include a summary of the problem, an analysis of its cause, and recommended corrective action. It is usually forwarded by email to the appropriate manager or project leader who will enter it into the defect/incident database for further processing.
  • Incremental integration testing: A form of integration testing whereby the system is tested in parts and then reassembled. The aim is to reduce the cost and time required to integrate the entire system while increasing product stability through a progressive approach.
  • Independent testing: Testing carried out from the perspective of an independent testing organization that is not part of the software development team, as opposed to in-house or contracted testing.
  • Informal review: An informal inspection of documents typically involving two or three persons, intended to check that they make sense and conform to agreed standards and procedures.
  • In-process testing: Testing conducted as a program is being developed, either by the developer or by other team members. This is in contrast to acceptance testing, which may be carried out manually before program delivery or automatically after it.
  • Insourcing: Hiring people internally to carry out testing.
  • Inspection: A form of peer review in which the object is to find defects, as opposed to providing constructive comments.
  • Install / Uninstall testing: Testing that checks the software installs, upgrades, and uninstalls correctly on the target computer or system.
  • Installation test: A test of a software product to verify that it is installed correctly.
  • Instrumentation code: Code that is added to a program to collect data for purposes such as performance monitoring or coverage measurement (see the sketch after this list).
  • Integration testing: A level of testing in which independently developed component programs are combined into larger assemblies and tested together before system testing. If the components are properly integrated, they should work collectively as a single unit.
  • Interface testing: Testing that checks the interfaces between separately developed software components for correct operation.
  • Internal consistency testing: The process of checking whether the code under test satisfies the high-level requirements stated in a formal specification or design document.
  • Internal supplier: A supplier who is located within the purchasing organization’s site or facility.
  • Internal testing: The process of checking whether the code under test is fit for its intended purpose; it does not involve checking for compatibility with other systems.
  • International Software Testing Qualifications Board (ISTQB): A body that offers an international qualification in software testing, test management, and test automation. Its syllabus covers a range of areas including project planning, risk analysis, and defect management.
  • Interoperability testing: Black-box testing conducted to ensure that systems or engineering work products from different organizations can interact together correctly.
  • Intra-domain testing: Testing the communications between components of a large system within one domain, to find any faults and create new tests if necessary.
  • Invalid assumption: An assumption that is false, either wholly or partially. This may be due to errors in requirements, design, or coding. Also known as an erroneous assumption.
  • Issue: A problem or defect with a software product. An issue can be based on functionality, performance, usability, or compliance with standards.
  • ISTQB:  The International Software Testing Qualification Board. An organization that offers certifications in several areas of software testing, including test management and test automation. It also specifies a common set of skills and knowledge for each certification level.
  • ISTQB- Advanced syllabus: A syllabus published by the ISTQB to cover the concepts necessary to carry out complex testing activities.
  • ISTQB- Foundation syllabus: A syllabus published by the ISTQB to cover the basic concepts required to carry out typical testing activities.
  • Iteration/sprint: In agile software development methodologies, an iteration (or sprint) is a fixed interval of time during which the team works through a full cycle of development, from requirements to implementation and testing.
  • Iterative development: The process of developing a system or component in stages, with each stage building on previous ones. Iterative development allows changes and refinements throughout the project lifecycle.
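
To make the “Instrumentation code” entry concrete, here is a minimal Python sketch (the build_report function is hypothetical): a decorator adds timing code around a function purely to collect performance data, without changing what the function does.

```python
import functools
import time

def instrumented(func):
    """Instrumentation: record how long each call to `func` takes."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            print(f"{func.__name__} took {elapsed_ms:.2f} ms")
    return wrapper

@instrumented
def build_report(rows):
    # Hypothetical function being observed.
    return sorted(rows)

build_report([3, 1, 2])
```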

J

  • Job security: The belief that if a worker does not do their job they won’t be in a position to collect payment or benefits. This can lead to workers not telling management of problems with products or processes.
  • JUnit: An open-source unit testing framework for the Java programming language, used to write and run automated, repeatable tests.

K

  • Kanban: A method for managing projects and workflow. Each project or task is represented as a card that is moved through columns on a board (physical or electronic), making the state of the work visible and allowing progress to be tracked.
  • Kick-off meeting: A meeting held at the start of the project to determine goals and objectives for testers on the project. Sprints should also have one at the start of each sprint. All participants need to be present as it can be used to create a project schedule, receive updates from team members on progress and it also serves as a status report for upper management.

L

  • Load testing: A test method that executes a program under a heavy workload to verify whether or not it can handle the specified volumes (see the sketch after this list).
  • Look and feel: The overall appearance of a software product, which includes elements such as layout and design.
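
As a follow-up to the “Load testing” entry, here is a very small Python sketch (the handle_request function is a hypothetical stand-in for the real workload): many concurrent calls are fired at the same operation and the total elapsed time is measured. Real load tests normally use dedicated tools, so treat this only as an illustration of the idea.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(i: int) -> int:
    time.sleep(0.01)   # simulate the work done for one request
    return i

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(handle_request, range(500)))
elapsed = time.perf_counter() - start

print(f"processed {len(results)} requests in {elapsed:.2f} s")
```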

M

  • Maintainability: The ability of a software product to be modified and enhanced by adding new features or fixing problems.
  • Maintenance: The work involved in keeping a software product up to date with the latest bug fixes and enhancements after its initial development.
  • Maintenance release: An update to a software product that typically contains minor enhancements and fixes for issues found in previous versions. Maintenance releases are usually released monthly or quarterly by commercial software vendors.
  • Manual testing: A test method in which a human tester executes test cases and explores the software by hand, without the aid of automation tools, and observes the results directly.
  • Missing: A defect classification for required functionality or behavior that should be present in the software product but has not been implemented.
  • Modular testing: Testing based on the building blocks or modules that make up a software product, each of which is tested individually before being integrated into full system tests.
  • Module testing: A test method where the software product is tested as a single section in isolation from other sections of the application.
  • Mutation testing: A technique in which small changes (“mutants”) are deliberately introduced into copies of the program’s source code, for example by flipping an operator, and the test suite is run against each mutant. A mutant that makes the tests fail is “killed”; mutants that survive reveal gaps in the test suite (see the sketch after this list).
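
To illustrate the “Mutation testing” entry, here is a hand-rolled Python sketch (max_of, its mutant, and run_suite are all hypothetical): a mutant version of the code flips one operator, and a good test suite should “kill” the mutant by failing against it. Real projects usually automate this with a mutation testing tool rather than writing mutants by hand.

```python
def max_of(a, b):
    # Original implementation.
    return a if a > b else b

def max_of_mutant(a, b):
    # Mutant: the ">" operator has been flipped to "<".
    return a if a < b else b

def run_suite(fn) -> bool:
    # A tiny "test suite": returns True only if every check passes.
    return fn(2, 3) == 3 and fn(5, 1) == 5

print(run_suite(max_of))         # True  -> the original passes
print(run_suite(max_of_mutant))  # False -> the mutant is killed, so the suite is effective
```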

N

  • Naming standard: A standard used by the development team to ensure consistency when naming variables or objects.
  • Negative testing: A test method that looks for ways of breaking the software product by feeding it invalid or unexpected input. For example, a web page is tested with random or malformed values entered into each field to ensure that the errors are reported gracefully rather than causing a crash (see the sketch after this list).
  • New feature: An enhancement to a software product that has not previously been implemented.
  • New Feature Testing: Testing newly added features of a software product separately from, and in addition to, the regression testing of existing features.
  • New requirement: A change or addition to the requirements for a software product that is made after the original requirements have been agreed.
  • Non-functional testing: Testing that focuses on testing the quality of a software product. This includes usability, performance, and security tests, which are carried out by testers as well as developers.
  • NUnit: An open-source, unit testing framework for the .NET platform.
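
To make the “Negative testing” entry concrete, here is a minimal sketch (assuming pytest is installed; the parse_age function is hypothetical): the tests feed deliberately invalid input and assert that the code reports the problem cleanly instead of misbehaving.

```python
import pytest

def parse_age(value: str) -> int:
    # Hypothetical parser under test.
    age = int(value)                 # raises ValueError for non-numeric input
    if age < 0:
        raise ValueError("age cannot be negative")
    return age

def test_rejects_non_numeric_input():
    with pytest.raises(ValueError):
        parse_age("abc")

def test_rejects_negative_age():
    with pytest.raises(ValueError):
        parse_age("-5")
```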

O

  • Open source: An open code development method that means anyone can access the source code of software products and make improvements to them.
  • Operational testing: A test performed on a software product to make sure it is suitable for a production environment.
  • Outcome: An outcome of a test is the result or reason for which it was run. For example, if a software product fails validation testing an outcome is that functionality is missing and needs to be added.
  • Out-of-Scope: Work, features, or defects that fall outside the agreed boundaries of the current testing effort or project; they are recorded, but not addressed as part of the current scope.
  • Outsourcing: Moving work out of your business into other businesses or third parties.

P

  • PO (Product Owner): A role on the Scrum team that owns the product backlog, represents the stakeholders, and is responsible for maximizing the value of the product being developed.
  • Pair programming: Pair programming is a technique whereby two developers work on the same piece of code at one time.
  • Pair testing: A test method where two individuals, usually a developer and tester, take it in turns to run tests or run the same test together.
  • Parallel testing: Parallel testing involves running tests at the same time, usually in different environments or on different computers. This allows defects to be identified faster and gives you a higher chance of finding them before release.
  • Path Coverage: A method of testing that looks at all the different paths through a software product to ensure that the code has been thoroughly tested.
  • Peer review: A method of quality assurance where developers examine and critique the work of other developers (code, designs, or documents) in order to find defects and improve quality.
  • Performance: An aspect of the quality of a software product. Performance testing ensures that the performance of a software product meets customers’ requirements.
  • Performance Testing: Performance testing focuses on measuring how well a software product performs under realistic, production-like conditions (response time, throughput, resource usage) rather than on verifying its functional behavior. It is usually conducted by the QA team once the functionality is reasonably stable, after earlier functional test phases have been completed.
  • Permission Testing: Ensuring that users are not able to access data or content they are not authorized to view.
  • Personalization Testing: Ensuring that personalization settings in a website or application work as they should.
  • POC: POC stands for Proof of Concept. This is a quick prototype to determine the feasibility or suitability of the product. A POC allows you to test out your idea, concept, solution, or code quickly and in an inexpensive way before making any major changes or investments in it.
  • Portability: Testing a program or application on different operating systems, hardware platforms, and software environments to make sure it works.
  • Positive testing: Positive testing verifies that a software product behaves as expected when it is given valid input and used as intended (the “happy path”).
  • Postconditions: Statements that describe the expected state or results after an action or test has run successfully, for example calculated values or variables that should have been set.
  • Pow Wow: A method of communication used in agile software development where developers and testers meet in short daily meetings to discuss issues, progress, etc.
  • Preconditions: Statements that describe the conditions, such as required data values or system state, that must be in place before a test can begin (a sketch showing preconditions and postconditions follows this list).
  • Prerequisites: A set of actions or conditions that must be satisfied before a test can run. For example, a tester might decide to add some steps to the beginning of every test case as preconditions to ensure that the test is run under the right conditions.
  • Prioritization: An approach to testing where defects are prioritized based on risk factors such as the severity of the defect or how often it occurs.
  • Priority: A rating given to each defect by the tester, usually based on how important the tester believes it is but also based on information in the bug report or defect description.
  • Production: The live environment in which a software product is used by real users for real work, as opposed to the development and test environments used within the project team.
  • Production defects: Defects found in the production environment after release. A production defect is one that was missed by developers and testers earlier on in the project.
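
To illustrate the “Preconditions” and “Postconditions” entries, here is a minimal unittest sketch (the Account class is hypothetical): the setUp method establishes the precondition (a known starting balance), and the assertion at the end checks the postcondition.

```python
import unittest

class Account:
    # Hypothetical object under test.
    def __init__(self, balance: float):
        self.balance = balance

    def withdraw(self, amount: float) -> None:
        self.balance -= amount

class WithdrawTest(unittest.TestCase):
    def setUp(self):
        # Precondition: every test starts with an account holding 100.0.
        self.account = Account(balance=100.0)

    def test_withdraw_reduces_balance(self):
        self.account.withdraw(30.0)
        # Postcondition: the balance reflects the withdrawal.
        self.assertEqual(self.account.balance, 70.0)

if __name__ == "__main__":
    unittest.main()
```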

Q

  • QA Consultant: A person who provides advice and expertise for quality assurance.
  • QA Engineer: A specialist engineer in the field of QA. The role may involve testing, developing, or managing projects, depending on experience and ambition.
  • Quality Assurance (QA): The set of planned, process-oriented activities intended to ensure that products or services follow defined standards and procedures, and that defects are prevented as well as identified and corrected. QA may be carried out by an independent group or third party.
  • Quality Control (QC): The process of checking the job has been done correctly. QC is an integral part of QA, but QA involves a wider range of activities to ensure that products or services meet defined standards and procedures.
  • Quality metrics: A measurable quantity that can be used to show how good a product or service is.

R

  • Random Testing: A test strategy where the input values for the test are chosen randomly from a range or set of possible values to get broader testing coverage (see the sketch after this list).
  • Record and playback tool: A tool used in testing where tests are recorded and then played back to verify that the test result is correct.
  • Recovery testing: Testing that checks how well a program or system recovers from crashes, hardware failures, or other problems, verifying that when its fail-safe mechanisms are activated it can continue or resume its operations without loss of data or functionality.
  • Regression Testing: A regression test is a set of tests designed to ensure that errors from previous releases or builds have not crept into this release or build. It can also be used as a way of validating new features, i.e. confirm that the new feature has not affected any of the other functions within the product.
  • Release: A version of a software product that has been fully tested and is ready to be released to customers.
  • Release management: The tasks involved in preparing a new release of a software product for distribution including checking that the product meets its defined criteria, coordinating with stakeholders, and ensuring that all relevant documentation is updated.
  • Release Note: The document that goes with a release to explain what is in the release and how it will be supported.
  • Release testing: Testing which looks at the release criteria, plus any other areas that might require extra scrutiny.
  • Reliability: The ability of a product or service to perform satisfactorily and dependably under stated conditions for a specified period of time.
  • Requirements: The functions or qualities, sometimes in great detail, that are expected of a software product. Requirements can be documented either formally or informally and may originate from customers, users, or developers.
  • Requirements specification: A document defining what a product or service is expected to do, including functional and non-functional requirements, specifications, and acceptance criteria.
  • Residual risk analysis: An analysis of risks remaining after mitigation actions have been taken.
  • Resume Testing: Restarting testing after it has been suspended, once the resumption criteria have been met (for example, after a blocking defect has been fixed and the affected areas have been retested).
  • Retest: To repeat testing after software changes have been made to ensure that the product still meets requirements.
  • Re-testing: Re-testing is testing a software product after changes or fixes have been made to the code. This ensures that any new errors haven’t been introduced by the changes.
  • Retrospective meeting: A meeting held after the main part of a software development project (or an iteration) is completed, where the team discusses what went well during the project and what could be improved in the future. Also known as a post-mortem or lessons-learned meeting.
  • Reusability: The capability that a system or module can be used as part of another system.
  • Review: An inspection of a finished work product, usually to verify it meets certain criteria. In software testing, reviews may take place before or after the test to ensure that the test meets certain criteria and is fit for purpose.
  • Review meeting: A follow-up meeting held between individuals or groups after review activities have taken place. It is used as an opportunity for reflection, learning, refinement, and making improvements in processes, tools, methods, etc.
  • Reviewer: A person who looks at or reviews all or part of a document and provides feedback to the author.
  • Risk-Based Testing: A testing strategy that focuses on finding those things that are most likely to cause problems for a software product. This is achieved by looking at each requirement individually, as well as looking through the system from end to end, and directing the most testing effort at the areas of highest risk.
  • Risk Management: The process of identifying, quantifying then mitigating risks to produce a safe working environment that reduces the threats and vulnerabilities faced.
  • Robustness: The ability of a program or system to function in an error-free manner when faced with invalid inputs or unexpected conditions.
  • Roll-out test: A test that is designed to check whether the changes made during a development project have had any impact on the rest of the product.
  • Root Cause: The actual reason something has gone wrong.
  • Root cause analysis: The process of investigating the underlying causes of a particular problem; in other words, an analysis of why a particular defect occurred. This will involve discussing possible causes with all members of the development team, including developers and testers, before deciding what action should be taken to prevent it from happening again.
  • RUP (Rational Unified Process): An iterative software project and product development methodology developed by Rational Software Corporation as an alternative to the waterfall model of the SDLC. It is suitable for very large projects and those that require expert knowledge in one or more specialized areas.
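
As a follow-up to the “Random Testing” entry, here is a minimal Python sketch (the absolute function is hypothetical): inputs are drawn at random from a range and checked against properties that must hold for every input, with a fixed seed so any failure is reproducible.

```python
import random

def absolute(n: int) -> int:
    # Hypothetical function under test.
    return -n if n < 0 else n

random.seed(42)   # fixed seed so a failing input can be reproduced
for _ in range(1000):
    n = random.randint(-10_000, 10_000)
    result = absolute(n)
    assert result >= 0, f"negative result for input {n}"
    assert result * result == n * n, f"magnitude changed for input {n}"

print("1000 random test cases passed")
```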

S

  • Sandwich integration: An integration testing strategy that combines top-down and bottom-up approaches: integration and testing proceed from the top and the bottom of the component hierarchy at the same time and meet in the middle.
  • Sanity testing: A quick, shallow testing pass to confirm that a build or change works at a basic level and is stable enough for more thorough testing to proceed. It’s often used when there is believed to be little risk of major defects, and it can reduce the time needed for later, more thorough testing.
  • Scalability: The ability of a software product to handle an increasing workload, for example by running on larger hardware or with additional resources, without unacceptable loss of performance.
  • Scalability testing: A type of testing that ensures the application performs at its best when faced with the increasing workload as measured by the number or size of transactions, processor usage, etc.
  • Scenario: A description of how things could work in the future. This is often used as the basis for wish lists and functional requirements documents, but can also be used when people are trying to describe what something should do (rather than merely describing how it does work).
  • Scope of Testing: The features, functions, and quality attributes that a given testing effort will and will not cover. Testing is done to ensure that products are fit for their intended purpose, but it can also be used to learn more about a product, to see whether it does what you think it will or whether it offers other capabilities that might help solve your problem; the agreed scope makes clear which of these goals the current effort addresses.
  • Screening test: A software testing technique that involves initially running a set of predefined tests against all requirements and through the complete system from end-to-end.
  • Script: A list of commands that can be used to control the execution of the program being tested.
  • Scrum (agile development framework): Scrum is one of several agile product development frameworks and is a simple yet powerful process for managing and completing even the most difficult project. It can be used to develop anything from small pieces of software to large, complex systems. The word “scrum” comes from rugby, where it refers to a formation used when restarting play.
  • Scrum Master: A role in Scrum teams that is responsible for making sure the team remains focused on achieving the sprint goal and that daily standup meetings take place.
  • SDD (Software Design Document): A document that describes how a system will work; it is produced during the design phase, before implementation begins.
  • SDLC (Software Development Lifecycle): The software development lifecycle is a model or framework used to plan, document, and control the process of developing an IT system.
  • Security testing: The part of the software development process that determines how well a system or program adheres to security policies. The main purpose of security testing is to make sure that security threats are detected and removed from any application before the final release takes place.
  • Seeded bug: A defect that is intentionally inserted by a tester or developer, usually to measure how effectively the testing process detects defects (sometimes called defect seeding or bebugging).
  • Selectors: A selector is an expression that identifies the location of a piece of data or an element. In automated UI testing, for example, a CSS or XPath selector locates a specific element on a web page; in a database context, a selector may be the unique identifier that links a record in one table to the corresponding record in another.
  • Severity: The severity of a defect is determined by the impact it could have on users, and may be either low (causes no real problems), medium (causes some loss of service, but a workaround is available), or high (the program or component cannot perform its function). Severity is distinct from priority, which reflects how urgently, and at what cost, the defect should be fixed.
  • Showstopper: A defect in a program that prevents it from operating at all. A showstopper is so serious that no testing can be done till it is fixed.
  • Simulator: An application that mimics the behavior of another application so you can test, evaluate or experiment with it. The behavior in question may be that of a hardware device, operating system, or applications program.
  • Site acceptance testing (SAT): A type of testing that is performed after the system has been installed to ensure that it performs well under real-world conditions.
  • Smoke testing: A type of simple testing aimed at identifying basic flaws in a program. Smoke tests are used to weed out major defects, especially those that prevent the software from running at all. They are performed early in the development process on an incomplete system or one that has not been fully tested for defects.
  • Software development plan: A document that demonstrates how the software will be developed; it outlines the objectives and points out the overall approach for developing a particular product or component. A software development plan should also include schedules, resource estimates, methods, and procedures.
  • Software estimation: The art of estimating how long it will take to develop a piece of software.
  • Software requirements specification (SRS): A document that describes what the software should do or deliver, when it will be delivered, and how much it is expected to cost. This document outlines the business rules, user needs, and functional specifications for a particular project or program to be built by the development team(s).
  • Software Testing: The process of evaluating a software product against its requirements to determine the extent to which it meets those requirements and to detect defects.
  • Software Testing Life Cycle: The activities that take place during the testing process. A software testing life cycle involves analyzing a system, designing and implementing tests to evaluate how well it meets requirements, executing those tests, and finally reporting what happened (and drawing conclusions).
  • Software Testing Life Cycle Models: Three basic models used for planning software testing include top-down, bottom-up, and incremental. The top-down approach starts with a broad overview of the entire project and gradually narrows down test activities to specific subroutines in the program code. The bottom-up approach focuses on testing small individual modules of code first, then gradually adds more complex functionality until the whole system is tested. Based on how you define components of the system, incremental testing may include higher-level modules first and then progressively lower levels until all components are tested.
  • Software Testing Techniques: A variety of techniques can be used for software testing purposes: white-box testing works through the internal structure and logic of the program’s code, black-box testing concentrates on externally visible behaviour such as interfaces and system integration, and regression testing checks that changes to the program have not broken existing functionality.
  • Source code: The computer instructions written in a programming language that enables the computer to understand what it is supposed to do. Source code can be compiled to turn it into executable program files and machine-readable object code, or it may be directly executed without compilation.
  • Specification: A document that defines a product or component. Specifications are usually written for technical products, but they can also be written for business processes and services. Software specifications describe how the software will work.
  • Specification testing: A type of testing done in preparation for system (acceptance) testing. It should be performed by people who are thoroughly familiar with the requirements.
  • Stage: A distinct part of the software life cycle, also called a phase; development is typically divided into two or more stages, each of which may focus on different aspects of system development, such as feasibility, design, and management.
  • Stakeholder: An individual or group with an interest in, or influence over, your project. Some stakeholders are connected to the project by interest or responsibility, while others may not be aware that they are connected at all.
  • State Transition Diagrams: A transition diagram is an organized list of states and the transitions between them. It depicts all possible state sequences through which a system can move.
  • State transition testing: Testing that checks the system moves between states as specified in response to events, and never ends up in an invalid or unexpected state; for example, that a “processing data” state moves to an error-handling state when the data turns out to be invalid. A minimal sketch appears after this section's list.
  • Statement Coverage: Statement coverage is a software testing technique that requires each line of code to be exercised at least once.
  • Statement Testing: Statement testing is a software testing technique that requires individual statements within the source code to be executed and tested.
  • Static Analysis: Static analysis is a type of quality assurance, program verification, and sometimes application security assessment tool used to identify defects in software source code or documentation. Unlike dynamic analysis, which runs a program to make sure it meets requirements, static analysis involves examining the code itself without actually running any program code.
  • Static Code Analysis: A type of quality assurance tool used to identify logical flaws and mistakes within application source code without having to run the associated program or test case.
  • Static Code Review: A technical review of a product’s source code without actually executing any program.
  • Status report: A report typically generated at the end of a Software Development Life Cycle (SDLC) phase or iteration that summarizes progress, work performed, open issues, and risks identified during the reporting period.
  • Storage Testing: Storage testing is a quality control technique used to evaluate the reliability and integrity of data storage media, such as disk drives, magnetic tapes, and solid-state memory devices. This type of software testing verifies that the hardware functions properly under various environmental conditions.
  • Stress Testing: Stress testing is a type of functional or load testing, which involves stressing the product under test with different levels of volume to evaluate how much stress it can support before performance or quality degrade beyond normal limits. The goal of stress testing is to determine the system’s breaking point to help identify potential problems before they occur.
  • Structural Testing: A white-box approach to test design that derives test cases from the internal structure of the software, such as its statements, branches, paths, and control flow, rather than from the requirements alone. Structure charts and flow graphs are often used to describe the code and to identify the test cases needed to cover it.
  • Structured Testing: Structured testing is a software testing technique that consists of three steps:
    1) develop logical test cases,
    2) execute those test cases, and
    3) interpret and report the results.
  • Structured Walk-through: Structured walk-through is a type of formal software inspection in which the participants follow a prescriptive set of guidelines while inspecting the product under test (PUT). The idea behind this type of testing is to let more people participate in an inspection and have it be more effective, efficient, and consistent.
  • Stub: In object-oriented programming, a stub is an object that can be used in place of a real object to determine whether the system under test (SUT) behaves correctly when presented with various inputs. Stubs typically provide canned responses to method calls or return specific values based on the inputs being fed to them (see the sketch after this section's list).
  • Supportability: Supportability refers to the ease at which a product, application, or service can be maintained.
  • Suspend Testing: Suspend testing is a software testing technique that allows the tester to stop executing the test cases or temporarily disable the execution of some test cases, without affecting other active test cases.
  • Sustained Testing: Sustained testing is an approach to software quality assurance (QA) in which QA professionals and testers are expected to identify a large number of bugs throughout the entire development lifecycle, and find a way to fix them. Testers who use this approach to testing concentrate on identifying every bug they can find in the codebase. Once they have been found, testers produce patches to eliminate these defects before releasing new versions of the software products.
  • System integration testing: System integration testing (SIT) is a black box software testing technique that uses the system requirements as test inputs and compares actual outputs to predicted results. This testing technique aims to ensure that each module of the system can communicate with all others and handle all necessary data.
  • System Testing: A testing level in which the complete, integrated system is tested against both functional and non-functional requirements before the product under test (PUT) is considered complete. One of the most important goals of this type of testing is to confirm that all specified requirements are actually present in the final product, as well as to determine whether or not the product meets its performance objectives.
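
As promised in the State transition testing entry above, here is a minimal sketch of the idea. The DataProcessor class, its states, and its events are hypothetical, invented purely for illustration; the tests check that valid events move the system to the expected state and that an unexpected event is rejected rather than producing an invalid state.

```python
# A minimal sketch of state transition testing, assuming a hypothetical
# data processor: valid input moves it from "processing" to "done",
# invalid input moves it to "error", and nothing else is allowed.
import unittest


class DataProcessor:
    """Tiny state machine used only to illustrate state transition tests."""

    VALID_TRANSITIONS = {
        ("idle", "start"): "processing",
        ("processing", "valid_data"): "done",
        ("processing", "invalid_data"): "error",
    }

    def __init__(self):
        self.state = "idle"

    def handle(self, event):
        key = (self.state, event)
        if key not in self.VALID_TRANSITIONS:
            raise ValueError(f"illegal transition: {key}")
        self.state = self.VALID_TRANSITIONS[key]


class StateTransitionTests(unittest.TestCase):
    def test_valid_data_reaches_done(self):
        p = DataProcessor()
        p.handle("start")
        p.handle("valid_data")
        self.assertEqual(p.state, "done")

    def test_invalid_data_reaches_error(self):
        p = DataProcessor()
        p.handle("start")
        p.handle("invalid_data")
        self.assertEqual(p.state, "error")

    def test_unexpected_event_is_rejected(self):
        p = DataProcessor()
        with self.assertRaises(ValueError):
            p.handle("valid_data")  # cannot skip the "processing" state


if __name__ == "__main__":
    unittest.main()
```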
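
And here is the sketch referenced in the Stub entry: a hypothetical OrderService (the system under test) that would normally call a real payment gateway is exercised with a stub that returns canned responses, so the test needs no network access.

```python
# A minimal sketch of using a stub, assuming a hypothetical OrderService
# that normally depends on a real, external payment gateway.
import unittest


class PaymentGatewayStub:
    """Stand-in for the real gateway: canned responses, no network calls."""

    def __init__(self, approve=True):
        self.approve = approve

    def charge(self, amount):
        return {"approved": self.approve, "amount": amount}


class OrderService:
    """System under test: confirms an order only if the charge is approved."""

    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount):
        result = self.gateway.charge(amount)
        return "confirmed" if result["approved"] else "rejected"


class OrderServiceTests(unittest.TestCase):
    def test_order_confirmed_when_charge_approved(self):
        service = OrderService(PaymentGatewayStub(approve=True))
        self.assertEqual(service.place_order(25.0), "confirmed")

    def test_order_rejected_when_charge_declined(self):
        service = OrderService(PaymentGatewayStub(approve=False))
        self.assertEqual(service.place_order(25.0), "rejected")


if __name__ == "__main__":
    unittest.main()
```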

T

  • Test automation: Test automation is any mechanism that replaces the manual effort required to execute a test case. Test automation can range from simple software programs (such as a script or macro) designed to perform specific tasks, to advanced class libraries and frameworks for building reusable components.
  • Test basis: The body of documentation and knowledge on which test cases are based, such as requirements, design specifications, or user stories.
  • Test Bed: A testbed is a set of tools, programs, and interfaces needed for the testing of a specific component or system. In some cases, the term testbed may be used synonymously with the test harness.
  • Test Case: A test case is an itemized list of instructions describing how to conduct one particular test or a group of tests. Test cases specify the inputs, execution conditions, and expected outputs used to exercise the code under test, whether performed manually or by an automated software component.
  • Test case design technique: A test case design technique is an approach to designing test cases for a particular objective.
  • Test coverage: Test coverage can mean several different things: the amount of code that has been tested; the percentage or ratio of all possible code paths through system code that have been exercised by one or more test cases; or a measure of how thoroughly the requirements or specifications that define the program have been exercised by tests.
  • Test Coverage Criteria: Test coverage criteria are specifications of how much of the system or product should be tested. Test coverage criteria consist of statements about what types, numbers, and percentages of features and bugs should be tested by different types and numbers of test cases.
  • Test Data: Test data are the values that a programmer uses for input into a component or system along with the expected results of the inputs and execution of code. In analytical testing, test data are often derived from requirements specification documents. However, in functional testing test data may be derived from the use cases, product description, and other system or product documentation.
  • Test-driven development: Test-driven development is a software development process that relies on the repetition of very short development cycles. In each cycle, a developer first writes an automated test for a small piece of new functionality, then produces just enough code to make that test pass, and finally refactors the code. The resulting automated suite of tests can be run as part of a regression test to help ensure that changes to the software do not cause any defects. These steps are repeated until all required functionality is developed and verified (see the sketch after this section's list).
  • Test driver: A test driver is a software module or program that exercises and verifies the functionality of another piece of code or system.
  • Test environment: A test environment is a copy of the production environment, matching it as closely as possible, where components or systems are tested but no customer-facing activity takes place. This type of environment is useful for testing software products that will be delivered online, where the test team does not have access to a live working copy of the production system.
  • Test Estimation: The process of predicting, before testing is carried out, how much time and/or cost the testing effort will need.
  • Test execution: The process of testing an application or system to identify any defects in it.
  • Test harness: A test harness is an integrated collection of programs and tools used to facilitate the running and monitoring of tests. Test harnesses are essential in automated testing methods such as functional, load, and performance testing.
  • Test level: The level of the test is the granularity at which tests are created and executed. This can range from very general such as functional testing to more specific ones like module-level testing or class-level testing.
  • Test levels: Different levels of testing exist in software development, each with its own objectives and goals. These include unit, integration, system, and acceptance testing.
  • Test log: The test log is an official part of the QA document describing what has been tested, who did it, and how.
  • Test Manager: A test manager is an individual responsible for the overall coordination of a given testing effort.
  • Test object: A test object is a part of the software or product that is being tested, for example, software classes, functions, and modules.
  • Test plan: A test plan is a formal description of the scope, approach, resources, and schedule of intended testing activities. A test plan can be either detailed or brief. It is used to communicate how the QA team intends to test a component or system. The test plan document should contain information on which product(s) will be tested, the objectives and exit criteria of the testing effort, identification of the verification techniques and tools that will be used, and the responsibilities of those involved.
  • Test policy: A test policy is a set of development guidelines and standards that an organization develops to define how it will handle testing-related issues and decisions.
  • Test process: The testing process involves various phases, moving from requirements analysis through preparation and planning to test execution, result verification, and reporting.
  • Test report: A test report is a document that describes the results of testing, including any defects found and their impact.
  • Test result: The result of a test is either positive or negative.  A positive result means that the expectation described in the test case was met; a negative result means that it was not met. Test cases whose results are determined to be inconclusive or not applicable are documented as such.
  • Test run: The process of executing a program, or a specific set of test cases, and recording the actual results experienced. A single execution of a test may be part of one or more test runs conducted over time.
  • Test scenario: A test scenario is a document that describes the pre-conditions for executing a test case, as well as the expected results.
  • Test script: A test script is a step-by-step document that describes what actions are to be taken and what results should be verified when performing a test or series of tests. Test scripts typically include specific inputs, execution conditions, expected results, and acceptance criteria.
  • Test Specification: A document that provides detailed information regarding how to execute one or more test cases for a given product under consideration for testing. Test specification documents typically include information on the scope, environment and preparation requirements, pre-requisites, and steps to follow for each test case.
  • Test strategy: A test strategy is an overall approach to testing a product or system. It defines how the software will be tested, what environment the testing will take place in, and which deliverables will result from the testing effort.
  • Test stub: A test stub is a piece of code or component that supplies pre-defined inputs and records outputs or other parameters from its associated system under test. A stub is typically used to reduce the amount of work required to set up a testing scenario for use during later, more in-depth testing phases.
  • Test suite: A test suite is a collection of tests that can be run together to investigate or verify the implementation of a specific feature or functionality. Test suites are often used to determine if all requirements have been met and if there are any gaps between what was specified and what has actually been implemented.
  • Test target: A test target is a piece of code being tested,  for example, one or more classes or functions in the software under test.
  • Test tool: A test tool is a utility used to help execute, manage or automate tests. Popular commercial tools include HP’s Quality Center.
  • Testability: Testability refers to the ease with which a system or component under test can be effectively tested. A system or component that is highly testable presents attributes and characteristics favorable to effective testing, while one that is not may require expensive, time-consuming effort to be sufficiently tested.
  • Testware: Testware is a term used to describe software that can assist in the testing process. Testware tools and utilities are generally designed to help with test case creation, execution and management, defect tracking, logging of results and other output data from tests performed, as well as communications between testers and developers.
  • Third-party component: A third-party component is a software item that is not developed by the same organization as the product itself.
  • Top-down integration: The top-down integration approach begins with the highest (or most general) level of components in a software system and proceeds to lower levels until reaching the lowest (most detailed) level.
  • TPI (Test Process Improvement): A model for assessing the maturity of an organization’s test process, measuring its current state, and identifying step-by-step improvements.
  • Traceability matrix: A traceability matrix is a document or table that depicts the relationships between requirements and other project artifacts, including source code, test cases, and reports. The purpose of the matrix is to show that every requirement is covered by at least one test case and to support impact analysis when requirements or tests change. In one form or another, traceability matrices are a key component in most modern software development processes.
  • Traceability report: A traceability report is a document that provides detailed information regarding how requirements have been fulfilled within the implemented system or software, showing connections between implemented and non-implemented artifacts that may be required during testing. This report can also include specific types of defects found in system tests (defect tracking).
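
The sketch referenced in the Test-driven development entry: one red-green-refactor cycle around a hypothetical slugify function. In practice you would write the tests first, watch them fail, then add just enough code to make them pass; the file below shows the state just after the "green" step.

```python
# A minimal sketch of one test-driven development cycle, using hypothetical
# names. Red: write the tests before `slugify` exists and watch them fail.
# Green: add just enough code to make them pass. Refactor: clean up while
# keeping the tests green.
import unittest


def slugify(title):
    # Minimal implementation written *after* the tests below, just enough
    # to make the current expectations pass.
    return title.strip().lower().replace(" ", "-")


class SlugifyTests(unittest.TestCase):
    def test_spaces_become_hyphens_and_case_is_lowered(self):
        self.assertEqual(slugify("Software Testing Glossary"),
                         "software-testing-glossary")

    def test_surrounding_whitespace_is_trimmed(self):
        self.assertEqual(slugify("  Hello World "), "hello-world")


if __name__ == "__main__":
    unittest.main()
```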

U

  • UML (Unified Modeling Language): A language used to define and design object-oriented applications; UML is organized around a set of notations, or diagrams, for visualizing and documenting the artifacts produced throughout the software development process.
  • Unit test framework: A unit test framework is a specialized software component that contains the mechanism and infrastructure for testing other code or components within an application. Unit test frameworks are often available as part of popular commercial development tools, such as Microsoft’s Visual Studio .NET, Rational Developer (IBM), and Eclipse (an open-source community project).
  • Unit testing: A method for testing individual software units, or modules. A series of tests is created to verify each module’s functionality and to determine whether the code meets specific quality standards such as high cohesion and low coupling (see the sketch after this section's list).
  • Usability testing: Usability testing refers to any type of software testing that determines whether or not the users of a website, application, etc. can do what they want to accomplish quickly and with a minimum amount of effort.
  • Use case: A description of an interaction between an actor (typically a user) and the system to achieve a particular goal; it captures the “thing” the user does to get some useful work done. Use cases are often used during different phases of testing (functional testing, usability testing, and acceptance testing), requirements analysis, design, and project planning. The use cases are based on the requirements and/or user stories.
  • Use Case Testing: Use case testing is a form of black-box testing, where the tester does not need knowledge of the underlying code to test it. Instead, use case tests can be based on confirmation that the system implements a given set of use cases (or scenarios). For example, A business analyst develops use cases for an accounting system and a test analyst uses those use cases as the basis for a set of interaction-based functional tests.
  • User acceptance testing (UAT): A phase of testing performed by the end-users of a product to determine whether or not they accept the product’s performance based on what was agreed upon during project planning. Some organizations call this user verification testing (UVT).
  • User story: A user story is a description, written from the perspective of an end-user, of one or more features that will be included in a software product. User stories can vary from one to several sentences and are often created during the requirements analysis phase of the SDLC (software development process life cycle).
  • UUT (Unit Under Test): The specific unit or component of the software system being exercised by a test, considered in isolation from the other units it depends on; an alternate term for a component under test. The unit under test is the primary entity being tested, as opposed to other components or devices that may be connected to it, and in some cases it can represent an overall area of functionality within the software system.
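
The sketch referenced in the Unit testing entry: a single hypothetical function tested in isolation with Python's built-in unittest framework, covering a typical case, a no-op case, and an error case.

```python
# A minimal sketch of unit testing with Python's built-in unittest framework,
# exercising one hypothetical function in isolation from the rest of the system.
import unittest


def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class ApplyDiscountTests(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)


if __name__ == "__main__":
    unittest.main()
```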

V

  • Validation: Validation is a way of ensuring that an application meets the users’ actual needs and will behave as intended when used in the real world; in other words, that the right product is being built. Validation testing can be performed by either software testers or end-users.
  • Verification: Verification is a way of ensuring that an application conforms to its specifications and design; in other words, that the product is being built right. Verification is typically performed by software testers and reviewers through inspections, reviews, and testing against the specification.
  • Versioning: The process of creating more than one version of a product.
  • Virtualization: Technology that allows software, including entire operating systems, to run on simulated (virtual) hardware rather than directly on a dedicated physical machine. For example, VMware and Microsoft Virtual PC allow users to run multiple operating systems on a single machine as if they were several separate computers.
  • V-model: In software engineering, the V-model is a way of describing how a system should be developed, pairing each development phase on the left-hand side of the “V” with a corresponding testing phase on the right-hand side. It was created in particular to help guide the acquisition and maintenance (or verification) of complex systems that are not strictly software but may have significant computing components.

W

  • Walkthrough: A face-to-face meeting in which requirements, designs, or code are presented to project team members for planning, or verifying understanding. The meetings can be held periodically (e.g., every two weeks) during development and testing activities.
  • Waterfall model: This is a sequential design model where software development is seen as progressing from one phase to the next. The waterfall model is considered old-fashioned because of its linear approach and serial nature, which limits its flexibility in accommodating changes after the fact.
  • Web Form: A set of input controls on a web page (such as text boxes, checkboxes, and buttons) that accepts input from the user for submission to a server, where each field often shares its name with a corresponding database table field.
  • White box testing: White box testing is a testing method that uses knowledge of the internal code structure to design tests. White box testers use logic, flowcharting, and/or analysis tools to derive test cases from the code itself rather than from test data supplied by domain experts (see the sketch after this section's list).
  • White list: The term white list is used to refer to a set of acceptable values, individuals, or processes that are permitted access to a system.
  • Work Breakdown Structure: The work breakdown structure (WBS) is a deliverables-oriented grouping of project activities that organizes the planning efforts and helps to define responsibility for each effort.
  • Workaround: An alternative way of achieving a result that bypasses a known defect or limitation in the product. Workarounds can be implemented as a temporary solution or may become part of the final product.
  • Workbench testing: Also known as integration testing, this checks how different parts of the system work together. For example, workbench testing might ensure that the various files needed to run a program are compatible with each other.
  • WSDL (Web Service Description Language): A language used to provide a standard mechanism for web services to declare their capabilities and the protocols that they support.
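
Finally, the sketch referenced in the White box testing entry: a hypothetical grade function whose tests are derived from the code's own branches. One test alone would leave the second return statement unexecuted; together the two tests exercise every statement, which is also the idea behind statement coverage.

```python
# A minimal sketch of white-box test design, assuming a hypothetical
# grading function. The tests come from the code's own branches, so that
# together they execute every statement (100% statement coverage).
import unittest


def grade(score):
    if score >= 50:
        return "pass"   # branch A
    return "fail"       # branch B


class GradeWhiteBoxTests(unittest.TestCase):
    def test_branch_a_pass(self):
        self.assertEqual(grade(75), "pass")

    def test_branch_b_fail(self):
        # Without this case, the `return "fail"` statement is never executed.
        self.assertEqual(grade(10), "fail")


if __name__ == "__main__":
    unittest.main()
```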

