Software Testing Terminology

Software Testing Terminology Glossary (ISTQB Glossary): Software testing has a large vocabulary of its own, and you have probably come across many unfamiliar terms and ISTQB Glossary acronyms.

If you want to expand your software testing vocabulary, this post is for you.

Software Testing Terminology

We describe each software testing term as plainly as possible so that everyone can understand it easily. I hope this demystifies some of the terms you hear and gives you a better idea of what software testers do.


A

  • Acceptance criteria: The specific conditions that a product must meet to satisfy the user’s or other stakeholder’s requirements.
  • Acceptance testing: This is to verify that a delivered product or service meets an organization’s acceptance criteria (or satisfies the users). The focus is on “customer” requirements, which may be defined by business analysts, managers, users, customers, and others.
  • Accessibility testing: Verifying that a product works for all audiences and excludes no one due to disability or hardware/software limitations.
  • Actual result: The actual outputs or data produced by a piece of software, be it the result of an expected operation or the effect of unexpected input.
  • Ad hoc testing: Informal testing performed without planning, documentation, or any formalized procedure; the tester improvises (“as you go”), relying on experience and intuition to probe the application.
  • Agile development: An iterative software development method that emphasizes evolutionary, feedback-driven programming with rapid and continuous testing. This approach aims to make minor design improvements based on customer feedback as soon as possible so that major changes are made before the code becomes overly complicated.
  • Alpha testing: An initial test phase that is limited in scope, time, and number of participants and focuses primarily on internal functionality. Developers or other development team members usually conduct it; people outside this group are typically involved later, during beta testing.
  • Ambiguous requirement: A requirement with more than one possible interpretation; you have to determine which was intended by consulting the original author or stakeholder, or by testing the feature.
  • Anomaly: Any deviation from the expected behavior of a program. Some anomalies indicate errors in the program, and others may be unexpected but correct behavior.
  • API (Application Programming Interface): A set of routines, protocols, and tools for building application software.
  • Application crash: A program error that causes it to end abnormally.
  • As-Built: The final product as actually produced by the developers, which may differ from the “as designed” version because of bugs, scope changes, schedule slippage, etc.
  • As Designed: The intended behavior of a program as described by its design and specifications. A reported issue may be closed as “works as designed” when the observed behavior matches this intent, even if it surprised the user.
  • Assumption: A belief or condition upon which an argument, plan, or action is based.
  • Audit: An inspection or other method of determining whether a system’s security matches its policy.

B

  • Beta testing: A test phase usually conducted by primary users, customers, or other interested parties.
  • Big-bang integration: An integration strategy in which all newly developed components are combined at once and the whole system is tested together, rather than being integrated and tested incrementally.
  • Black box testing: An approach that ignores the internal implementation details of a software product and instead tests its features from the outside, as though the program were a “black box.”
  • Blocker: Any bug that prevents a program or its parts from working, partially or entirely.
  • Bottom-up integration: An integration strategy in which the lowest-level components are tested first and then progressively combined and tested with higher-level components.
  • Boundary value analysis: A test design technique in which the software is exercised at the boundaries, or extremes, of its input values to detect abnormal results (see the sketch after this list).
  • Branch Coverage: The proportion of decision branches (for example, the true and false outcomes of each if statement) executed by the test suite. Coverage should be as high as practical.
  • Branch version: A new program version created from the production code that has been amended to fix a specific problem or include new features. Depending on its nature and importance, this new version may be distributed internally or externally on another release cycle. If it is sold as an upgrade to licensed users, it should be numbered like the original version.
  • BS 7925-1: A British Standard that defines a vocabulary of software testing terms.
  • BS 7925-2: A British Standard for software component testing, covering the component test process and test case design techniques.
  • Buddy system: A testing strategy in which two testers work together, taking turns at the computer and the documentation as they test the product.
  • Bug: Any software defect or flaw in a computer program that causes it to operate incorrectly. A bug differs from errors in design because bugs usually result from mistakes in coding rather than faulty logic within the software’s architecture.
  • Bug Bash: An event held by software teams to find (and often fix) as many bugs as possible in a given time frame. Usually, it comes with rewards for success!
  • Bug leakage: A defect that escapes detection during a testing phase and is found in a later phase or by end-users in production.
  • Bug tracking: A software tool, such as Jira or Bugzilla, used for recording defects and other issues in a program and their resolution.
  • Bug triage: A structured process for assessing and prioritizing defects, usually performed by Quality Assurance specialists during software development.
  • Build: A collection of software modules, program files, and documentation derived from the source code developed by a specific development team, produced to test or verify the product at some point during its life cycle. In addition to compiled binary code, this could also include other deliverables such as white papers, design documents, or test plans.
  • Build automation: A software tool that automates compiling, assembling, and linking computer programs and libraries.
  • Bug Life Cycle: A defect goes through many stages. It starts when it is first identified and ends when it is resolved.
  • Bug life cycle process: A set of tasks that must be completed to resolve or close a defect in the software.
  • Bug release: A release that ships with known defects, usually of low severity or priority.
  • Bug report: A document describing one or more defects found in the software, usually by a tester or end-user through defect reports and/or testing. Also called a “defect report” or “problem report”.
  • Bug scrubbing: A software development technique that checks collected bugs for duplicates or non-valid bugs and resolves these before any new ones are entered.
  • Bug Triage Meeting: An event during software development led by a QA manager or test lead. The meeting prioritizes newly found defects from production or testing environments, re-prioritizes existing open defects, and performs bug scrubbing. The aim is to order defects by importance so the most severe bugs are fixed first.
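
As a small illustration of boundary value analysis (mentioned above), here is a hedged sketch in Python. The `is_valid_age` function and the 18–65 range are hypothetical stand-ins for a real validation rule; the tests exercise the values at and just outside each boundary.

```python
# Hypothetical function under test: accepts ages from 18 to 65 inclusive.
def is_valid_age(age: int) -> bool:
    return 18 <= age <= 65

# Boundary value analysis: exercise the values at and just outside each boundary.
boundary_cases = {
    17: False,  # just below lower boundary
    18: True,   # lower boundary
    19: True,   # just above lower boundary
    64: True,   # just below upper boundary
    65: True,   # upper boundary
    66: False,  # just above upper boundary
}

for age, expected in boundary_cases.items():
    actual = is_valid_age(age)
    assert actual == expected, f"age={age}: expected {expected}, got {actual}"
print("All boundary value checks passed.")
```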

C

  • CAST: Computer-Aided Software Testing; the use of software tools to support testing activities.
  • Categorizing bugs: Classifying defects according to their general nature and impact, e.g., critical, major, or minor. Critical bugs are those that directly prevent the software from working; these categories guide which defects are fixed and retested first.
  • Change request: A formal request for software change that has been approved by the sponsor.
  • Checking: The process of verifying, using a systematic procedure, whether or not an element of a design or product complies with its specifications and is fit for purpose.
  • Checklist: A list of features and functions that must be tested before the software is accepted. Checklists are usually derived from a user requirements document or an older version of the software specification.
  • Checkpoint: Points in the project where a snapshot of the software can be taken for future evaluation. Checkpoints are usually scheduled to evaluate whether certain targets, such as quality or productivity goals, have been met.
  • Client: An end-user of a system/application under development.
  • Code Coverage: Measures the amount of code exercised by a test suite.
  • Code Freeze: The point in a release cycle after which code changes are no longer merged into the code base for that release, except for critical fixes. Freezing the code stabilizes the build so that final testing and release preparation can proceed against a known version before the release reaches the market.
  • Code reviews: An informal meeting at which the design and code of the software are inspected by a programmer and/or other peers. The purpose of this meeting is to examine code quality, detect errors, and eliminate or reduce the number of defects before the software is released.
  • Code standard: The set of rules or guidelines that a programmer must adhere to when programming.
  • Code walkthrough: A software development technique in which programmers meet to examine code line by line, discuss the rules and their application, and resolve any issues regarding coding standards. This meeting is usually held among programmers to discuss the functionality of certain parts of code.
  • Cohesion: The degree to which the elements within a software component belong together. Software components are more cohesive if they work towards a common goal. Components with low cohesion are more difficult to understand.
  • Comparison testing: Comparing a product’s strengths and weaknesses against competing products or against a previous version of the same product.
  • Compatibility testing: Ensuring that software will work with other systems or a specific platform. Compatibility testing is usually conducted manually by performing tests on the software using different computer platforms or automatically by simulating and running tests in various environments.
  • Compile time: The stage at which source code is translated into executable code (as opposed to run time); also used loosely for the time a build takes from start to finish.
  • Complexity: The inherent difficulty of an application, system, or problem. Complexity is distinct from other factors, such as usability or performance. When a software project approaches the limits of complexity without adding more resources to help manage it, it can start to produce errors or undesirable results. Complexity is a characteristic of solving the problem, not just the software trying to solve it.
  • Compliance: The degree to which software testing complies with the standards of the industry.
  • Component: A component is usually self-contained and sufficiently independent of other components that can be developed, tested, and maintained separately from the other parts of a system or application. Components are meant to decrease complexity by allowing functionality to be separated into logical units. A component may be a full-fledged service, or it can be an abstract concept, such as a set of functions that share some common structure and behavior.
  • Component integration testing: Testing that determines whether components interact correctly when combined. It is usually performed after the individual components have been coded and unit-tested, but before system testing.
  • Condition coverage: A measure of the degree to which test cases exercise the individual Boolean conditions within each decision; full condition coverage requires every condition to evaluate to both true and false (see the sketch after this list).
  • Conditional Sign-off: The approval of a software release that is conditional upon receipt and acceptance of additional deliverables.
  • Configuration testing: Testing that verifies the software works correctly across the different hardware and software configurations (operating systems, browsers, devices, settings) on which it is expected to run.
  • Context-driven testing: A testing method that relies on domain knowledge and heuristics to derive test cases.
  • Continuous integration (CI): A practice and system used to automate the process of building, testing, and releasing an application. A CI server (for example, Jenkins or CruiseControl) builds the project’s source code and executes its tests whenever a change is committed. The most significant advantage of continuous integration is the ability to identify integration problems early, which reduces the overall cost of fixing them.
  • Cornerstone: A test case used as a template for test case development in other areas of testing, such as boundary value analysis or error seeding.
  • Corrective actions: The action to be taken when a bug is found during testing.
  • Cost-benefit analysis: The process of comparing the cost of a testing effort against its expected benefits to decide whether it is worthwhile.
  • Coverage: The degree to which a test (or set of tests) exercises the code being tested. Several methods of measuring this include statement coverage, branch coverage, and condition coverage.
  • Crash: An abnormal termination of a program. Crash testing deliberately exercises an application’s failure modes to provoke unhandled exceptions and detect potential failures early.
  • Criteria: A set of rules applied to test results to determine whether they conform to the objectives established for a given test.
  • Criticality: The importance of a requirement (or requirement set) concerning meeting the objectives set forth for a testing project. Criticality is commonly expressed as high, medium, or low.
  • Criticality of testing: The importance of a test case for meeting the objectives set forth for a testing project. Criticality is commonly expressed as high, medium, or low.
  • Cross-browser testing: Testing a website from one or more browser platforms.
  • Crowd testing: An outsourcing model in which a product is tested by a distributed community of external testers, typically using their own devices and environments. Testers are usually incentivized per valid bug found or by early access to the product, and may also give public feedback on social media outlets such as Twitter and Facebook.
  • Customer Acceptance Testing (CAT): Testing conducted by the end-users/customers that determines whether they accept the software as meeting their needs and satisfying stated requirements. CAT focuses on how well the users can work with the software. Note that CAT should not be confused with beta testing done by external testers and customers.
  • Cycle time: The duration it takes to complete each iteration or sprint in an Agile project, usually measured in days or weeks. This is a key metric because shorter cycle times mean more opportunities for teams to inspect and adapt.
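
A hedged illustration of condition coverage (referenced above): the `gets_discount` function and its discount rule are hypothetical, but the test set makes each individual condition in the compound decision evaluate to both true and false.

```python
# Hypothetical function with a compound decision: a discount applies to
# members OR to orders of 100 or more.
def gets_discount(is_member: bool, order_total: float) -> bool:
    return is_member or order_total >= 100

# Condition coverage: each individual condition (is_member, order_total >= 100)
# must evaluate to both True and False somewhere across the test set.
cases = [
    (True,  50.0,  True),   # is_member True,  order_total >= 100 False
    (False, 150.0, True),   # is_member False, order_total >= 100 True
    (False, 50.0,  False),  # both conditions False
]
for is_member, total, expected in cases:
    assert gets_discount(is_member, total) == expected
print("Condition coverage cases passed.")
```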

D

  • Deadlock: A condition in which two or more threads or processes each wait indefinitely for resources held by the others, so none of them can proceed.
  • Debugging: A software development activity that aims to remove or correct errors in a program.
  • Decision table: A concise tabulation of the possible combinations of input conditions and the corresponding actions or outputs.
  • Decision Table testing: A test case design technique in which test cases are derived from a decision table, typically one per combination of logic conditions (see the sketch after this list).
  • Defect: An error or flaw in an existing product that causes it not to perform its intended function, cause incorrect results, or otherwise behave unexpectedly.
  • Defect report: A document that records and communicates information about a product or system deficiency, sometimes also defining the nature and cause of the problem.
  • Defect tracking system: A software tool used to track defects throughout their life cycle in such a way that they can be easily retrieved for future reference.
  • Deliverable: An objective artifact produced during an activity or phase in the software development process. For example, a test case is a deliverable from a test case design activity.
  • Dependency: Any reference within one product to another for its proper execution and successful completion. In software, it usually refers to a requirement upon another module or program that must be satisfied before the given module or program can function correctly.
  • Document review: A review technique that provides for the systematic examination of a document against its requirements or some other objective standard. Each requirement is reviewed by one or more reviewers from two perspectives: Did the author correctly understand and apply the requirements? Was the document written in accordance with the applicable procedures, standards, and style guides?
  • Domain Expert: An individual who is knowledgeable and experienced in a particular application area. Such individuals may provide information on the specific requirements for a given component or system; they may also be asked to participate in the testing process, either by serving as product testers themselves or by providing written feedback on test design techniques and results.
  • Downtime: The period when a computer or computer system is not operating correctly.
  • Driver: A software component or tool that controls and invokes the test object, providing input data and other stimuli to the program module or hardware component under test.
  • Dry run: Manually stepping through the logic of a program or test procedure without executing the code, to check the expected behavior in advance.
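
A hedged sketch of decision table testing (referenced above): the login rules and the `login` function are hypothetical, and each row of the table becomes one test case.

```python
# Hypothetical login rules expressed as a decision table:
# (valid_user, valid_password) -> expected action
decision_table = {
    (True,  True):  "grant access",
    (True,  False): "show password error",
    (False, True):  "show user error",
    (False, False): "show user error",
}

# Hypothetical implementation under test.
def login(valid_user: bool, valid_password: bool) -> str:
    if not valid_user:
        return "show user error"
    return "grant access" if valid_password else "show password error"

# Decision table testing: derive one test case per combination of conditions.
for (valid_user, valid_password), expected in decision_table.items():
    assert login(valid_user, valid_password) == expected
print("All decision table combinations passed.")
```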

E

  • Edge Case: A test case designed to exercise code handling a system’s exceptional or “fringe” conditions.
  • Effort (Test): The amount of work required to perform some action; the level of effort needed for a given level of performance may be expressed in terms of time, personnel, and other resources necessary to accomplish the task.
  • Emulator: A hardware or software system that duplicates the functionality of another system.
  • End-to-end test: A test to verify the operation of a complete application system by simulating the real-world environment in which it will (ideally) operate.
  • Entry criteria: The set of conditions that an item or test phase must satisfy before it can be accepted for further work or before testing can begin.
  • Equivalence partitioning (EP): Partitioning the input domain of a function, component, or system under test into classes of inputs that the software is expected to treat the same way (equivalence classes), so that one representative value per class can be tested (see the sketch after this list).
  • Error: An action or process that produces an effect different from the one expected.
  • Error guessing: A test design technique in which the tester manually searches for error-causing inputs, using program knowledge and heuristics to surmise likely candidates.
  • Escape analysis: Analysis of defects that escaped detection during testing and were found in a later phase or in production, used to identify weaknesses in the test process.
  • Estimate: The degree or amount of anything abstracted or inferred; a guess, usually based on incomplete data.
  • Execution path: The sequence or series of steps a computer program follows from start to finish as it runs.
  • Exhaustive testing: A test strategy that involves executing every possible path through a program or system to verify correct operation.
  • Exit criteria: A set of conditions that must be fulfilled before a test activity is considered complete.
  • Expected result: The result that is expected under the normal conditions of the test case. Also called ‘expected value’ or ‘desired value’.
  • Expertise: A form of knowledge acquired, built up, or learned over time through experience and training.
  • Explicit testing: Testing that relies on formal procedures for the setup, control, and execution of tests.
  • Exploratory testing: An approach in which test design, execution, and learning happen simultaneously: the tester investigates the application manually, using their knowledge, skills, and experience to decide what to test next based on what they have just observed. It is less scripted than formal testing but more disciplined than pure ad hoc testing.
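
A hedged sketch of equivalence partitioning (referenced above): the `grade` function and its score ranges are hypothetical; one representative value is chosen from each equivalence class instead of testing every possible score.

```python
# Hypothetical function under test: classifies an exam score.
def grade(score: int) -> str:
    if score < 0 or score > 100:
        raise ValueError("score out of range")
    return "pass" if score >= 50 else "fail"

# Equivalence partitioning: pick one representative per class of inputs
# that the software should treat the same way.
partitions = [
    (-5,  ValueError),  # invalid partition: below range
    (25,  "fail"),      # valid partition: failing scores 0-49
    (75,  "pass"),      # valid partition: passing scores 50-100
    (101, ValueError),  # invalid partition: above range
]
for score, expected in partitions:
    if expected is ValueError:
        try:
            grade(score)
            raise AssertionError(f"score={score} should have raised ValueError")
        except ValueError:
            pass  # expected rejection
    else:
        assert grade(score) == expected
print("One representative per equivalence class passed.")
```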

F

  • Failover: A mechanism that uses two or more systems to provide fault tolerance in case of a system failure. In many cases, one system is active, and the backup is inactive and idle until needed.
  • Failure: A condition or event that causes a program to terminate abnormally or produce incorrect results.
  • Fault: A discrepancy (incorrectness or incompleteness) between a planned or expected condition and an actual occurrence, such as failure of equipment, resources, or people to perform as intended.
  • Feature: A distinct capability of a software product that provides value in a given business process.
  • Feature test: A test that exercises a single feature to verify it behaves as specified and that the change does not affect existing features and other behaviors.
  • Formal review: A review that follows a defined process and is performed by people independent of the document’s author or the program under test.
  • Functional testing: Testing that verifies and validates an application’s functionality against its requirements.

G

  • Gamma testing: A late stage of testing, performed after beta testing on a near-final release candidate when all specified requirements are in place; it focuses on confirming readiness for release rather than on finding new design issues.
  • Gherkin: A specification language for describing the expected behavior of the software.
  • Glass box testing: Testing that examines program internal structures and processes to detect errors and determine when, where, why, and how they occurred.
  • Goal: A description of the expected outcome(s) when the program under test is executed.
  • Graphical User Interface: An interface that uses graphics to communicate instructions and information to the computer user.
  • Grey-box testing: A combination of white-box and black-box test design techniques that enables a tester to determine the internal structures, processing, inputs, and outputs of a program while examining how the inputs affect the outputs.
  • GUI Testing: Testing that verifies the functionality of a Graphical User Interface.

H

  • Hardening: A phase devoted to finding and fixing as many defects as possible in order to stabilize a product, usually when moving from an alpha to a beta release level or when preparing for customer acceptance testing.
  • Hotfix: A software change that is applied (often at the user site) to address a problem in an operational program.
  • Human error: A fault or failure resulting from an incorrect application of information, lack of appropriate knowledge, training, and skill on the part of personnel, misuse of equipment, improper installation, operation, maintenance, carelessness, or negligence.
  • Hybrid test design technique: A test design approach that combines structured (formal) techniques with ad hoc techniques.

I

  • IEEE 829-1998 (Standard for Software Test Documentation): A publication of the Institute of Electrical and Electronics Engineers (IEEE) that guides the creation of software test plans and test design documentation.
  • Impact analysis: Assessing the likely effect of a change or defect on the rest of the system, to determine how serious it is, what needs retesting, and whether it should be treated as a show-stopper that blocks further testing.
  • Inbound inspection: Inspecting a program newly received from outside your organization to find major (possibly showstopper) bugs before it is processed any further.
  • Incident: Any event observed during testing that requires investigation, typically a deviation from expected behavior. An incident may require corrective action or may be resolved as a side effect of correcting other incidents.
  • Incident report: A document submitted to management for each failure during testing. The incident report should include a problem summary, an analysis of its cause, and recommended corrective action. It is usually emailed to the appropriate manager or project leader, who will enter it into the defect/incident database for further processing.
  • Informal review: An informal inspection of documents typically involving two or three persons, intended to check that they make sense and conform to agreed standards and procedures.
  • Inspection: A form of peer review in which the object is to find defects, as opposed to providing constructive comments.
  • Install / Uninstall testing: Testing that checks the software installs, upgrades, and uninstalls correctly on the target computers or systems.
  • Installation test: A software product test to verify that it is installed correctly.
  • Integration testing: A test level in which independently developed components are combined into larger assemblies and tested together before system testing, to verify that they work collectively as a single unit (see the sketch after this list).
  • Issue: A problem or defect with a software product. An issue can be based on functionality, performance, usability, or compliance with standards.
  • International Software Testing Qualifications Board (ISTQB): An organization that offers certifications in several areas of software testing, including test management and test automation, and specifies a common set of skills and knowledge for each certification level.
  • Iteration/sprint: In agile methodologies for the development of software products, one iteration is an interval of time during which the team works on the full development cycle, from requirements to testing and back to development.
  • Iterative development: The process of developing a system or component in stages, with each stage building on previous ones. Iterative development allows changes and refinements throughout the project lifecycle.
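
A hedged sketch of an integration test (referenced above): `calculate_tax` and `build_invoice` are hypothetical units that could each pass their own unit tests; the integration test checks that they work correctly together.

```python
# Hypothetical unit: computes tax for an amount.
def calculate_tax(amount: float) -> float:
    return round(amount * 0.2, 2)

# Hypothetical unit: builds an invoice and depends on calculate_tax.
def build_invoice(amount: float) -> dict:
    tax = calculate_tax(amount)  # integration point between the two units
    return {"net": amount, "tax": tax, "total": round(amount + tax, 2)}

# Integration test: verify the combined behavior, not each unit in isolation.
invoice = build_invoice(100.0)
assert invoice == {"net": 100.0, "tax": 20.0, "total": 120.0}
print("Integration of invoice and tax modules verified.")
```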

J

  • JUnit: An open-source unit testing framework for the Java programming language, providing annotations, assertions, and test runners for writing and executing automated tests.

K

  • Kanban: A method for managing projects and workflow. Each project or task is represented as a card that is moved through columns, with the progress being tracked by an electronic board.
  • Kick-off meeting: A meeting held at the start of a project (and, in agile teams, at the start of each sprint) to agree on goals and objectives for the testers. All participants should be present, since the meeting is used to create the project schedule, gather progress updates from team members, and provide a status report for upper management.

L

  • Load testing: A test method that executes a program under a heavy workload to verify that it can handle the specified volumes of users, transactions, or data (see the sketch after this list).
  • Look and feel: The overall appearance of a software product, which includes elements such as layout and design.
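
A hedged, minimal sketch of the load-testing idea referenced above. The `do_request` function is a hypothetical stand-in for calling the system under test (for example, an HTTP endpoint); the script fires 100 concurrent requests and reports simple response-time metrics.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical operation under load; a real load test would call the
# system under test here instead of sleeping.
def do_request(_: int) -> float:
    start = time.perf_counter()
    time.sleep(0.01)  # stand-in for real work
    return time.perf_counter() - start

# Simulate 100 requests with 20 concurrent workers.
with ThreadPoolExecutor(max_workers=20) as pool:
    durations = list(pool.map(do_request, range(100)))

print(f"requests: {len(durations)}")
print(f"average response time: {sum(durations) / len(durations):.4f}s")
print(f"worst response time:   {max(durations):.4f}s")
```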

M

  • Maintainability: The ability of a software product to be modified and enhanced by adding new features or fixing problems.
  • Maintenance: The work involved in keeping a software product up to date with the latest bug fixes and enhancements after its initial development.
  • Manual testing: A test method in which a human tester executes test cases and explores the application without the aid of automation tools.
  • Modular testing: Testing based on the building blocks or modules that make up a software product; each module is tested individually before being integrated into full system tests.
  • Module testing: A test method where the software product is tested as a single section in isolation from other sections of the application.

N

  • Naming standard: A standard the development team uses to ensure consistency when naming variables or objects.
  • Negative testing: A test method that looks for ways of breaking the software product with invalid or unexpected input. For example, a web page is tested with random or malformed values entered into each field to ensure the application rejects them gracefully instead of failing (see the sketch after this list).
  • New feature: An enhancement to a software product that has not previously been implemented.
  • New requirement: A change or addition to the requirements for a software product made after the original requirements were agreed.
  • Non-functional testing: Testing that focuses on testing the quality of a software product. This includes usability, performance, and security tests, which are carried out by testers and developers.
  • NUnit: An open-source, unit testing framework for the .NET platform.
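
A hedged sketch of negative testing (referenced above): `parse_quantity` is a hypothetical parser, and each test input is deliberately invalid; the expectation is a clear error rather than acceptance or a crash.

```python
# Hypothetical parser under test.
def parse_quantity(text: str) -> int:
    value = int(text)          # raises ValueError for non-numeric input
    if value <= 0:
        raise ValueError("quantity must be positive")
    return value

# Negative tests: invalid inputs must be rejected gracefully, never accepted.
for bad_input in ["", "abc", "-3", "0", "1.5"]:
    try:
        parse_quantity(bad_input)
        raise AssertionError(f"{bad_input!r} was accepted but should be rejected")
    except ValueError:
        pass  # expected: invalid input is reported as an error
print("All negative test inputs were rejected as expected.")
```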

O

  • Open source: An open code development method that means anyone can access the source code of software products and improve them.
  • Operational testing: A test performed on a software product to make sure it is suitable for a production environment.
  • Outcome: The result of running a test, and the conclusion drawn from it. For example, if a software product fails validation testing, the outcome is that required functionality is missing and needs to be added.
  • Out-of-Scope: Functionality or areas explicitly excluded from the current testing effort; defects found there are recorded but are not addressed as part of the planned work.
  • Outsourcing: Moving work out of your business into other businesses or third parties.

P

  • PO (Product Owner): A role on the Scrum team that owns the product backlog and is responsible for maximizing the value of the product being developed.
  • Pair programming: A technique whereby two developers work together on the same piece of code at the same time.
  • Pair testing: A test method where two individuals, usually a developer and tester, take it in turns to run tests or run the same test together.
  • Parallel testing: Parallel testing involves running tests simultaneously, usually in different environments or on different computers. This allows defects to be identified faster and gives you a higher chance of finding them before release.
  • Path Coverage: A method of testing that looks at all the different paths through a software product to ensure that the code has been thoroughly tested.
  • Peer review: A method of quality assurance where developers test the work of other developers.
  • Performance: An aspect of the quality of a software product. Performance testing ensures that the performance of a software product meets customers’ requirements.
  • Performance Testing: Performance testing focuses on measuring how well a software product performs in a production environment rather than developing and testing the product’s functionality. The QA team usually conducts it after Alpha testing has been completed.
  • POC: POC stands for Proof of Concept. This is a quick prototype to determine the feasibility or suitability of the product. A POC allows you to test your idea, concept, solution, or code quickly and inexpensively before making any major changes or investments.
  • Positive testing: Testing with valid inputs to confirm that the software does what it is supposed to do (“test to pass”).
  • Prerequisites: A set of actions or conditions that must be satisfied before a test can run. For example, a tester might add preconditions at the beginning of every test case to ensure that the test runs under the right conditions (see the sketch after this list).
  • Priority: A rating given to each defect by the tester, usually based on how important the tester believes it is but also based on information in the bug report or defect description.
  • Production: The live environment in which end-users actually use the software, as opposed to development or test environments.
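
A hedged sketch of prerequisites/preconditions (referenced above), using Python's standard unittest module: the `setUp` method establishes the same known starting state (a hypothetical shopping cart) before every test.

```python
import unittest

class CartTests(unittest.TestCase):
    def setUp(self):
        # Prerequisite: every test starts from the same known state --
        # a hypothetical cart that already contains one item.
        self.cart = {"apple": 1}

    def test_add_item(self):
        self.cart["banana"] = 2
        self.assertEqual(len(self.cart), 2)

    def test_remove_item(self):
        del self.cart["apple"]
        self.assertEqual(len(self.cart), 0)

if __name__ == "__main__":
    unittest.main()
```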

Q

  • QA Consultant: A person who provides advice and expertise for quality assurance.
  • QA Engineer: A specialist engineer in the field of QA. The role may involve testing, developing, or managing projects, depending on experience and ambition.
  • Quality Assurance (QA): The set of planned, process-oriented activities that ensure products or services follow defined standards and procedures, so that defects are prevented as well as identified and corrected.
  • Quality Control (QC): The process of checking the job has been done correctly. QC is an integral part of QA, but QA involves a wider range of activities to ensure that products or services meet defined standards and procedures.
  • Quality metrics: A measurable quantity showing how good a product or service is.

R

  • Random Testing: A test strategy in which the input values for the test are chosen randomly from a range or set of possible values to obtain broader coverage (see the sketch after this list).
  • Recovery testing: Testing that deliberately triggers failures to verify that the program or system can recover and continue operating without losing data or functionality.
  • Regression Testing: A regression test ensures that errors from previous releases or builds have not crept into this release or build. It can also be used to validate new features, i.e., confirming that the new feature has not affected any other functions within the product.
  • Release: A version of a software product that has been fully tested and is ready to be released to customers.
  • Release Note: The document that goes with a release to explain what is in the release and how it will be supported.
  • Release testing: Testing which looks at the release criteria, plus any other areas that might require extra scrutiny.
  • Reliability: The ability of a product or service to perform satisfactorily and dependably under stated conditions for a specified period of time.
  • Requirements: The functions or qualities, sometimes in great detail, that are expected of a software product. Requirements can be documented either formally or informally and may originate from customers, users, or developers.
  • Requirements specification: A document defining what a product or service is expected to do, including functional and non-functional requirements, specifications, and acceptance criteria.
  • Retest: To repeat testing after software changes have been made to ensure that the product still meets requirements.
  • Re-testing: Re-testing is testing a software product after changes or fixes have been made to the code. This ensures that any new errors haven’t been introduced by the changes.
  • Retrospective meeting: A meeting held after the main part of a software development project (or at the end of each sprint) at which the team discusses what went well and what could be improved in the future. Also known as a post-mortem or lessons-learned meeting.
  • Reusability: The capability that a system or module can be used as part of another system.
  • Review: An inspection of a finished work product, usually to verify it meets certain criteria. In software testing, reviews may take place before or after the test to ensure that the test meets certain criteria and is fit for purpose.
  • Review meeting: A follow-up meeting held between individuals or groups after review activities have taken place, used as an opportunity for reflection, learning, refinement, and making improvements in processes, tools, and methods.
  • Reviewer: A person who looks at or reviews all or part of a document and provides feedback to the author.
  • Root cause analysis: The process of investigating the underlying causes of a particular problem, i.e., analyzing why a defect occurred. It typically involves discussing possible causes with the whole development team, including developers and testers, before deciding what action will prevent it from happening again.
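
A hedged sketch of random testing (referenced above): `absolute_value` is a hypothetical function under test, inputs are drawn at random from its domain, and each output is checked against a property that must always hold. A fixed seed keeps failures reproducible.

```python
import random

# Hypothetical function under test.
def absolute_value(x: int) -> int:
    return -x if x < 0 else x

# Random testing: random inputs, checked against an always-true property.
random.seed(42)  # fixed seed so any failure can be reproduced
for _ in range(1000):
    x = random.randint(-10_000, 10_000)
    result = absolute_value(x)
    assert result >= 0 and result in (x, -x), f"unexpected result for {x}: {result}"
print("1000 random inputs satisfied the expected property.")
```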

S

  • Sandwich integration: An integration testing strategy that combines top-down and bottom-up integration, working toward the middle layers from both directions.
  • Sanity testing: A quick, narrow check performed after a minor change or fix to confirm that the affected functionality works and that it is reasonable to proceed with further, more thorough testing.
  • Scalability: The ability of a software product to handle an increasing workload, for example by running on larger hardware or additional servers.
  • Scalability testing: A type of testing that ensures the application performs at its best when faced with an increasing workload as measured by the number or size of transactions, processor usage, etc.
  • Scenario: A description of how things could work in the future. This is often used as the basis for wish lists and functional requirements documents, but can also be used when people are trying to describe what something should do (rather than merely describing how it does work).
  • Scope of Testing: A definition of which features, functions, and quality attributes will be tested, and which will not. It sets the boundaries of the testing effort and helps determine how thoroughly each area is examined.
  • Script: A list of commands that can be used to control the execution of the program being tested.
  • Scrum (agile development framework): A simple yet powerful agile process for managing and completing product development work, used for anything from small pieces of software to large, complex programs. The word “scrum” comes from rugby, where it refers to a formation used when restarting play.
  • Scrum Master: A role in Scrum teams responsible for ensuring the team remains focused on achieving the sprint goal and that daily standup meetings take place.
  • SDLC (Software Development Lifecycle): The software development lifecycle is a model or framework used to plan, document, and control the IT system development process.
  • Security testing: The part of the software development process that determines how well a system or program adheres to security policies. The main purpose of security testing is to ensure that security threats are detected and removed from any application before the final release occurs.
  • Severity: The impact a defect could have on users: low (causes no real problems), medium (causes some loss of service but with a workaround), or high (a program component cannot perform its function). Severity is distinct from priority, which also takes into account the time and cost required to fix the defect.
  • Showstopper: A defect in a program that prevents it from operating at all. A showstopper is so serious that no testing can be done till it is fixed.
  • Simulator: An application that mimics another application’s behavior so you can test, evaluate, or experiment with it. The behavior in question may be that of a hardware device, operating system, or application program.
  • Smoke testing: A type of simple testing aimed at identifying basic flaws in a program. Smoke tests are used to weed out major defects, especially those that prevent the software from running at all. They are performed early in the development process on an incomplete system or one not fully tested for defects.
  • Software requirements specification (SRS): A document that describes what the software should do or deliver, outlining the business rules, user needs, and functional specifications for a particular project or program to be built by the development team(s).
  • Software Testing: The process of evaluating a software product against its requirements to determine the extent to which it meets those requirements and to detect defects.
  • Software Testing Life Cycle: The activities that take place during the testing process. A software testing life cycle involves analyzing a system, designing and implementing tests to evaluate how well it meets requirements, executing those tests, and finally reporting what happened (and drawing conclusions).
  • Software Testing Life Cycle Models: The three basic models used for planning software testing are top-down, bottom-up, and incremental. The top-down approach starts with a broad overview of the entire project and gradually narrows test activities to specific subroutines in the program code. The bottom-up approach focuses on testing small individual code modules first, then gradually adding more complex functionality until the whole system is tested. Based on how you define system components, incremental testing may include higher-level modules first and then progressively lower levels until all components are tested.
  • Software Testing Techniques: A variety of techniques can be used for software testing: white-box testing works through the internal structure and logic of the program’s code, black-box testing concentrates on externally visible behavior, interfaces, and system integration, and regression testing checks that changes to the program have not broken existing functionality.
  • Source code: The computer instructions written in a programming language that enables the computer to understand what it is supposed to do. Source code can be compiled to turn it into executable program files and machine-readable object code, or it may be directly executed without compilation.
  • Specification: A document that defines a product or component. Specifications are usually written for technical products but can also be written for business processes and services. Software specifications describe how the software will work.
  • Stakeholder: An individual or group with an interest in, or influence over, your project. Some stakeholders are connected to the project by interest or responsibility, while others may not be aware that they are connected at all.
  • State Transition Diagrams: A transition diagram is an organized list of states and the transitions between them. It depicts all possible state sequences through which a system can move.
  • Statement Coverage: Statement coverage is a software testing technique that requires each line of code to be exercised at least once.
  • Statement Testing: Statement testing is a software testing technique that requires individual statements within the source code to be executed and tested.
  • Static Analysis: Static analysis is a type of quality assurance, program verification, and sometimes application security assessment tool used to identify software source code or documentation defects. Unlike dynamic analysis, which runs a program to ensure it meets requirements, static analysis involves examining the code itself without running any program code.
  • Static Code Analysis: A quality assurance tool used to identify logical flaws and mistakes within application source code without running the associated program or test case.
  • Static Code Review: A technical review of a product’s source code without executing any program.
  • Status report: A report typically generated at the end of a Software Development Life Cycle (SDLC) phase or iteration summarizes progress, work performed, open issues, and risks identified during the reporting period.
  • Stress Testing: Stress testing is functional or load testing that involves stressing the product under test with different volume levels to evaluate how much stress it can support before performance or quality degrades beyond normal limits. The goal of stress testing is to determine the system’s breaking point to help identify potential problems before they occur.
  • Structural Testing: Testing based on the internal structure of the software, also called white-box or structure-based testing. Test cases are derived from the code itself, for example from its statements, branches, and paths, rather than from the requirements alone.
  • Structured Testing: Structured testing is a software testing technique that consists of three steps:
    1) develop logical test cases,
    2) execute those test cases, and
    3) interpret and report the results.
  • Structured Walk-through: Structured walk-through is a type of formal software inspection in which the participants follow a prescriptive set of guidelines while inspecting the product under test (PUT). The idea behind this type of testing is to let more people participate in an inspection and have it be more effective, efficient, and consistent.
  • Stub: An object or component used in place of a real one so that the system under test (SUT) can be exercised in isolation and its behavior checked against various inputs. Stubs typically provide canned responses to method calls or return specific values based on the inputs they receive (see the sketch after this list).
  • System Testing: Testing of the complete, integrated system against both its functional and non-functional requirements, using actual system components. One of the most important goals of this type of testing is to confirm that all specified requirements are present in the final product and that it meets its performance objectives.
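
A hedged sketch of a stub (referenced above): `OrderService` and `PaymentGatewayStub` are hypothetical; the stub stands in for a real payment gateway and returns canned responses so the service can be tested without contacting an external provider.

```python
# Production component: depends on a payment gateway to place an order.
class OrderService:
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount: float) -> str:
        return "confirmed" if self.gateway.charge(amount) else "declined"

# Stub: replaces the real gateway and returns a canned response.
class PaymentGatewayStub:
    def __init__(self, should_succeed: bool):
        self.should_succeed = should_succeed

    def charge(self, amount: float) -> bool:
        return self.should_succeed

# Exercise the system under test with both canned outcomes.
assert OrderService(PaymentGatewayStub(True)).place_order(10.0) == "confirmed"
assert OrderService(PaymentGatewayStub(False)).place_order(10.0) == "declined"
print("OrderService behaves correctly with stubbed gateway responses.")
```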

T

  • Test automation: Test automation is any mechanism that replaces the manual effort required to execute a test case. Test automation can range from simple software programs (such as a script or macro) designed to perform specific tasks to advanced class libraries and frameworks for building reusable components.
  • Test basis: The documentation (requirements, design specifications, user stories, and so on) from which test cases are derived.
  • Test Bed: A testbed is a set of tools, programs, and interfaces needed to test a specific component or system. Sometimes, the term testbed may be used synonymously with the test harness.
  • Test Case: A test case is an itemized list of instructions describing how to conduct one particular test or a group of tests. Test cases are used to configure the inputs, exercise the code under test, and evaluate and record the expected outputs for an automated software component.
  • Test case design technique: A test case design technique is an approach to designing test cases for a particular objective.
  • Test coverage: A measure of how much of the system has been tested, for example the amount of code exercised by the tests, the percentage of possible code paths exercised by one or more test cases, or the proportion of requirements covered by tests.
  • Test Coverage Criteria: Test coverage criteria are specifications of how much of the system or product should be tested. Test coverage criteria consist of statements about what types, numbers, and percentages of features and bugs should be tested by different types and numbers of test cases.
  • Test Data: Test data are the values a programmer uses for input into a component or system, along with the expected results of the inputs and execution of code. In analytical testing, test data are often derived from requirements specification documents. However, in functional testing, test data may be derived from the use cases, product descriptions, and other system or product documentation.
  • Test-driven development: A software development process built around very short cycles. In each cycle, the developer first writes an automated test for a small piece of new behavior, then writes just enough code to make that test pass, and then refactors. The resulting automated suite of tests is run as a regression test to help ensure that later changes do not introduce defects. These steps are repeated until all required functionality is developed and verified (see the sketch after this list).
  • Test driver: A test driver is a software module or program that exercises and verifies the functionality of another piece of code or system.
  • Test environment: An environment configured to resemble the production environment where customers will ultimately use the system, but where no customer-facing activity occurs. It is useful for testing software delivered online, where the test team cannot work against a live copy of the production system.
  • Test Estimation: Predicting the effort, time, and cost needed to carry out the testing activities for a project.
  • Test execution: The process of testing an application or system to identify any defects.
  • Test harness: A test harness is an integrated collection of programs and tools used to facilitate the running and monitoring of tests. Test harnesses are essential in automated testing methods such as functional, load, and performance testing.
  • Test level: The level of the test is the granularity at which tests are created and executed. This can range from very general, such as functional testing, to more specific like module- or class-level testing.
  • Test levels: Different levels of testing exist in software development, each with its own objectives and goals. These include unit, integration, system, and acceptance testing.
  • Test log: The test log is an official part of the QA document describing what has been tested, who did it, and how.
  • Test Manager: A test manager is an individual responsible for the overall coordination of a given testing effort.
  • Test object: A test object is a part of the software or product that is being tested, for example, software classes, functions, and modules.
  • Test plan: A test plan is a formal description of the scope, approach, resources, and schedule of intended testing activities. A test plan can be either detailed or brief. It communicates how the QA team intends to test a component or system. The test plan document should contain information on which product(s) will be tested, the objectives and exit criteria of the testing effort, identification of the verification techniques and tools used, and the responsibilities of those involved.
  • Test report: A test report is a document that describes the results of testing, including any defects found and their impact.
  • Test result: The result of a test is either positive or negative.  A positive result means that the expectation described in the test case was met; a negative result means it was not. Test cases whose results are determined to be inconclusive or not applicable are documented as such.
  • Test run: The process of executing a program or a specific set of test cases and recording the actual results experienced. A single execution of a test may be part of one or more test runs conducted over time.
  • Test scenario: A high-level description of a functionality or situation to be tested, from which one or more detailed test cases (including pre-conditions and expected results) can be derived.
  • Test script: A test script is a step-by-step document that describes what actions are to be taken and what results should be verified when performing a test or series of tests. Test scripts typically include specific inputs, execution conditions, expected results, and acceptance criteria.
  • Test Specification: A document that provides detailed information regarding how to execute one or more test cases for a given product under consideration for testing. Test specification documents typically include information on the scope, environment, preparation requirements, prerequisites, and steps to follow for each test case.
  • Test strategy: A test strategy is an overall approach to testing a product or system. It defines how the software will be tested, what environment the testing will take place in, and which deliverables will result from the testing effort.
  • Test stub: A test stub is a piece of code or component that supplies pre-defined inputs and records outputs or other parameters from its associated system under test. A stub is typically used to reduce the amount of work required to set up a testing scenario for use during later, more in-depth testing phases.
  • Test suite: A test suite is a collection of tests that can be run together to investigate or verify the implementation of a specific feature or functionality. Test suites are often used to determine if all requirements have been met and if gaps exist between what was specified and implemented.
  • Traceability matrix: A document or table that maps the relationships between requirements and other project artifacts, including source code, test cases, and reports. Its purpose is to show that every requirement is covered by at least one test case and to help assess the impact of changes. In one form or another, traceability matrices are a key component in most modern software development processes.
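
A hedged, minimal sketch of the test-driven development cycle (referenced above), using Python's standard unittest module. The `slugify` function and its behavior are hypothetical; the point is the order of steps: test first (red), minimal code (green), then refactor.

```python
import unittest

# Step 1 (red): the test is written first and would fail before the
# feature exists.
class SlugifyTests(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_already_clean_text_is_unchanged(self):
        self.assertEqual(slugify("testing"), "testing")

# Step 2 (green): write just enough code to make the tests pass.
def slugify(text: str) -> str:
    return text.strip().lower().replace(" ", "-")

# Step 3 (refactor): clean up the code while the tests keep it safe,
# then repeat the cycle for the next small piece of behavior.
if __name__ == "__main__":
    unittest.main()
```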

U

  • Unit testing: A method for testing individual software units or modules in isolation. Tests are created to verify each module’s functionality and to encourage quality attributes such as high cohesion and low coupling (see the sketch after this list).
  • Usability testing: Usability testing refers to any software testing that determines whether or not the users of a website, application, etc., can do what they want to accomplish quickly and with a minimum amount of effort.
  • Use case: A description of how an actor (a user or another system) interacts with the system to achieve a specific goal, i.e., the “thing” the user does to get useful work done. Use cases are often used during different phases of testing (functional testing, usability testing, and acceptance testing), as well as in requirements analysis, design, and project planning. They are based on the requirements and user stories.
  • User acceptance testing (UAT): A phase of testing performed by the end-users of a product to determine whether or not they accept the product’s performance based on what was agreed upon during project planning. Some organizations call this user verification testing (UVT).
  • User story: A user story is a description, written from the perspective of an end-user, of one or more features that will be included in a software product. User stories can vary from one to several sentences and are often created during the requirements analysis phase of the SDLC (software development process life cycle).
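
A hedged sketch of a unit test (referenced above), again using the standard unittest module: `word_count` is a hypothetical, isolated unit, and each test checks one small piece of its behavior.

```python
import unittest

# Unit under test: a single, isolated function.
def word_count(text: str) -> int:
    return len(text.split())

class WordCountTests(unittest.TestCase):
    def test_counts_words(self):
        self.assertEqual(word_count("software testing terminology"), 3)

    def test_empty_string_has_no_words(self):
        self.assertEqual(word_count(""), 0)

if __name__ == "__main__":
    unittest.main()
```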

V

  • Validation: Validation confirms that an application meets the users’ actual needs and will not cause problems when used as intended, i.e., that the team built the right product. Either software testers or end-users can perform validation testing.
  • Verification: Verification confirms that an application conforms to its specifications and design, i.e., that the team built the product right. It is typically carried out through reviews, inspections, and testing by the development and QA teams.
  • V-model: The V-model describes how a system should be developed in software engineering. In particular, it was created to help guide the acquisition and maintenance (or verification) of complex systems that are not strictly software but may have significant computing components.

W

  • Walkthrough: A face-to-face meeting in which requirements, designs, or code are presented to project team members for planning or verifying understanding. During development and testing activities, the meetings can be held periodically (e.g., every two weeks).
  • Waterfall model: This is a sequential design model where software development is seen as progressing from one phase to the next. The waterfall model is considered old-fashioned because of its linear approach and serial nature, which limits its flexibility in accommodating changes after the fact.
  • White box testing: A testing method that uses knowledge of the internal code structure to design tests. White box testers use logic, flowcharting, and analysis tools to derive test cases from the code itself rather than from test data supplied by domain experts.
  • Workaround: A temporary way of bypassing a known defect or limitation so that work can continue. Workarounds may be replaced by a proper fix or may end up becoming part of the final product.
  • WSDL (Web Service Description Language): A language used to provide a standard mechanism for web services to declare their capabilities and the protocols that they support.

