ISTQB MCQs

These ISTQB multiple-choice questions and their answers will help you strengthen your grasp of the ISTQB syllabus. You can prepare for an upcoming exam or job interview with these 30 ISTQB MCQs.
So scroll down and start answering.

1: The percentage of paths that have been exercised by a test suite. 100% path coverage implies 100% LCSAJ coverage.

A.   Path coverage

B.   Decision table testing

C.   Test case suite

D.   White-box testing
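
For context on question 1, here is a minimal Python sketch of the path coverage idea; the function and test values are illustrative only, not from any standard:

    def classify(x):
        # Two decision points give four possible paths through the function.
        if x < 0:
            sign = "negative"
        else:
            sign = "non-negative"
        if x % 2 == 0:
            parity = "even"
        else:
            parity = "odd"
        return sign, parity

    # Exercising all four paths (negative/even, negative/odd,
    # non-negative/even, non-negative/odd) gives 100% path coverage.
    for value in (-2, -1, 2, 3):
        print(value, classify(value))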

2: Separation of responsibilities, which encourages the accomplishment of objective testing. [After DO-178b]

A.   Code

B.   Interface testing

C.   Independence of testing

D.   Co-existence

3: The individual element to be tested. There usually is one test object and many test items. See also test object.

A.   Finite state machine

B.   Capture/playback tool

C.   Test item

D.   Master test plan

4: An element of configuration management, consisting of the recording and reporting of information needed to manage a configuration effectively. This information includes a listing of the approved configuration identification, the status of proposed changes, and the implementation status of approved changes.

A.   Test execution

B.   Audit trail

C.   Status accounting

D.   Defect

5: Acronym for Computer Aided Software Engineering.

A.   Test logging

B.   Decision testing

C.   CASE

D.   Test comparator

6: The activity of establishing or updating a test plan.

A.   Performance

B.   Test manager

C.   Unit test framework

D.   Test planning

7: The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations. [After IEEE 610]

A.   Incident report

B.   Dynamic analysis

C.   Quality

D.   Outcome

8: Two persons, e.g. two testers, a developer and a tester, or an end-user and a tester, working together to find defects. Typically, they share one computer and trade control of it while testing.

A.   Business process-based testing

B.   Pair testing

C.   Instrumenter

D.   Efficiency

9: A high-level metric of effectiveness and/or efficiency used to guide and control progressive test development, e.g. Defect Detection Percentage (DDP).

A.   Testability

B.   Test performance indicator

C.   Safety

D.   Configuration control
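
For question 9, Defect Detection Percentage (DDP) is commonly calculated as the defects found by a test level divided by that number plus the defects found afterwards (e.g. in production). A quick illustration with made-up figures:

    # Illustrative numbers only.
    found_in_system_test = 45
    found_after_release = 5

    ddp = found_in_system_test / (found_in_system_test + found_after_release) * 100
    print(f"DDP = {ddp:.0f}%")  # DDP = 90%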

10: The percentage of boundary values that have been exercised by a test suite.

A.   Process improvement

B.   Boundary value coverage

C.   Procedure testing

D.   Process
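
A minimal sketch for question 10, assuming a hypothetical function that accepts ages 18 to 65 inclusive:

    def is_eligible(age):
        # Valid partition: 18..65 inclusive.
        return 18 <= age <= 65

    # One common convention takes 17, 18, 65 and 66 as the boundary values
    # for this range; exercising all of them gives 100% boundary value
    # coverage for this partition.
    for age in (17, 18, 65, 66):
        print(age, is_eligible(age))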

A.   Continuous representation

B.   Negative testing

C.   Formal review

D.   User acceptance testing

12: A tool that supports the recording of requirements, requirements attributes (e.g. priority, knowledge responsible) and annotation, and facilitates traceability through layers of requirements and requirements change management.

A.   Efficiency

B.   Requirements management tool

C.   Actual result

D.   Test tool

13: A meeting at the end of a project during which the project team members evaluate the project and learn lessons that can be applied to the next project.

A.   Release note

B.   Portability testing

C.   Retrospective meeting

D.   Best practice

14: A step-by-step presentation by the author of a document in order to gather information and to establish a common understanding of its content. [Freedman and Weinberg, IEEE 1028] See also peer review.

A.   Walkthrough

B.   Test basis

C.   Top-down testing

D.   Path

15: A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.

A.   LCSAJ testing

B.   Negative testing

C.   Resumption criteria

D.   Defect

16: Testing to determine the extent to which the software product is understood, easy to learn, easy to operate and attractive to the users under specified conditions. [After ISO 9126]

A.   Test basis

B.   Usability testing

C.   Quality attribute

D.   Use case testing

17: Hardware and software products installed at users' or customers' sites where the component or system under test will be used. The software may include operating systems, database management systems, and other applications.

A.   Availability

B.   Quality assurance

C.   Re-testing

D.   Operational environment

18: A software tool used to carry out instrumentation.

A.   Risk control

B.   Condition determination testing

C.   Risk type

D.   Instrumenter

19: Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes.

A.   Test case suite

B.   Configuration control board (CCB)

C.   Recoverability

D.   Beta testing

20: A variable (whether stored within a component or outside) that is read by a component.

A.   Ad hoc testing

B.   Random testing

C.   System testing

D.   Input

21: A statement of test objectives, and possibly test ideas about how to test. Test charters are used in exploratory testing. See also exploratory testing.

A.   Installability testing

B.   Software Usability Measurement Inventory (SUMI)

C.   Maintenance

D.   Test charter

22: The behavior predicted by the specification, or another source, of the component or system under specified conditions.

A.   Verification

B.   Expected result

C.   Test performance indicator

D.   Master test plan

23: The process of identifying differences between the actual results produced by the component or system under test and the expected results for a test. Test comparison can be performed during test execution (dynamic comparison) or after test execution.

A.   Test management tool

B.   Defect based test design technique

C.   System

D.   Test comparison

24: The capability of the software product to re-establish a specified level of performance and recover the data directly affected in case of failure. [ISO 9126] See also reliability.

A.   Recoverability

B.   Performance profiling

C.   Driver

D.   Project risk

25: A method to determine test suite thoroughness by measuring the extent to which a test suite can discriminate the program from slight variants (mutants) of the program.

A.   Mutation analysis

B.   Defect management

C.   Defect based test design technique

D.   Concurrency testing
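
A rough illustration of question 25's idea, using a trivial function and one hand-made mutant (both are hypothetical):

    def is_adult(age):            # original program
        return age >= 18

    def is_adult_mutant(age):     # mutant: '>=' replaced with '>'
        return age > 18

    # A weak suite cannot discriminate the program from the mutant
    # (the mutant "survives"):
    print(all(is_adult(a) == is_adult_mutant(a) for a in (17, 25)))      # True

    # Adding the value 18 "kills" the mutant, showing that the larger
    # suite is more thorough:
    print(all(is_adult(a) == is_adult_mutant(a) for a in (17, 18, 25)))  # False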

26: The ease with which the software product can be transferred from one hardware or software environment to another. [ISO 9126]

A.   Test oracle

B.   Regression testing

C.   Portability

D.   Fault seeding

27: A pointer within a web page that leads to other web pages.

A.   Adaptability

B.   Wide Band Delphi

C.   Hyperlink

D.   Scripted testing

28: A white box test design technique in which test cases are designed to execute combinations of single condition outcomes (within one statement).

A.   Master test plan

B.   Performance

C.   Incident logging

D.   Multiple condition testing
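
To illustrate question 28, assume one decision built from two single conditions; the technique exercises every combination of their outcomes (here 2^2 = 4):

    def can_checkout(logged_in, cart_not_empty):
        # One decision statement composed of two single conditions.
        if logged_in and cart_not_empty:
            return "proceed"
        return "blocked"

    # All combinations of the single condition outcomes: TT, TF, FT, FF.
    for combo in [(True, True), (True, False), (False, True), (False, False)]:
        print(combo, can_checkout(*combo))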

29: The capability of the software product to be used in place of another specified software product for the same purpose in the same environment. [ISO 9126] See also portability.

A.   Test progress report

B.   Performance testing tool

C.   Replaceability

D.   LCSAJ

30: A type of performance testing conducted to evaluate a system or component at or beyond the limits of its anticipated or specified workloads, or with reduced availability of resources such as access to memory or servers. [After IEEE 610] See also performance testing.

A.   Project

B.   User acceptance testing

C.   Precondition

D.   Stress testing