Answer these 100+ Software Quality Assurance MCQs and assess your grip on the subject.
Scroll down and get started!
A. User attemptation testing
B. User acceptance testing
A. Validation testing
B. Unit testing
C. System testing
D. Integration testing
A. black-box
B. white-box
C. glass-box
D. grey-box
A. an old method of calculating the distance between two objects
B. a principle or mechanism by which we can tell if the software is working according to someone’s criteria
C. No such thing!
D. a method by which learning takes place as a result of discoveries informed by exploration.
A. To assess whether the software is ready for release.
B. To find faults in the software.
C. To prove that the software is correct.
D. To demonstrate that the software doesn’t work.
A. Statements in the program
B. Independent logic paths in the program
C. Errors in the program
D. Cycles in the program
A. It is discovered that the requirement it suggests implementing already exists.
B. Its priority is very high and it needs to be handled immediately.
C. The testing team is charged with seeing it through.
D. It is technologically complex.
A. False
B. True
A. False
B. True
A. Exercise the logical conditions in a program module
B. Focus on testing the validity of loop constructs
C. Rely on basis path testing
D. Select test paths based on the locations and uses of variables
A. Customer satisfaction
B. Defect tracking
C. Teamwork
D. Configuration control
A. Performance errors
B. Typos and logic errors
C. Typographical errors
D. Logic errors
E. Behavioral errors
A. Initial, Managed, Defined, Quantitatively Managed, Optimizing
B. Neither of these
C. Functionality, Reliability, Usability, Efficiency, Maintainability, Portability
A. Tool used for non-functional testing
B. Freeware open-source tool
C. Web-based testing tool
D. All of the above
E. Automation tool
A. White box
B. Gray Box
C. JUnit test
D. Integration Testing
E. Black Box
A. Session-based testing is a software test method that aims to combine accountability and exploratory testing to provide rapid defect discovery and creative on-the-fly test design.
B. Session-based testing is a software test method that involves the tester recording their behaviours to be reviewed at a later stage.
C. Session-based testing is a software test method that aims to upskill junior testers who might not be as comfortable with the system under test.
D. Session-based testing is a software test method that aims to combine speedy results with the expectations of an agile team.
A. focus on testing the validity of loop constructs
B. select test paths based on the locations and uses of variables
C. exercise the logical conditions in a program module
D. rely on basis path testing
A. Internal program design and code
B. Requirements and functionality
C. How an application functions under heavy loads
D. Code statements, branches, paths and conditions
A. Compatibility: most emulators are incompatible with popular versions of MS Windows.
B. Installation issues: installing and handling emulators is more cumbersome than handling real hardware.
C. Lack of reliability: emulators might not represent the hardware constraints adequately.
D. Lack of reliability: emulators might not represent the user interface adequately.
A. True
B. False
A. Select test paths based on the locations and uses of variables
B. Exercise the logical conditions in a program module
C. Rely on basis path testing
D. Focus on testing the validity of loop constructs
A. Exception
B. Alpha
C. Beta
D. Black Box
A. Extreme Programming
B. Waterfall Method
C. Agile Development
D. Test Driven Development
A. Usage Exploration
B. User's Experience
C. Under Existence
D. Usability Experience
A. Incorrect or missing functions
B. Incorrect or missing functions, and interface errors and performance errors
C. Interface errors
D. Performance errors
E. None of these
A. Alpha tests are conducted by the user, while Beta tests are done by the testing team.
B. Alpha tests are done by the user under the supervision of the testing team at the test lab, while Beta tests are done by the user in the user's facilities without close supervision.
C. Beta tests are conducted by the user, while Alpha tests are done by the testing team.
D. Beta tests are done by the user under the supervision of the testing team at the test lab, while Alpha tests are done by the user in the user's facilities without close supervision.
A. A formal testing of the software
B. An informal meeting for evaluation or informational purposes
C. An in-depth look into how the software functions
D. None of these
A. Severity is determined primarily by the technical factors, while Priority is determined by business-related factors.
B. Severity is determined by the developers and business analysts, while Priority is determined by the testing team.
C. Priority is determined primarily by the technical factors, while Severity is determined by business-related factors.
D. Severity is determined by user expectations, while Priority is determined by the impact on the functionality.
A. Holistic Test Security Model
B. Human Test Strategy Model
C. Heuristics Test Strategy Model
D. High Tech Strategy Management
E. Hollow Team Strategy Model
A. Sampling them rather than trying to test some huge set of inputs.
B. A special type of Uniform Resource Identifier (URI).
C. Encryption that can protect the channel over which your conversation happens.
D. A key-value pair with an equals sign (=) between the key and the value.
A. Plan - Act - Check - Do
B. Plan - Do - Check - Act
C. Plan - Check - Act - Do
D. Plan - Check - Do - Act
A. Usability
B. Performance
C. Black Box
D. Functionality
A. Users
B. Developers
C. Customer Support
D. Test Engineers
A. Quality Assurance
B. Detective control
C. Quality control
D. Corrective control
A. A process in which the tester takes only boundary values for testing
B. A process in which the tester takes boundary values and middle values for testing
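For context on the options above: boundary value analysis picks test inputs at the edges of the valid range, often alongside a nominal middle value. A minimal sketch in Python, assuming a hypothetical is_valid_age function that accepts ages 18-60 inclusive (both the function and the chosen range are illustrative, not from the quiz):

```python
def is_valid_age(age):
    """Hypothetical function under test: accepts ages 18-60 inclusive."""
    return 18 <= age <= 60

# Boundary value analysis: probe just below, on, and just above each boundary,
# plus a representative middle value.
cases = {17: False, 18: True, 19: True, 39: True, 59: True, 60: True, 61: False}

for age, expected in cases.items():
    assert is_valid_age(age) == expected, f"Boundary case failed for age {age}"

print("All boundary value cases passed.")
```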
A. A reader
B. A recorder
C. The developer
D. A moderator
A. Suggesting a release schedule.
B. Presenting the quality status of the application-under-test to decision makers.
C. Making the decision whether to release the system-under-test into production.
D. Playing user's advocate by questioning design decisions.
A. Major control functions can be tested early.
B. The need for stub programs is eliminated.
C. Test conditions are easier to create.
D. Observation of test results is easier.
A. Inability to read data off the app fields.
B. Inability to automate localization tests that require configuration changes at the device level.
C. Inability to compare images.
D. Inability to automate clicking app objects and make dropdown selections.
A. The procedure is what must happen and the process is the step-by-step of how it will happen
B. The procedure is who executes the necessary testing and the process is when it will be executed
C. The process is who executes the necessary testing and the procedure is when it will be executed
D. The process is what must happen and the procedure is the step-by-step of how it will happen
A. Configuration Control Board
B. Software Development Team
C. Change Control Board
D. Customer Liaison
A. Defects are deviations from requirements, while Change Requests are suggestions how to alter requirements.
B. Defects are logged by Test Engineers, while Change Requests are logged by Business Analysts.
C. They are handled by different logging systems.
D. Defects always have a risk associated with their resolution, while Change Requests do not.
A. Beta Testing
B. Alpha Testing
C. Maintenance Testing
D. System Testing
E. Integration Testing
A. True
B. False
A. Delivered on time
B. Meets requirements and expectations
C. Delivered within budget
D. Completely bug-free
A. Black Box Technique
B. Glass Box Technique
C. White Box Technique
A. Usability testing
B. Security testing
C. Functional testing
D. Performance testing
A. Subversion
B. Git
C. CVS
D. Visual Studio
A. Testing the same module after the bug is fixed
B. Testing the affected modules after the bug is fixed
C. Testing the defect again without it being fixed
A. Deal only with interfaces.
B. Focus on system's behavior under stress.
C. Are not going to be repeated.
D. Are well-documented and easy to execute manually.
A. Usability Testing
B. Alpha Testing
C. Beta Testing
D. User Acceptance Testing
A. Taking only within-range values for testing
B. Taking only boundary values for testing
C. Taking boundary values and middle values for testing
A. Defects by the number of re-opens (rework).
B. Defects by status and severity.
C. Defects by developer assignment.
D. Defects by priority.
A. Design diagram
B. List of features to be tested and not tested
C. Assumptions/Preconditions
D. Introduction
A. Establish test engineer's personal accountability
B. Provide test data
C. Ensure coverage
D. Document test results
A. Imitate short bursts of concurrent users while measuring memory use.
B. Exercise the functionality of the system-under-test repeatedly over time while measuring memory use.
C. Focus on the database fill-up and specifically on Log tables.
D. Measure response time on the client and on the server.
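As an aside on the approach described in option B above (exercising functionality repeatedly over time while measuring memory use), here is a minimal sketch in Python using the standard library's tracemalloc; the process_order function is a hypothetical stand-in for the real operation under test:

```python
import tracemalloc

def process_order(order_id):
    """Hypothetical operation under test; a real test would drive the application."""
    return {"id": order_id, "status": "processed"}

tracemalloc.start()
samples = []
for i in range(10_000):
    process_order(i)
    if i % 1_000 == 0:
        # Record currently allocated memory at regular intervals.
        current_bytes, _peak = tracemalloc.get_traced_memory()
        samples.append(current_bytes)

tracemalloc.stop()

# A steadily growing series of samples suggests a leak; a flat series does not.
print("Memory samples (bytes):", samples)
```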
A. Design
B. Maintenance
C. Performance
D. Requirements
E. Coding
A. behavioral
B. grey-box
C. black-box
D. white-box
A. Initiate code inspections to identify defects in code
B. Encourage the programmers to try harder to make fewer defects
C. Increase the type and extent of testing to remove defects before production
D. Classify and count the defects so that you can identify the defect of highest frequency and eliminate the root cause of the defect
A. V-Model
B. Linear Model
C. Spiral Model
D. Waterfall Model
A. This is an iterative and incremental software development process whose aim depends on the features.
B. None of the above
C. This is an iterative and incremental approach that emphasizes continuous user involvement.
D. This is a technique which has short iterations where new test cases covering the desired improvement or new functionality are written first.
A. True
B. False
A. Test Plan
B. Specification
C. Requirements Document
D. Test Case
A. Module
B. Trace node
C. Test bed
D. Batch
A. Robotium
B. QTP
C. Selenium
D. Jmeter
A. Design, Requirements, Implementation, Verification, Maintenance
B. Requirements, Design, Verification, Implementation, Maintenance
C. Requirements, Design, Implementation, Verification, Maintenance
D. Maintenance, Requirements, Design, Implementation, Verification
A. Project Manager
B. Customer
C. Developers
D. Software Engineer
A. Documentation of test scripts, metric tracking, load testing
B. Boundary checking, Ad-hoc testing, installation combination
C. Testing new features and functions, data integrity checks, regression testing
A. QA is part of the software testing process
B. Software testing is part of the QA process
C. Software testing and QA are both the same thing
D. Software testing and QA are two different processes
A. True
B. False
A. configuration control
B. change request
C. specification
D. software module
A. Software Quality Assurance
B. Planning
C. Testing
D. Software Development
A. False
B. True
A. ISO 9002
B. ISO 9000
C. ISO 9003
D. ISO 9001
A. You cannot compare numbers in such a set
B. >
C. =
D. <
A. True
B. False
A. Reliability
B. Usability
C. Cost
D. Correctness
A. Test documentation
B. Procedure testing
C. Structural testing
D. Functional testing
A. User Acceptance
B. Stress
C. Usability
D. Survivability and recovery
A. All of these are quality models
B. ISO 9000
C. ISO/IEC 15504
D. CMMI
A. Integration Testing
B. White (Glass) Box testing
C. Black Box Testing
D. Gray Box Testing
A. True
B. False
A. True
B. False
A. Customer acceptance
B. Boundary conditions
C. Negative testing
D. Error handling
A. Stability and observability
B. Observability, simplicity, and stability
C. Observability
D. Simplicity
E. Stability
A. Black box testing
B. Unit testing
C. None of these
D. White box testing
A. Documentation
B. Test
C. Reviews and audits
D. Budget
A. None of these
B. Capability Maturity Model Integration
C. Connection Maturity Management Initiation
D. Capacity Maturation Modular Institute
A. To make recommendations
B. To find problems and see what's missing
C. To fix the software
D. To gather preliminary information
A. No
B. Yes
A. SQL injection
B. Equivalence partitioning
C. Alpha tests
D. Eye tracking
A. alpha and beta
B. Positive and negative
A. Both errors in design and errors in implementation
B. Errors in design
C. Errors in accuracy
D. Errors in implementation
E. Errors in operation
A. False
B. True
A. Support Change Management
B. Software Change Manager
C. Supply Chain Management
D. Software Configuration Management
A. Bottom Up Approach
B. All of the above
C. Big Bang Approach
D. Top Down Approach
A. Functional Testing
B. System Testing
C. Regression Testing
D. Acceptance Testing
A. Business Analysts
B. Users
C. Test Engineers
D. Developers
A. Schedule
B. All of these
C. Approach
D. Scope
E. Resources
A. Number of unknown software bugs.
B. Number of words in the test plan.
C. Number of code lines executed in the software being tested.
D. Number of test cases that passed vs failed.