Manual Testing Interview Q&A

Basic Manual Testing Questions

  1. What is Software Testing?

    • Software testing is the process of evaluating a software application to verify that it meets the specified requirements and to identify defects.
  2. What is the difference between verification and validation?

    • Verification checks that the product is being built correctly, i.e., that it conforms to its specifications ("Are we building the product right?"), while validation checks that the right product is being built, i.e., that it meets the user's actual needs ("Are we building the right product?").
  3. What are the different levels of testing?

    • Unit Testing, Integration Testing, System Testing, and Acceptance Testing.
  4. What is the difference between functional and non-functional testing?

    • Functional testing focuses on verifying the software’s functionality, while non-functional testing checks aspects like performance, usability, and reliability.
  5. What is a test case?

    • A test case is a set of conditions or variables used to determine whether a software application behaves as expected.
  6. What is a test plan?

    • A test plan is a document outlining the testing scope, objectives, resources, schedule, and activities to be carried out during testing.
  7. What is regression testing?

    • Regression testing involves re-running tests on a modified application to ensure that changes have not adversely affected existing functionality.
  8. What is a bug or defect?

    • A bug is an error, flaw, or failure in software that causes it to produce incorrect or unexpected results.
  9. What is the difference between severity and priority?

    • Severity refers to the impact of a defect on the application, while priority indicates the urgency of fixing it. For example, a crash in a rarely used feature is high severity but may be low priority, whereas a spelling mistake in the company name on the home page is low severity but high priority.
  10. What is exploratory testing?

    • Exploratory testing is an informal testing approach where the tester actively explores the application without predefined test cases.
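
The test case concept from this section can be made concrete with a small structured sketch in Python. The field names and the login scenario below are illustrative, not taken from any specific test-management template:

```python
# A minimal representation of a manual test case (fields are illustrative;
# real templates vary by team and test-management tool).
test_case = {
    "id": "TC-001",
    "title": "Login with valid credentials",
    "preconditions": ["User account exists", "Login page is reachable"],
    "steps": [
        "Open the login page",
        "Enter a valid username and password",
        "Click the Login button",
    ],
    "expected_result": "User is redirected to the dashboard",
    "priority": "High",
}

def is_complete(tc: dict) -> bool:
    """A test case is executable only if every key field is filled in."""
    required = ("id", "title", "steps", "expected_result")
    return all(tc.get(field) for field in required)

print(is_complete(test_case))  # → True: this test case is ready to execute
```

Reviewing test cases for completeness like this (every step and an expected result present) is a common quality gate before test execution begins.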

Intermediate Manual Testing Questions

  1. What is boundary value analysis?

    • Boundary value analysis is a test design technique that focuses on the values at the boundaries of input domains.
  2. What is equivalence partitioning?

    • Equivalence partitioning is a technique where input data is divided into partitions that are treated as equivalent for testing purposes.
  3. What is a test environment?

    • A test environment is the setup of hardware, software, and network configurations needed to execute test cases.
  4. What is smoke testing?

    • Smoke testing is a preliminary test to check whether the basic functionalities of an application are working before conducting detailed testing.
  5. What is sanity testing?

    • Sanity testing is a subset of regression testing performed to verify that a specific section of the application is still functioning after changes.
  6. What is the difference between alpha and beta testing?

    • Alpha testing is done by developers or internal testers, while beta testing is conducted by end-users or external testers before the final release.
  7. What is a test script?

    • A test script is a set of instructions for executing a test case, often used in automated testing but can be manual as well.
  8. What is a defect life cycle?

    • The defect life cycle (or bug life cycle) is the progression of a defect through defined states from identification to resolution, typically: New → Assigned → Open → Fixed → Retested → Closed, with a Reopened state if the fix fails retesting.
  9. What is the difference between system testing and acceptance testing?

    • System testing validates the complete and integrated application, while acceptance testing checks if the software meets business requirements.
  10. What is test data?

    • Test data is the data that is used to execute test cases. It can be static or dynamic, depending on the test scenario.
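
The boundary value analysis and equivalence partitioning techniques described in this section can be sketched together in a few lines of Python. The "age" field and its 18–60 valid range are hypothetical examples chosen for illustration:

```python
# Hypothetical requirement: an "age" field accepts values 18-60 inclusive.
VALID_MIN, VALID_MAX = 18, 60

def equivalence_partition(age: int) -> str:
    """Classify an input into one of three equivalence partitions."""
    if age < VALID_MIN:
        return "invalid-low"
    if age > VALID_MAX:
        return "invalid-high"
    return "valid"

def boundary_values(low: int, high: int) -> list:
    """Boundary value analysis: test just below, on, and just above
    each boundary of the valid range."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

# Under equivalence partitioning, one representative value per partition
# is enough; boundary value analysis then adds the edge cases.
representatives = {equivalence_partition(a) for a in (10, 35, 70)}
print(boundary_values(VALID_MIN, VALID_MAX))  # → [17, 18, 19, 59, 60, 61]
print(sorted(representatives))
```

Together the two techniques reduce a huge input space (every possible age) to a small, high-value set of test inputs.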

Advanced Manual Testing Questions

  1. What is a test harness?

    • A test harness is a collection of software and test data used to test a program by running it under different conditions.
  2. What is the purpose of a test summary report?

    • A test summary report provides an overview of testing activities, including results, metrics, defects, and recommendations.
  3. What is end-to-end testing?

    • End-to-end testing involves testing the entire workflow of an application, from start to finish, to ensure it behaves as expected.
  4. What is the difference between white-box testing and black-box testing?

    • White-box testing involves testing internal structures or workings of an application, while black-box testing focuses on testing external functionalities without knowledge of internal code.
  5. What is the purpose of a traceability matrix?

    • A traceability matrix ensures that all requirements are covered by test cases and helps in tracking defects back to their requirements.
  6. What is the difference between retesting and regression testing?

    • Retesting is testing a specific defect after it has been fixed, while regression testing checks if recent changes have affected other parts of the application.
  7. What is the purpose of test closure activities?

    • Test closure activities involve finalizing and archiving test results, documents, and any related artifacts after testing is complete.
  8. What is risk-based testing?

    • Risk-based testing prioritizes testing activities based on the risk of failure and the impact of defects on the application.
  9. What are entry and exit criteria in testing?

    • Entry criteria are the conditions that must be met before testing begins, while exit criteria define the conditions for completing a testing phase.
  10. What is usability testing?

    • Usability testing assesses how easily users can interact with a software application, focusing on user experience.

Scenario-Based Questions

  1. How would you approach testing a new feature in an existing application?

    • Understand the feature requirements, create test cases, conduct functional testing, perform regression testing, and validate against acceptance criteria.
  2. How would you handle a situation where a high-priority defect is found close to the release date?

    • Evaluate the impact, communicate with stakeholders, prioritize the defect for fixing, retest, and assess if a release is still feasible.
  3. Describe how you would perform exploratory testing on a login page.

    • Explore different combinations of valid/invalid credentials, boundary values for input fields, error messages, and session handling.
  4. What would you do if a defect is rejected by the development team?

    • Reproduce the defect, provide detailed evidence, and discuss it with the development team to reach a resolution.
  5. How would you prioritize test cases when you have limited time?

    • Focus on critical functionalities, high-risk areas, and test cases that cover the most frequently used parts of the application.
  6. How would you manage test data for a complex application?

    • Use data management tools, create reusable data sets, ensure data consistency, and manage sensitive data securely.
  7. How would you approach testing a mobile application?

    • Test across multiple devices, screen sizes, operating systems, perform compatibility, usability, and performance testing.
  8. How would you ensure the quality of a software product under tight deadlines?

    • Focus on critical test cases, automate repetitive tasks, perform risk-based testing, and maintain clear communication with the team.
  9. What steps would you take if you discover a major defect during user acceptance testing (UAT)?

    • Immediately report the defect, assess its impact, collaborate with the development team for a fix, and retest.
  10. How do you approach testing a complex integration scenario?

    • Understand the interfaces, create detailed integration test cases, simulate real-world scenarios, and ensure data flow integrity.

Tool and Technology Questions

  1. What tools do you use for manual testing?

    • Common tools include JIRA for bug tracking, TestRail for test management, and Postman for API testing.
  2. How do you manage test cases and test execution?

    • Test cases and execution are managed using tools like TestRail, Zephyr, or Excel sheets, depending on the project requirements.
  3. What is the role of SQL in manual testing?

    • SQL is used to query databases for verifying data integrity, checking database operations, and validating test cases involving data.
  4. What is API testing, and how do you perform it?

    • API testing involves testing application programming interfaces to ensure they function correctly. It can be done using tools like Postman or SoapUI.
  5. How do you perform cross-browser testing?

    • Cross-browser testing involves testing the application across different browsers and versions using tools like BrowserStack or manually.
  6. How do you perform performance testing manually?

    • Most performance testing is automated, but manual checks can include observing page response times (e.g., with browser developer tools), monitoring resource usage, and noting degradation while several testers exercise the application simultaneously under controlled conditions.
  7. How do you handle configuration testing?

    • Configuration testing involves testing the application on different hardware and software configurations to ensure compatibility.
  8. What is the role of a version control system in testing?

    • A version control system helps manage changes to test cases, test scripts, and documentation, ensuring consistency across versions.
  9. What is the significance of continuous integration (CI) in manual testing?

    • CI helps integrate and test code changes frequently, allowing for early detection of defects, even in a manual testing environment.
  10. What is the importance of test automation in manual testing?

    • Test automation complements manual testing by handling repetitive tasks, enabling faster test execution, and allowing testers to focus on complex scenarios.
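
One common use of SQL in manual testing (Q3 in this section) is verifying that what the UI reports matches what is actually stored in the database. A self-contained sketch using SQLite is below; the `users` table and its rows are invented for illustration:

```python
import sqlite3

# In-memory database standing in for the application's real backend;
# the users table and rows are invented for this illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, active INTEGER)")
conn.executemany(
    "INSERT INTO users (email, active) VALUES (?, ?)",
    [("a@example.com", 1), ("b@example.com", 1), ("c@example.com", 0)],
)

# Suppose the UI displays "2 active users" -- verify that against the database.
ui_reported_active = 2
(db_active,) = conn.execute(
    "SELECT COUNT(*) FROM users WHERE active = 1"
).fetchone()

print(db_active == ui_reported_active)  # → True: UI and database agree
conn.close()
```

In practice the tester runs the same kind of `SELECT` against the test environment's database rather than an in-memory copy, but the verification logic is identical.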
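
API testing (Q4 in this section) is usually done in Postman or SoapUI, but the checks themselves, such as status code, response shape, and field values, can be expressed in a few lines of Python. The endpoint, fields, and payload below are hypothetical, so the response is stubbed rather than fetched over the network:

```python
import json

def validate_user_response(status_code: int, body: str) -> list:
    """Return a list of validation failures for a hypothetical GET /users/{id}."""
    failures = []
    if status_code != 200:
        failures.append(f"expected status 200, got {status_code}")
    data = json.loads(body)
    for field in ("id", "email", "active"):  # fields assumed by this sketch
        if field not in data:
            failures.append(f"missing field: {field}")
    if "email" in data and "@" not in data["email"]:
        failures.append("email is not well-formed")
    return failures

# Stubbed response standing in for a real HTTP call.
stub = json.dumps({"id": 7, "email": "a@example.com", "active": True})
print(validate_user_response(200, stub))  # → [] (no failures)
```

The same assertions (status, schema, field-level rules) are what a Postman test script or SoapUI assertion would encode.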

Behavioral and Experience-Based Questions

  1. How do you handle stressful situations during testing?

    • Prioritize tasks, stay organized, communicate effectively, and focus on finding solutions rather than problems.
  2. Describe a time when you found a critical bug just before the release. How did you handle it?

    • Provide an example from your experience, highlighting your problem-solving skills, communication, and impact on the project.
  3. How do you keep up with new testing methodologies and tools?

    • Follow industry blogs and forums, take online courses and certifications (e.g., ISTQB), attend webinars and conferences, participate in testing communities, and experiment with new tools on practice projects.