Testing/product evaluations
Testing/product evaluations are structured activities that verify and validate deliverables by inspecting, measuring, and exercising them against defined requirements and acceptance criteria. They provide objective evidence of quality, reveal defects early, and support acceptance decisions.
Key Points
- Combines inspections, measurements, and execution-based tests to confirm conformance and fitness for use.
- Uses both static techniques (reviews, walkthroughs) and dynamic techniques (running the product under expected and edge conditions).
- Planned and performed iteratively; scope and depth are guided by risk, complexity, and lifecycle stage.
- Built on clear, measurable acceptance criteria and quality attributes with defined entry and exit criteria.
- Requires representative environments and data and maintains traceability from requirements to tests and results.
- Captures results and defects, tracks metrics such as pass rate, defect density, and coverage, and feeds improvements back into the plan (see the metrics sketch after this list).
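As a concrete illustration of those metrics, here is a minimal Python sketch; the record fields, requirement IDs, and size measure are hypothetical, and real test-management tools compute these for you:

```python
# Minimal sketch of the metrics named above; record fields are illustrative.
test_results = [
    {"id": "TC-01", "requirement": "REQ-1", "status": "pass"},
    {"id": "TC-02", "requirement": "REQ-2", "status": "fail"},
    {"id": "TC-03", "requirement": "REQ-2", "status": "pass"},
]
requirements = {"REQ-1", "REQ-2", "REQ-3"}
defects = 4        # confirmed defects logged against the deliverable
size_kloc = 2.5    # size in thousand lines of code (any size measure works)

executed = [t for t in test_results if t["status"] in ("pass", "fail")]
pass_rate = sum(t["status"] == "pass" for t in executed) / len(executed)
defect_density = defects / size_kloc                        # defects per KLOC
covered = {t["requirement"] for t in test_results}
coverage = len(covered & requirements) / len(requirements)  # requirements coverage

print(f"pass rate {pass_rate:.0%}, defect density {defect_density:.1f}/KLOC, "
      f"coverage {coverage:.0%}")
```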
Quality Objective
Provide reliable, objective evidence that the deliverable meets requirements, satisfies stakeholder expectations, and complies with applicable standards or regulations. Reduce the cost of quality by detecting issues early and confirming that the solution is usable, safe, and capable under real-world conditions.
Method Steps
- Define a test strategy aligned with objectives, risks, and compliance obligations.
- Identify the test basis (requirements, user stories, designs) and establish acceptance criteria and quality attributes.
- Plan environments, tools, configurations, and prepare representative test data.
- Design test cases and checklists; choose a sampling approach and prioritize tests using risk-based criteria (a prioritization sketch follows this list).
- Set entry and exit criteria; prepare scripts, charters, and expected results.
- Execute inspections and tests; capture evidence; log defects with severity, steps to reproduce, and ownership.
- Analyze results and perform root cause analysis; prioritize fixes and raise change requests when needed.
- Re-test and run regression tests; verify acceptance; obtain approvals and communicate outcomes; update lessons learned.
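One common way to apply risk-based prioritization is a probability-times-impact score. The sketch below assumes a 1-5 scale for both factors; the test IDs and scores are made up for illustration:

```python
# Risk-based prioritization sketch: score = probability x impact on a 1-5 scale.
tests = [
    {"id": "TC-PAY-01", "probability": 4, "impact": 5},  # payment handoff
    {"id": "TC-SEC-03", "probability": 3, "impact": 5},  # access control
    {"id": "TC-UI-07",  "probability": 2, "impact": 2},  # cosmetic layout
]

for t in tests:
    t["risk_score"] = t["probability"] * t["impact"]

# Execute the highest-risk tests first; cut from the bottom if time runs out.
for t in sorted(tests, key=lambda t: t["risk_score"], reverse=True):
    print(t["id"], t["risk_score"])
```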
Inputs Needed
- Approved requirements, user stories, acceptance criteria, and quality attributes.
- Designs, specifications, models, prototypes, and the deliverables to be evaluated.
- Test strategy/plan, organizational policies, standards, and regulatory criteria.
- Test environment, tools, configurations, and prepared test data sets.
- Risk register, prioritization rules, and configuration baselines.
- Stakeholder roles and availability for evaluations and sign-off.
Outputs Produced
- Test results and evidence (logs, measurements, screenshots, checklists).
- Defect and issue log with status, severity, ownership, and resolution.
- Traceability updates linking requirements to test cases and outcomes (see the traceability sketch after this list).
- Quality metrics and reports, including coverage, pass rate, and trends.
- Acceptance records or sign-offs, change requests, and updated backlog or plans.
- Lessons learned and recommended process improvements.
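A traceability record can be as simple as a requirement-to-tests mapping that is queried for coverage gaps and open failures; a minimal sketch, with hypothetical IDs:

```python
# Traceability sketch: each requirement maps to its test cases and latest results.
trace = {
    "REQ-1": {"TC-01": "pass"},
    "REQ-2": {"TC-02": "fail", "TC-03": "pass"},
    "REQ-3": {},  # no tests yet -> a coverage gap to report
}

for req, cases in trace.items():
    if not cases:
        print(f"{req}: NOT COVERED")
    elif any(result == "fail" for result in cases.values()):
        failing = [case for case, result in cases.items() if result == "fail"]
        print(f"{req}: covered, open failures {failing}")
    else:
        print(f"{req}: covered, all passing")
```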
Acceptance/Control Rules
- Entry criteria: approved test basis, correct deliverable version, ready environment, prepared data, and available stakeholders.
- Exit criteria: defined pass-rate thresholds, zero unresolved high-severity defects, coverage targets met, and compliance checks passed (see the gate sketch after this list).
- Sampling and tolerances: specify sample size, acceptable quality level, and confidence levels when full testing is impractical (a sample-size sketch follows this list).
- Independence: clarify who performs evaluations and who authorizes acceptance to avoid conflicts of interest.
- Evidence and auditability: capture reproducible results with time stamps and configuration identifiers.
- Defect handling: define triage rules, prioritization, and retest and regression requirements that must be met before closure.
- Change control: define when failures trigger change requests versus standard defect fixes.
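Exit criteria are easiest to enforce when written as an explicit, repeatable gate. The sketch below uses assumed thresholds (95% pass rate, 90% coverage, zero open high-severity defects); these are illustrative, not prescribed values:

```python
# Exit-gate sketch; the thresholds are assumptions, not prescribed values.
def exit_criteria_met(pass_rate, coverage, open_high_severity, compliance_ok):
    checks = {
        "pass rate >= 95%": pass_rate >= 0.95,
        "coverage >= 90%": coverage >= 0.90,
        "no open high-severity defects": open_high_severity == 0,
        "compliance checks passed": compliance_ok,
    }
    for name, ok in checks.items():
        print(("PASS" if ok else "FAIL"), name)
    return all(checks.values())

# One open high-severity defect blocks exit despite strong pass rate and coverage.
print("exit approved:",
      exit_criteria_met(0.97, 0.92, open_high_severity=1, compliance_ok=True))
```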
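For sampling, a standard zero-failure calculation gives the number of items n to inspect so that, if none fail, you can claim the defect rate is at most p with confidence C: n >= ln(1 - C) / ln(1 - p). A sketch, with illustrative tolerances:

```python
import math

# Zero-failure acceptance sampling: inspect n items; if none are defective,
# claim the defect rate is at most max_defect_rate at the given confidence.
# Derivation: (1 - p)^n <= 1 - C  =>  n >= ln(1 - C) / ln(1 - p).
def sample_size(max_defect_rate, confidence=0.95):
    return math.ceil(math.log(1 - confidence) / math.log(1 - max_defect_rate))

print(sample_size(0.05))        # 59 items: "at most 5% defective" at 95% confidence
print(sample_size(0.01, 0.90))  # 230 items: "at most 1% defective" at 90% confidence
```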
Example
A project introduces a new customer onboarding process. The team conducts a document review of procedures (static), simulates typical and peak-volume scenarios (dynamic), times each step, and checks compliance with policy. Defects found in handoffs are logged and corrected. After re-testing and meeting exit criteria, the sponsor signs off on the process for pilot rollout.
Pitfalls
- Vague or untestable acceptance criteria leading to disputes and rework.
- Testing late in the lifecycle, increasing the cost and impact of defects.
- Non-representative environments or data causing false positives or missed issues.
- Focusing only on happy paths and ignoring edge cases and nonfunctional attributes.
- Weak traceability and coverage, making impacts of change hard to assess.
- Insufficient independence or stakeholder involvement, reducing objectivity.
- Not updating and rerunning regression tests after changes.
PMP Example Question
During planning, the team identifies several high-risk quality areas but has limited time for verification. What should the project manager do to ensure objective evidence of quality while managing constraints?
- A. Reduce the number of tests and rely on developer self-approval to save time.
- B. Prioritize tests using risk and acceptance criteria, define entry and exit criteria, and run iterative evaluations.
- C. Defer all testing until a single end-of-project customer acceptance cycle.
- D. Execute only happy-path functional tests to meet the schedule.
Correct Answer: B — Prioritize tests using risk and acceptance criteria, define entry and exit criteria, and run iterative evaluations.
Explanation: Risk-based prioritization with clear criteria focuses limited effort on what matters most and provides objective evidence. The other options defer or weaken evaluation and increase the likelihood of undetected defects.