Using Key Performance Indicators as Quality Measurements

All successful software organizations implement measurement as part of their day-to-day management and technical activities. Measurement provides the objective information they need to make informed decisions that positively impact their business and engineering performance. In successful software organizations, measurement-derived information is treated as an important resource and is made available to decision makers throughout all levels of management.

Key Performance Indicators (KPIs) are a way to measure the quality of a software product or activity.

Slip through – Phase Containment Effectiveness
This KPI considers when the problems were created/induced versus when they were found (it is much cheaper to solve a requirements problem in the design phase than in NIT testing).

Definition: Measures supplier’s ability to capture faults before making deliveries to I&V.

    • Assuming that supplier conducts Function Testing (FT);

    • Supplier or external organization may conduct I&V (Integration and Verification) Testing.

Based on Trouble Report (TR) slippage between FT and I&V test phases.

   • Assuming that TRs are analyzed to identify ‘true’ slipped TRs;

   • If TRs are not analyzed, then 0% may not be the expected best result due to the different scope in FT and I&V testing.

Result format: Reported as a percentage; 0% is the lowest (best) result.

Formula: [1 – FT Faults / All Faults] x 100%

Faults are classified as FT or I&V based on testing phase, not who does the testing. All parties conducting the testing need to capture the Function Test and I&V Faults, based on assignment TR Handling guidelines/tools.

Frequency: Monthly from start to end of I&V (cumulative data collection); or at each ‘drop’ on completion of the respective I&V.

TRs that do not relate to ‘genuine’ faults, i.e. cancelled, postponed, duplicated, and rejected TRs, are to be excluded. All ‘minor’ faults, i.e. faults that do not affect the main operation of the system, are also to be excluded.
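
As an illustration, a minimal sketch of the slip-through calculation is shown below. The TR fields (phase, status, minor) and the example values are illustrative assumptions, not taken from any specific TR-handling tool.

    # Slip-through (Phase Containment Effectiveness) sketch.
    # Assumes each TR is a dict with illustrative fields: 'phase' ("FT" or "I&V"),
    # 'status', and 'minor' (True for faults not affecting main system operation).

    EXCLUDED_STATUSES = {"cancelled", "postponed", "duplicated", "rejected"}

    def slip_through(trs):
        """Return [1 - FT Faults / All Faults] x 100, i.e. the share of genuine
        faults that slipped past Function Test into I&V."""
        genuine = [tr for tr in trs
                   if tr["status"] not in EXCLUDED_STATUSES and not tr["minor"]]
        if not genuine:
            return 0.0                      # no genuine faults, nothing slipped
        ft_faults = sum(1 for tr in genuine if tr["phase"] == "FT")
        return (1 - ft_faults / len(genuine)) * 100

    trs = [
        {"phase": "FT",  "status": "closed",     "minor": False},
        {"phase": "FT",  "status": "closed",     "minor": False},
        {"phase": "I&V", "status": "closed",     "minor": False},
        {"phase": "I&V", "status": "duplicated", "minor": False},  # excluded
    ]
    print(f"Slip-through: {slip_through(trs):.1f}%")   # 1 of 3 genuine faults -> 33.3%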

TR closure rate

Definition: Measures supplier’s ability to answer TRs within the specified goals. It is based on deviation between the actual TR answering times and TR goals, set by the Assignment Owner.

Result format: Reported as lost days, averaged across TR priorities. The lowest (best) result is 0, indicating that TRs are answered within the goals.

Formula: NLD / (OTR + NTR)

NLD = number of lost days within the time increment for all open and new TRs

OTR = number of open TRs at beginning of the time increment

NTR = number of new TRs during time increment

The TR handling time starts at the point at which the TR enters the supplier organization, and ends at the point at which the TR is answered. The time increment is typically the 12 months preceding the reporting date.

Frequency: Measurement is done on a monthly basis.
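
A minimal sketch of the lost-days calculation is shown below, assuming each TR record carries an answering goal and an actual answering time; the field names and figures are illustrative, not taken from any specific TR-handling tool.

    # TR closure rate sketch: average lost days per TR over the time increment.
    # Assumes each TR record carries illustrative fields: 'goal_days' (answering
    # goal set by the Assignment Owner) and 'actual_days' (actual answering time
    # so far); TRs still open simply keep accumulating lost days.

    def lost_days(tr):
        """Days by which this TR exceeds its answering goal (never negative)."""
        return max(0, tr["actual_days"] - tr["goal_days"])

    def tr_closure_rate(open_trs, new_trs):
        """NLD / (OTR + NTR): lost days averaged over open and new TRs."""
        all_trs = open_trs + new_trs
        if not all_trs:
            return 0.0
        nld = sum(lost_days(tr) for tr in all_trs)
        return nld / len(all_trs)

    open_trs = [{"goal_days": 10, "actual_days": 14}]   # 4 lost days
    new_trs  = [{"goal_days": 5,  "actual_days": 3},    # within goal
                {"goal_days": 5,  "actual_days": 7}]    # 2 lost days
    print(f"TR closure rate: {tr_closure_rate(open_trs, new_trs):.1f} lost days/TR")  # 2.0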

Requirements/test cases traceability
100% mapping between test cases and requirements (at least one test case covering each requirement). Requirements traceability matrix to be used.

Design vs requirements traceability
(Requirement IDs mapped to functions). Requirements traceability matrix to be used.

Design vs development/code traceability (a report from dev/SA)
Requirements traceability matrix to be used.
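
One possible way to check traceability coverage, using requirements-to-test-case coverage as the example, is sketched below. The matrix layout (requirement ID mapped to covering test cases) and the IDs are illustrative assumptions, not a prescribed tool format.

    # Requirements/test-case traceability sketch.
    # Assumes the traceability matrix is available as a plain mapping from
    # requirement ID to the test cases covering it (layout is illustrative).

    def traceability_coverage(matrix):
        """Return coverage percentage and the list of uncovered requirements."""
        uncovered = [req for req, tests in matrix.items() if not tests]
        covered = len(matrix) - len(uncovered)
        coverage = 100.0 * covered / len(matrix) if matrix else 100.0
        return coverage, uncovered

    matrix = {
        "REQ-001": ["TC-010", "TC-011"],
        "REQ-002": ["TC-020"],
        "REQ-003": [],                      # not yet covered by any test case
    }
    coverage, uncovered = traceability_coverage(matrix)
    print(f"Coverage: {coverage:.0f}%, uncovered: {uncovered}")   # 67%, ['REQ-003']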

Assignment content adherence: requirements agreed/requirements implemented.
Definition: Measures supplier’s ability to deliver the full assignment scope by the end of the assignment. It is based on the percentage of completed functionality/requirements.

Result format: Reported as a percentage, 100% is the highest result.

Formula: (No. of Compl. Req. / No. of Commit. Req.) x 100

Requirements are the smallest measurable ‘packages’ of functionality, e.g. features. The Number of Completed Requirements counts the packages of functionality delivered during the entire assignment. The Total Number of Committed Requirements counts the packages of functionality originally planned for the assignment; it may be revised based on Change Request guidelines.

Frequency: Measured and reported at the end of an assignment.

KPI measurement has to be based on requirements, as they are the smallest and most easily measurable objects of measurement. For example, content adherence for an assignment with 2 major deliveries should not be measured at the ‘delivery level’ but rather at the level of the core functionalities/requirements within each delivery.
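
A minimal sketch of the content adherence calculation at requirement level is shown below; the requirement IDs are illustrative.

    # Assignment content adherence sketch, measured at requirement level.
    # Requirement IDs and the completion status are illustrative.

    def content_adherence(committed, completed):
        """(No. of Completed Req. / No. of Committed Req.) x 100."""
        if not committed:
            return 100.0
        done = sum(1 for req in committed if req in completed)
        return 100.0 * done / len(committed)

    committed = {"REQ-001", "REQ-002", "REQ-003", "REQ-004"}
    completed = {"REQ-001", "REQ-002", "REQ-004"}
    print(f"Content adherence: {content_adherence(committed, completed):.0f}%")   # 75%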

Coding style: code review should be used.

Schedule adherence

100 x [1 – ABS(Actual – Planned) / Planned]

Definition: Measures timeliness and ‘quality’ of deliveries relative to baseline schedule and acceptance criteria. Based on percentage deviation between planned and actual lead times.

Result format: Reported as a percentage, 100% is the highest result.

Formula: [1 – ABS (ALT – PLT) / PLT] x 100

PLT = Planned Finish Date – Planned Start Date (planned lead time)

ALT = Actual Finish Date – Planned Start Date (actual lead time, measured from the planned start date)
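
A minimal sketch of the schedule adherence calculation from the planned and actual dates is shown below; the dates are illustrative.

    # Schedule adherence sketch: [1 - ABS(ALT - PLT) / PLT] x 100.
    # ALT is measured from the planned start date, as in the definition above.

    from datetime import date

    def schedule_adherence(planned_start, planned_finish, actual_finish):
        plt = (planned_finish - planned_start).days      # planned lead time
        alt = (actual_finish - planned_start).days       # actual lead time
        return (1 - abs(alt - plt) / plt) * 100

    kpi = schedule_adherence(date(2024, 1, 1), date(2024, 3, 1), date(2024, 3, 16))
    print(f"Schedule adherence: {kpi:.1f}%")   # 15 days late on a 60-day plan -> 75.0%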

Cost adherence 

100 x [1 – ABS(Actual – Planned) / Planned]

Definition: Measures supplier’s ability to deliver assignment scope within the agreed/committed cost, including man-hour, lab and travel costs. Based on the deviation between committed (baseline) and expected (actual + forecast) costs at assignment/deliverable level.

Result format: Reported as a percentage, 100% is the highest result.

Formula: [1 – (ECost – CCost) / CCost] x 100%

Committed cost is the baseline at assignment start. Contingency value (buffer) should be specified separately, if known. Expected Cost to Complete is (actual + forecast) each month:

• Actual costs incurred so far;

• Forecast of all remaining Costs to Complete;

• Forecast of contingency sums (optional).

Delivering an assignment under the committed cost has a neutral impact on the KPI. The aim is to discourage unnecessary use of budgeted hours.

Frequency: Measured monthly at assignment level, or at end of each major deliverable.

Costs have to be defined at assignment level (mandatory), and optionally (if possible) at deliverable level, to enable precise change control.
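
A minimal sketch of the cost adherence calculation is shown below. The cap at 100% is one interpretation of the neutral-impact rule for under-cost delivery described above, and the figures are illustrative.

    # Cost adherence sketch: [1 - (ECost - CCost) / CCost] x 100.
    # Expected cost = actual costs to date + forecast of remaining cost to complete.

    def cost_adherence(committed, actual_to_date, forecast_to_complete):
        expected = actual_to_date + forecast_to_complete
        kpi = (1 - (expected - committed) / committed) * 100
        return min(kpi, 100.0)               # under-cost delivery treated as neutral

    print(f"Cost adherence: {cost_adherence(1000, 700, 400):.1f}%")   # 10% overrun -> 90.0%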

Effort adherence

100 x [1 – ABS(Actual – Planned) / Planned]
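
A minimal sketch of this generic adherence pattern applied to effort in person-hours is shown below; the figures are illustrative.

    # Generic adherence sketch for the 100 x [1 - ABS(Actual - Planned) / Planned]
    # pattern, applied here to effort in person-hours.

    def adherence(planned, actual):
        return (1 - abs(actual - planned) / planned) * 100

    print(f"Effort adherence: {adherence(planned=500, actual=550):.1f}%")   # 90.0%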