Releasable Product Increments?
Updated: Feb 28, 2022
One of the expectations when implementing Scrum is creating a potentially releasable increment in every iteration. This implies testing User Stories in the same Sprint in which they are developed. Scrum has changed our approach not only to development but also to testing – the process has become even more challenging. These are some issues typically experienced by team members practicing Scrum:
Acceptance Criteria are ill-defined or do not exist for some User Stories,
User Stories are too big or not well enough defined for a developer and a tester to complete the develop-test-fix-test cycle before the Sprint times out,
lack of knowledge sharing and communication between team members,
infrequent deployments create delays,
flawed processes and tooling.
If you are not producing versioned software increments in each Sprint, the problem is even worse, as you cannot test against a tagged version of the deployed code.
As mentioned, the problem also lies in the tools and/or processes. If you cannot detect and count defects, you have a long way to go in mastering the delivery of quality software. One of the metrics we use to measure a team's ability to deliver releasable increments is the Defect Leakage KPI.
First, we have to understand the context of Defect Leakage. In Scrum, the team forecasts the delivery of potentially releasable, valuable features in each iteration. That implies (near) bug-free code deployed to a presentable runtime environment. Preferably, we should catch most bugs in the same Sprint in which the User Story is developed. The number of defects detected after an iteration should be as small as possible. The Defect Leakage KPI expresses the relationship between the number of defects detected after an iteration and the number detected during it. The image below shows the Defect Leakage chart, where each bar represents the ratio of defects detected after a Sprint to defects detected during the Sprint. Ideally, that ratio is zero.
Next, we’ll see how to capture data for the Defect Leakage KPI in JIRA. As already explained, we need two pieces of information for each Sprint:
the number of defects detected during the Sprint and
the number of defects detected after the Sprint.
One approach to distinguishing these two kinds of defects is to use the ‘Labels’ field in JIRA to mark each defect as belonging to the appropriate group. For example, you can create two labels named after-sprint and during-sprint.
This is not the only way to classify defects. For example, you could configure JIRA to use custom fields with predefined values. When we have defects marked as described above, we can execute JQL queries to get the data for the Defect Leakage chart.
PROJECT = "AT" AND SPRINT = "AT09" AND ISSUETYPE = BUG AND LABELS = DURING-SPRINT
PROJECT = "AT" AND SPRINT = "AT09" AND ISSUETYPE = BUG AND LABELS = AFTER-SPRINT
Assuming the two JQL queries return:
8 for the number of defects caught during the Sprint and
3 for the number of defects caught after the Sprint.
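With those sample counts, the leakage ratio for the Sprint can be computed directly. A minimal sketch in Python – the helper name and the zero-defect convention are ours, not part of JIRA or Agile Tools:

```python
def defect_leakage(during: int, after: int) -> float:
    """Ratio of defects that leaked past the Sprint to defects caught in it."""
    if during == 0:
        # No defects caught during the Sprint: a clean Sprint gives 0,
        # while any leaked defect makes the ratio effectively unbounded.
        return 0.0 if after == 0 else float("inf")
    return after / during

# Sample numbers returned by the two JQL queries above:
ratio = defect_leakage(during=8, after=3)
print(f"Defect Leakage: {ratio:.1%}")  # 3 / 8 = 37.5%
```

A bar at 37.5% on the chart tells the team that for every eight bugs they caught in-Sprint, three escaped into later iterations.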
There are two ways to provide the data sets that feed the chart: one is manually editing the two input fields for the chart; the other is using the Agile Tools REST API.
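As a sketch of the automated path, the two counts can be pulled from JIRA’s REST search endpoint (`/rest/api/2/search` returns a `total` field; `maxResults=0` skips the issue bodies). The instance URL and token here are placeholders, and forwarding the result to the Agile Tools REST API would use that product’s own endpoints, which we do not reproduce here:

```python
import json
import urllib.parse
import urllib.request

JIRA_BASE = "https://jira.example.com"  # placeholder instance URL


def sprint_jql(project: str, sprint: str, label: str) -> str:
    """Build the JQL for one defect bucket, using the labels from the article."""
    return (f'PROJECT = "{project}" AND SPRINT = "{sprint}" '
            f'AND ISSUETYPE = BUG AND LABELS = {label}')


def count_issues(jql: str, token: str) -> int:
    """Return the number of issues matching a JQL query.

    maxResults=0 tells JIRA to return only the 'total' count,
    not the matching issues themselves.
    """
    query = urllib.parse.urlencode({"jql": jql, "maxResults": 0})
    req = urllib.request.Request(
        f"{JIRA_BASE}/rest/api/2/search?{query}",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["total"]


def sprint_defect_counts(project: str, sprint: str, token: str) -> tuple[int, int]:
    """Fetch (during-sprint, after-sprint) defect counts for one Sprint."""
    during = count_issues(sprint_jql(project, sprint, "DURING-SPRINT"), token)
    after = count_issues(sprint_jql(project, sprint, "AFTER-SPRINT"), token)
    return during, after
```

A scheduled job calling `sprint_defect_counts("AT", "AT09", token)` at the end of each Sprint keeps the chart current with no manual editing.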
The effort needed to provide this data must be minimal. With a pragmatic approach and a little discipline, teams can get an insight into their ability to deliver potentially releasable increments.
Defect Leakage is a measure in the Ability to Innovate area, one of the four Key Value Areas we will cover in Agile Tools. Only in the context of other metrics, and with the proper interpretation, will the numbers tell the story – as they will in our tool.
The real value, and your takeaway, is that by observing Defect Leakage you will change how you develop software. You will have to master writing and sizing User Stories correctly and working in short develop-test-fix-test cycles. The benefits are breaking down silos, developing more full-stack competencies, having more short conversations between the developer and a (dedicated) tester, and, most importantly, delivering value to your customers.
Update: In early 2022, a product reboot pivoted the product much more towards setting goals (OKRs), while keeping its roots in teamwork, metrics, and evidence-based management.