METRICS FOR ENGINEERING EXCELLENCE
In this webinar hosted by Aritha, Kiran Kashyap, Agile Change Agent at Lowe’s, and Vivek Ganesan, Co-founder & Agile/DevOps Coach at Ampyard, discussed ‘Metrics for Engineering Excellence’. Drawing on their experience working with organizations of different sizes, Kiran and Vivek explained how some commonly used metrics undermine a culture of engineering excellence.
Kiran and Vivek said they chose this topic because they wanted to share excerpts from their upcoming book, which has the working title ‘Metrics for Agile Tech Teams’. They also mentioned how they, along with Guru Thimmapuram, hope to help organizations set meaningful goals.
The speakers set the context by defining Engineering Excellence as ‘building a product that you can be proud of.’
The speakers outlined that they would discuss three pairs of metrics, each pair consisting of two metrics:
- Metric 1: Something that is generally considered useful but gives bad results
- Metric 2: Something that can be used in the place of Metric 1
Code Coverage: Code coverage is a popular metric in agile engineering teams. However, it is not a good measure of quality. Code coverage tells us whether tests exist; it does not tell us whether those tests are of good quality.
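A minimal illustration of this gap, with hypothetical names: the test below executes every line of the function, so a coverage tool would report 100%, yet it asserts nothing and can never fail.

```python
def apply_discount(price, percent):
    """Return price reduced by the given percentage."""
    return price - price * percent / 100

def test_apply_discount():
    # Executes every line of apply_discount (100% line coverage),
    # but contains no assertion -- a wrong result would go unnoticed.
    apply_discount(100, 10)

test_apply_discount()  # "passes" even if the function were broken
```

This is why coverage alone says nothing about whether the tests would actually catch a defect.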
Mutation Coverage: Mutation testing is a practice in which we intentionally introduce bugs (‘mutants’) into the code and check whether the test cases catch them. This tells us whether the tests are of good quality.
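A minimal sketch of the idea, using a hand-made mutant rather than a real mutation-testing tool (tools such as PIT for Java or mutmut for Python automate the mutation step):

```python
def is_adult(age):
    return age >= 18

def mutant_is_adult(age):
    # Injected bug ("mutant"): >= replaced with >
    return age > 18

def weak_suite(fn):
    """A test suite with no boundary test."""
    return fn(20) and not fn(10)

def strong_suite(fn):
    """Adds the boundary case age == 18."""
    return fn(20) and not fn(10) and fn(18)

# Both suites pass on the original code...
assert weak_suite(is_adult) and strong_suite(is_adult)

# ...but only the strong suite "kills" the mutant, revealing that
# the weak suite never checked the boundary.
print("weak suite kills mutant:", not weak_suite(mutant_is_adult))      # False
print("strong suite kills mutant:", not strong_suite(mutant_is_adult))  # True
```

Mutation coverage is the fraction of injected mutants that the suite kills, which is a far more direct measure of test quality than line coverage.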
Build Failure Count: Since a build failure is treated as a negative event, many organizations measure the build failure count and try to reduce it over time. However, this creates a culture of blame and prevents people from taking risks. Also, red builds are not a bad thing – they are the return on investment of the Continuous Integration system.
Total Build Red Time: Instead of measuring the build failure count, teams can measure the total time for which the build was red in a day, week, or month and try to reduce that time progressively. Here, we focus on fixing rather than blaming. This motivates people to fix the build as soon as possible, thereby reducing development downtime.
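A small sketch of how this could be computed from a CI status log; the event format and numbers are illustrative, not from any specific CI tool:

```python
# Hypothetical build-status log: (timestamp_in_minutes, status) events,
# in chronological order. We sum the minutes the build stayed red.
events = [
    (0, "green"),
    (95, "red"),     # build breaks
    (110, "green"),  # fixed after 15 minutes
    (300, "red"),
    (305, "green"),  # fixed after 5 minutes
]

def total_red_minutes(events, end_of_period=480):
    red_since = None
    total = 0
    for t, status in events:
        if status == "red" and red_since is None:
            red_since = t
        elif status == "green" and red_since is not None:
            total += t - red_since
            red_since = None
    if red_since is not None:            # still red at end of the period
        total += end_of_period - red_since
    return total

print(total_red_minutes(events))  # -> 20
```

Note that two quick-to-fix failures score better here than one failure left red all afternoon, which is exactly the behavior the speakers want to encourage.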
Bugs per Developer: Organizations commonly measure defects per developer or defects per team, aiming to reduce the number over time. Like the Build Failure Count above, this creates a culture of blame and prevents people from collaborating towards a common goal, as one person’s victory becomes another person’s failure.
Support Case Density: For each feature they deliver, teams can wait for a defined period and measure how many support cases were created related to that feature. Dividing this number by the size of the feature in story points gives the support case density, which teams aim to reduce over time. This is a better metric for quality because it focuses on usability in addition to defect prevention.
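The calculation itself is simple; a sketch with illustrative numbers:

```python
def support_case_density(support_cases, story_points):
    """Support cases raised against a feature, per story point of size."""
    return support_cases / story_points

# A 13-point feature that generated 3 support cases during the
# observation window after release:
print(round(support_case_density(3, 13), 2))  # -> 0.23
```

Normalizing by feature size lets teams compare large and small features fairly, rather than simply penalizing whoever shipped the biggest feature.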
Here are some of the questions that were asked in the webinar.
- What programming languages do mutation testing tools support?
- Is there a leading indicator to prevent getting a bad Support case density?
- How is mutation testing different from negative testing?
Aritha has a team of qualified and committed professionals who have come together to build a solid workforce, diligently delivering technology solutions that help customers solve a variety of business problems.