Small Releases Increase Quality and Velocity
Successful executives fight for their customers, striving to deliver the highest value and the best bang for the buck. For software product companies, that frequently means squeezing as much value (translated as features) into each release to maximize its impact.
Although it may seem counterintuitive, companies that truly care about quality and taking good care of their customers have learned that small, frequent releases yield both higher quality software and higher velocity of the product delivery team. Let’s walk through a few examples to illustrate this important point.
As software releases grow in size, the probability of failure increases. In addition, because larger releases generally touch more individual components of a product, the impact of a failure also increases. Let's look first at the probability of an incident.
Three primary factors contribute to failures in new software:
- The number of components (physical or virtual) required to complete a service call
- The overall release size (number of lines of code)
- The depth and quality of testing
A feature addition is unlikely to significantly increase the number of components involved in satisfying a service call, so we'll set this factor aside for now.
Every line of code written or modified in any software change introduces a percentage chance of failure and increases the probability—albeit small—of an incident when the software is released. Modeling this can be done mathematically by making some assumptions to smooth the calculation.
Let’s say every line of code has a 0.1% chance of failing; that translates to a success rate of 99.9%, or 0.999. If a code release modifies ten lines of code, and every line fails at the same rate, we apply the number of lines of code as an exponent: 0.999^10 ≈ 99.0%.
With every line of code added, the development work requires more time to complete, and the chance of introducing a bug increases. At 100 lines of code (0.999^100), the combined success rate diminishes to just above 90%.
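The compounding effect above can be sketched in a few lines of Python. The 0.1% per-line failure rate and the independence of failures are the illustrative assumptions from the text, not a real-world model:

```python
def release_success_rate(lines_changed, per_line_success=0.999):
    """Probability a release ships without a defect, assuming each
    changed line fails independently at the same rate (an
    illustrative simplification from the text, not a real model)."""
    return per_line_success ** lines_changed

for lines in (10, 100, 1000):
    print(f"{lines:>5} lines changed -> {release_success_rate(lines):.1%} success rate")
```

Running the sketch shows the curve the text describes: roughly 99.0% at ten lines, about 90.5% at one hundred, and under 40% at a thousand.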
Let’s look at the third factor: testing. Of course, testing will reduce the probability of an incident in code being released to production, but thorough testing also requires time. The broader and deeper a release, the more services may require testing. Invariably, the more lines of code that must be tested, the more defects will be found and sent back to the engineer to correct before the software can be released.
Larger releases compound the time to complete the work and finish a release both in engineering and in testing.
Larger releases increase not only the probability of an incident but also its impact. The more code that has changed and the more features or components that are modified, the broader the impact of any attendant failure. More features in a release increase the potential blast radius of an issue, meaning more customers experience a disruption in the services you provide.
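To make the blast-radius point concrete, here is a hypothetical back-of-the-envelope comparison. The per-line failure rate comes from the earlier 0.1% example, but the customer-impact fractions (50% for a broad release, 5% for a narrow one) are invented purely for illustration:

```python
# Hypothetical comparison: one large release vs. ten small ones.
# Assumes each changed line fails independently at 0.1% (the rate
# used earlier in the text), and that the fraction of customers
# affected by an incident grows with the number of features touched.

PER_LINE_SUCCESS = 0.999

def expected_customers_impacted(lines_changed, affected_fraction_per_incident):
    """Expected fraction of customers disrupted by a release:
    probability of an incident times the blast radius if one occurs."""
    failure_prob = 1 - PER_LINE_SUCCESS ** lines_changed
    return failure_prob * affected_fraction_per_incident

# One 100-line release touching many features: assume 50% of customers hit.
big = expected_customers_impacted(100, 0.50)
# Ten 10-line releases, each touching one feature: assume 5% of customers hit.
small = 10 * expected_customers_impacted(10, 0.05)

print(f"large release:  {big:.3f} expected fraction of customers impacted")
print(f"small releases: {small:.3f} expected fraction of customers impacted")
```

Under these invented assumptions, the same total lines of code shipped as ten small releases produce roughly a tenth of the expected customer disruption, because each incident's blast radius is so much smaller.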
Incidents are measured in terms of how long it takes to identify them and restore full service. Three elements factor into this analysis:
- Time to detect (identify that something is not operating as intended) increases as size increases
- Time to isolate (identify where within the code or the environment the fault exists) similarly increases
- Time to resolve (identify and put into effect a resolution) also increases
In the worst case, an impacted customer serves as your alert that a problem exists; good software and environment monitoring can be tuned to alert your product teams proactively. Regardless of how you learn of an incident, detecting a problem is often just the tip of the iceberg.
Identifying where the problem originates takes time. The larger the release, the more features are modified, and the more complicated the analysis becomes. The more code that has been changed, the more time it takes to pinpoint the source—time that your customers are unable to use your product fully, or maybe at all.
When the source of the issue is clear, the solution often presents itself quite readily. However, with a larger release, with dependencies between features or services modified, corrective actions may be needed in multiple places. If data was impacted, it may need to be reset and reprocessed. Best practices in making code modifications can assist by ensuring that changes can be rolled back to a previous point of success.
Larger releases compound the time it takes to detect, isolate, and resolve any issues that result from the changes introduced.
Small Release Size Key Takeaways:
Smaller, more frequent releases result in higher-quality software with a lower probability of incidents. Fewer lines of code mean faster development and testing, a minimized blast radius of any incidents that do arise, and a faster time to resolve incidents because they’re easier to detect, isolate, and correct.
Smaller, more frequent releases also yield higher engineering velocity, allowing the team to get more done in less time, which translates into greater value in your product for your customers. With less code in flight, teams can focus on the next iteration instead of supporting a larger release that demands ongoing attention.
If you have questions about applying these principles in your business, invite us in. You can find us on the web at www.akfpartners.com. We would love to help.