Kevin, thanks for posting. Fantastic, thought provoking list!
The value of instrumenting the software dev workflow has certainly increased over time. Some things are mostly subjective and hard to measure, e.g. meeting load, effective documentation or AI value-add. However, today's code managers, static code analyzers, IDEs and bug tracking systems... all provide measurable statistics on a variety of things. Many of these statistics provide actionable information to help tune up the workflow to reduce latency, improve throughput and get ever closer to max performance.
There are a couple of things I might add to the list for some companies...
1) Compatibility
Best case, customers should be able to drop in a new release to replace an older version and keep going without issue. So we might measure the rate at which the team inadvertently breaks compatibility and triggers a spike in customer feedback.
2) Release Cadence
Customers will take new releases at different rates. Does the team's product release cadence meet the majority of customer expectations? Are we releasing too often and needlessly incurring the costs of turning the full crank? Or are we releasing too slowly to deliver the hot fixes that address most customer needs? (A rough sketch of how both of these might be measured is below.)
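For concreteness, here's a minimal sketch of how both could be tracked, assuming release dates can be exported and customer-reported incidents can be tagged when they trace back to a compatibility break. The data shapes, field names, and sample values are just placeholders, not any particular tool's schema:

```python
from datetime import date
from statistics import median

# Hypothetical exports: release dates, plus customer-reported incidents tagged
# when the root cause was an inadvertent compatibility break (placeholder data).
releases = [
    {"version": "2.3.0", "date": date(2024, 1, 15)},
    {"version": "2.4.0", "date": date(2024, 3, 1)},
    {"version": "2.5.0", "date": date(2024, 4, 20)},
]
incidents = [
    {"version": "2.4.0", "compat_break": True},
    {"version": "2.4.0", "compat_break": False},
    {"version": "2.5.0", "compat_break": True},
]

# 1) Compatibility: share of releases with at least one compatibility-break report.
broken_versions = {i["version"] for i in incidents if i["compat_break"]}
break_rate = len(broken_versions) / len(releases)

# 2) Release cadence: median days between consecutive releases.
dates = sorted(r["date"] for r in releases)
gaps = [(later - earlier).days for earlier, later in zip(dates, dates[1:])]
median_cadence_days = median(gaps)

print(f"compat-break rate: {break_rate:.0%}")
print(f"median days between releases: {median_cadence_days}")
```

The single numbers matter less than the trend: a break rate creeping up, or a cadence drifting away from what customers actually absorb, is the actionable signal.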
These are good ones to add for companies that do versioned releases!
Indeed, versioning can be a maze of twisty little passages, all alike. During my tenure on the Java platform, we were often shackled by multiple release trains, e.g. hotfix (security/escalations), maintenance, feature...
Minware has likely seen a wide assortment of customer versioning strategies for releases, packages, APIs, libraries and more. Would love to hear your thoughts on the pros/cons and best practices. It might even make a great topic for a future blog entry. ;)
Welcome to Substack, Kevin! (and thank you to Autumn Patterson for sharing this new newsletter on LinkedIn :)
Thanks for following!