Yesterday I had a chance to meet an entrepreneur, Brian York. He is the non-technical founder of Bliss.ai, a startup that measures software quality.
More specifically, it measures technical debt.
I had heard the term before, knew what it meant in theory, and had tried to understand it in practice as well, but it always fell into the "nice to know" bucket rather than the "must know" one.
After meeting with Brian, I don’t think I have changed my mind, but at least I know how to measure it now.
In simple terms, technical debt is what you accumulate when you have to code fast to deliver some capability and you take a few shortcuts: e.g. you skip documentation, rewrite functions that already exist instead of searching for and calling a library function, don't adhere to coding standards, etc.
These shortcuts add up over time and ultimately you have to set aside time to “clean up” or pay down the debt.
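To make the "rewriting functions that already exist" shortcut concrete, here is a small illustrative sketch (my own hypothetical example, not from Bliss or Brian). The first version re-implements an average by hand with no documentation; the second pays down that debt by reusing the standard library and documenting intent.

```python
# Quick-and-dirty version: undocumented, re-implements what the
# standard library already provides.
def avg(nums):
    total = 0
    for n in nums:
        total = total + n
    return total / len(nums)


# "Paid down" version: reuses statistics.mean and documents intent.
import statistics

def average_latency_ms(samples):
    """Return the mean of the latency samples, in milliseconds."""
    return statistics.mean(samples)
```

Both behave the same today, but the first copy is one more place a future bug fix has to find.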
Like financial debt, not all of it is bad. Sometimes, to deliver functionality (features) on time, you have to ship fast, and the result is "quick and dirty".
That comes back to bite you later: the software crashes, fails to scale, or just stops working altogether.
The problem is when it gets out of control and, a few weeks, months, or years later, you have to pay "interest" by taking time to re-architect, re-platform, or rebuild from scratch.
What I did hear from two other engineers I talked to yesterday was that it is interesting but not all that important to track: it is just one metric among many for measuring developer effectiveness.
Bliss, though, has many paying customers, who pay $60 per month on average to track their software. It integrates with code repositories such as GitHub and Bitbucket and tells you how well your teams and developers are doing by running your code through well-known static analysis frameworks.
The time you take out of the schedule to pay down technical debt typically doesn't produce anything customers or users will see, which can make it hard to justify.
Here is a framework for thinking about it, from AEquilibrium.
Technical debt is the invisible parts of your code that add negative value to your system.
How do you measure it? And why is it somewhat important?
Measurement today is done largely with static analyzers, as I mentioned: you can quickly see which developers are making commits, how many lines of code were checked in, and "how much debt was incurred" relative to those lines and commits.
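The "debt relative to lines and commits" idea can be sketched as a toy metric. This is my own illustrative example, not Bliss's actual formula: it assumes you have already collected, per commit, the lines added and the number of issues a static analyzer flagged.

```python
# Hypothetical per-commit data: lines added and static-analysis issues
# flagged in each commit (invented numbers, for illustration only).
commits = [
    {"author": "alice", "lines_added": 120, "issues": 3},
    {"author": "bob", "lines_added": 40, "issues": 6},
]

def debt_ratio(commit):
    """Issues introduced per 100 lines added: a crude debt-rate proxy."""
    if commit["lines_added"] == 0:
        return 0.0
    return 100.0 * commit["issues"] / commit["lines_added"]

for c in commits:
    print(f"{c['author']}: {debt_ratio(c):.1f} issues per 100 lines")
```

Even this toy version shows why normalizing matters: the smaller commit here carries the higher debt rate, which raw issue counts alone would hide.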
The part that's interesting is the number of companies I know that run "hackathons" each quarter (over a weekend) to pare down technical debt.
But if I were a developer asked to come do weekend "work" at a hackathon that I was going to do gradually over the week anyway, I am not sure how I'd feel about those extra 30-50 hours per quarter.
Either way, I thought it was interesting enough to learn about and share – not the debt part, but the measurement part.
Do you measure technical debt in your company yet? And do you track and reward engineers based on that measure?
As measurement and reward systems get more sophisticated over time, and all our jobs become more outcome-based, I can easily see ways to quantify the "10X developer", whether myth or fact.
All measurement is possibly good, but measuring things that are irrelevant creates “metrics debt” in the short term.