The Importance of Working Software

As we gain experience creating software, we learn where the strategic points of the process (the "high ground") lie. This article shines a light on one of those strategic points so that you can deliver more customer value in less time and preserve trust in your software brand.

Time Efficiency Is Everything

In software development, our equipment costs are fixed and insignificant. Our main recurring cost is developer salaries, but even that is often insignificant compared to the value of the software we're creating. What remains as a significant cost is time itself. The time it takes to discover customer value, and to write software that provides it, is our critical resource. The good news is that time is free and always flowing our way. The bad news is that its rate is fixed and we can't increase it on demand. Our best recourse for getting more out of this critical resource is to identify wasted time and minimize it.

Expectation Gaps Waste Time

The software we're making today is incredibly complex. To create it, a developer must simultaneously construct an equally complex model of it in her head. Whenever she is asked to fix a problem or add a new feature, she consults her mental model of the software before diving in to make a change. If her model matches the software well, this process is time efficient. If there's a divergence, she instead spends her time asking, "Why isn't this working (as I expect it to)?" What percentage of your software development time is spent asking this question? Any time spent asking it is taken away from the more valuable question, "How can we discover more value for the customer?"

The potential for divergence increases when another programmer is expected to work on the same software. If he wasn't the one who created it, he must first build up a mental model to match it before he can make any meaningful changes. Building that model can be incredibly time consuming, and the effort is often underestimated by managers who can see the code but aren't as aware of the invisible mental models that go along with it. You can't hand 100 files of source code from one developer to another and expect the second developer to just read them and start working.

The potential for divergence is even greater when the next developer to work on the code isn't on the same team. Expectation gaps can be mitigated somewhat by inter-team communication, but external developers rarely have that kind of access to the original developer of the code.

How can we minimize the expectation gap?

We're Not Building Software, We're Discovering It

The key to minimizing the expectation gap comes from the realization that we're not building software so much as we're discovering it. Think about the last software project you worked on. It may have taken you months to write originally, but if you had to start from scratch now and someone told you exactly which letters to type, how long would it take to write it again? Probably no more than a day. Typing code doesn't actually take that long. It's figuring out what code to type that takes the time.

Once you see software development as a discovery process, you realize the basic dance step is that of conducting experiments. Every time we add a new feature or fix a bug, we're conducting an experiment. Write the code, maybe compile it; did it work? When you ask that question you're looking at experimental results. What's crucial when conducting experiments? Having a control.
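
To make this concrete, here's a minimal sketch of an experiment with a control, using Python and pytest (the file and function names are hypothetical):

    # pricing.py -- the code being experimented on
    def parse_price(text: str) -> float:
        """Convert a string like "$1,299.00" to a float."""
        return float(text.lstrip("$").replace(",", ""))

    # test_pricing.py -- the control: passing tests pin down known-good behavior
    from pricing import parse_price

    def test_simple_price():
        assert parse_price("$19.99") == 19.99

    def test_thousands_separator():
        assert parse_price("$1,299.00") == 1299.00

Every time you change parse_price, rerunning the suite tells you immediately whether your experiment worked or whether you broke behavior that used to be correct.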

I find it helpful to think of rock climbing as an analogy. When climbing a rock wall, careful climbers take time to hammer metal spikes into the wall as they go. After hammering a spike, the climber tries a new route forward (he conducts an experiment). If the experiment fails, he falls backward but only as far as his last spike. The spike limits how much time he wastes in his route discovery process.

Automated Tests = Climbing Spikes

What is the equivalent of a climbing spike in the software world? The best analogue we have is a suite of automated tests.

Software is considered working when:

  1. there's no expectation gap: it's functioning as the customer expects it to
  2. the automated tests are a good approximation of customer value
  3. all of the automated tests pass

Points #1 and #2 are difficult to measure, so point #3 is our go-to indicator. It's not always an accurate indicator of working software but it's usually good enough. It's also easy to improve incrementally—if all of the tests pass but the software doesn't meet customer expectations, then more tests can be added (within cost-effective constraints). I'm a huge fan of using Test Driven Development to help align tests with customer value.
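
As a rough sketch of what that looks like (the feature and names below are invented for illustration), Test Driven Development starts from the customer's expectation written as a failing test, then adds just enough code to make it pass:

    # test_discounts.py -- written first, encoding the customer's expectation
    from discounts import apply_discount

    def test_ten_percent_off_orders_over_100():
        # Customer expectation: orders over $100 get 10% off
        assert apply_discount(150.00) == 135.00

    def test_no_discount_at_or_below_threshold():
        assert apply_discount(99.99) == 99.99

    # discounts.py -- written second, just enough to turn the tests green
    def apply_discount(total: float) -> float:
        return round(total * 0.9, 2) if total > 100.00 else total

Because the tests came straight from the customer's expectation, a green suite means something: it's evidence that the expectation gap is closed, not just that the code runs.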

Seize the High Ground and Keep It

These concepts are already familiar to most developers. How we apply them—the conditions we choose to tolerate and those we don't—makes all the difference in driving down waste. Many software developers I've met, and the industry as a whole, have figured out how to make software and are now honing their understanding of the value/waste distinction developed at Toyota Motor Corporation (which spawned lean manufacturing and, later, Agile software development). This article may be an introduction or a review for you, depending on where you are in that learning process.

Every Release Should Be Working

Never publish a version of your software that isn't working 100%. Just don't do it. This is probably the hardest guideline for developers to follow because we're all drawn by the siren call of more features. Adding features is fun; it feels good, and we can do it with the freedom of not having to look at the big picture. Making sure every significant feature is represented by automated tests, and that all the tests pass, is a more subtle emotional reward that comes with experience. It feels more like the first time you notice "Hey, not all varieties of cabernet sauvignon taste the same and I like this one!" rather than "Woo hoo! I'm buzzed on alcohol!" Making sure a release is working is not cowboy-style "look what I made it do!"; it's a careful process of dotting i's and crossing t's.

Why is this so important? Because if you ship a nonworking version of your software, you are handing the world an expectation gap that will waste the time of each user, multiplied by the number of users who try it. Will people tolerate this waste and figure out how to get value from your software anyway? Sometimes they will, if your software is crazy-cool or new, but every time you do this you erode trust in your brand.

Not long ago I was excited to use an open source library in my project. It seemed like a perfect fit. There was extensive documentation, the code looked good, and there appeared to be users. Excited, I spent three whole days (3 x 8 = 24 hours) trying to get a simple use case to work. I submitted bug reports, I re-read the documentation, I scanned the existing bug reports, and I even read the source code. Ultimately I realized that despite appearances, the software didn't work. I had fallen into an expectation gap and lost extremely valuable time. I switched to a different library that promised less but functioned exactly as I expected it to, and I never looked back.

A chart of the value delivered by software to a customer should be at least roughly linear over time—the more time you spend writing the software, the more value you should be delivering to the customer.

If your code isn't working, you may still have that chart in your mind as you're writing it, and it may be providing some kind of value to you, but the actual value delivered to the customer is zero.

If you want to make a bleeding edge version of your software, with unstable and incomplete features, available to people, you can get away with it as long as there is also a working version available. Make sure that the master branch is always 100% working, no exceptions, and you can make as many unstable branches available as you want. This works only if your stable version is current enough for people to use. If it's too far behind your bleeding edge release, and people are forced to battle the expectation gap to get the features they need, then it doesn't count.

Does this feel like an unattainable goal? Are there always just a couple more features you need to complete before you can focus on stability? If so, you're not alone. The feeling is pervasive in the software industry. Microsoft project managers are famous for declaring loudly to their developers, "Shipping is a feature!" to overcome this common tendency. The solution is to deliver less, dramatically less, than you habitually do now, and then begin working on the next release.

Working Releases Encourage Bug Reports

Bug reports (and feature requests) are precious feedback from your customers. They can be instrumental in the quest to discover customer value.

It takes effort to write a bug report, and your users will put in the effort only if they perceive that they're getting enough value from the software to justify it. Imagine your software is a car. If it drives smoothly and brings its users to new places, they will be motivated to let you know when a tail light goes out or to ask for improved fuel efficiency. The fact that the car mostly works makes it easy to isolate a defect or feature request and describe it simply. If, on the other hand, the car can't leave the driveway, there's no incentive for users to write you a bug report. They'll assume it's obvious that the car isn't working and not take the time to let you know.

Make Sure Software Is Working Before Handing It Off to Another Developer

The best way to minimize the time necessary for a second developer to construct a mental model of your software is for it to be working and have an automated test suite. Remember, to do useful work the second developer will need to conduct experiments against it and therefore will need a control. If all of the tests are passing then he has that control and he can perform an isolated experiment using 10% of the code without needing to construct a mental model for the other 90%. This is a huge win! Most developers receiving software from someone else spend most of their time a) constructing a mental model of it and b) discovering why it's not working as expected. If you give them working software with a solid automated test suite, they can start immediately spending their time building features to deliver more customer value instead.
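
In concrete terms (the commands and names here are illustrative), the handoff experiment looks something like this:

    # Step 1: establish the control -- run the inherited suite and confirm
    # it's all green before touching anything:
    #
    #     $ pytest      # every test passes: this is your climbing spike
    #
    # Step 2: pin down the new expectation with a test against the one
    # module you intend to change (report.py and its names are hypothetical):
    from report import format_header

    def test_header_includes_fiscal_year():
        assert "FY2024" in format_header(year=2024)

    # Step 3: change only report.format_header until the whole suite is
    # green again. The untouched tests guarantee the other 90% of the
    # code still behaves exactly as before.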

In the worst case situation, when a professional developer inherits software that's not working and has no automated test suite, she'll often rebuild the entire project from scratch. I've seen this time and again, and it can actually be the smartest thing to do because (counter-intuitively) it requires less time than grokking the non-working software. Talk about waste!

Conclusion

There are many different priorities to satisfy when writing code. In open source projects in particular, the motivation for development may be internally driven, including the joy of coding. When developers learn to orient their attention externally, making working code a priority for customers, a synergistic feedback loop is created. Customers are able to use the software and report requests for improvements. The developer gets this precious feedback and increases the value of the software, which attracts more customers. This feedback loop is what makes the difference between projects that rocket forward in a short period of time (Docker, Node.js, etc.) and those that plod along for years without much evolution. Make your project a rocket. 🚀
