When I was handed the keys to the engineering organization at my previous company, I did what I knew: I jumped in headfirst and focused on the architecture and code. And I thought I was doing a great job. We successfully scaled the platform to handle a massive influx of traffic, delivered features (mostly) on time, and had several extremely satisfied enterprise customers. But there was always a sense of misalignment and distrust in our executive meetings. We struggled to agree on strategy and priorities. Requirements were ambiguous and couldn’t be clarified. Delivery estimates were padded and kept hidden rather than set collaboratively. We were successful, but something was preventing us from becoming a well-oiled machine.
Turns out, there was an entire other half of my job that I had been neglecting.
What I didn’t realize at the time was that while we were all speaking English, we were really speaking different languages. The business folks all knew that they should probably trust me, because my team and I had proven that we could ship features and scale the system. But there were a lot of blank stares whenever we talked about system architecture and story points. They never really understood or appreciated what engineering was doing, or the incredible skill and thought that went into it. Likewise, engineers never really got a sense of how what they were building contributed to the growth of the business. There was a major disconnect, and it fostered the “business vs. engineering” tribalism that I’ve seen at too many dysfunctional companies.
The trick, as I’ve learned since then, is to find common ground. So what’s the unifying factor across all business units? Value. A VP of Sales can easily understand and appreciate a feat of engineering if the value that it provides is clear and tangible. The real challenge is: how does one use data to predict and measure the value of a new feature? A bug fix? Or, God forbid, what about a refactor or an infrastructure upgrade?
Drilling down to value
With the proper research and thought process, you can back into a fairly accurate value for almost any engineering initiative; this lets you speak about even deep back-end developments in terms of money, and makes it much easier to communicate with the non-technical side of the house.
The value of features is relatively easy to quantify, since features typically have a direct impact on your customers and thus your revenue:
- How many more customers is it estimated to bring in? → What is the average revenue per customer?
- How much longer will customers spend on the application? → How much does the likelihood of spending increase as a factor of time on the site?
- What effect will it have on NPS? → How does our average NPS affect revenue?
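The first chain above is simple multiplication once you have the inputs. Here's a back-of-envelope sketch; every number is a hypothetical placeholder, not real data:

```python
# Hypothetical feature-value estimate: new customers times revenue per customer.
# All numbers below are illustrative assumptions, not real figures.

estimated_new_customers = 50        # how many customers the feature might bring in
avg_revenue_per_customer = 1_200    # average annual revenue per customer, in dollars

feature_value = estimated_new_customers * avg_revenue_per_customer
print(f"Estimated annual value of the feature: ${feature_value:,}")
```

The point isn't precision; it's that the estimate is now a number a VP of Sales can argue with.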
For bug fixes and refactors, the value is harder to quantify. There are usually more than two degrees of separation between the task and the value. I like to think of it in terms of opportunity cost:
- What’s the cost of not fixing the bug immediately? → How many customers can’t use the site? → How much revenue will we lose?
- What will happen if we don’t refactor the payment system now? → How many purchases will we be unable to process if the payment system crashes at scale?
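The opportunity-cost framing for a bug works the same way: multiply the blast radius by revenue per customer and by how long the bug will linger. A minimal sketch, with every input a made-up assumption:

```python
# Hypothetical opportunity cost of NOT fixing a bug immediately.
# All inputs are illustrative assumptions.

affected_customers = 1_200             # customers who can't use the site
avg_revenue_per_customer_day = 4.50    # dollars of revenue per customer per day
expected_days_until_fix = 3            # how long the bug sits if deprioritized

revenue_at_risk = affected_customers * avg_revenue_per_customer_day * expected_days_until_fix
print(f"Estimated revenue at risk: ${revenue_at_risk:,.2f}")
```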
SRE/DevOps work is the hardest to value, but it’s still doable:
- How much will the build time decrease? → How much more will engineers be able to produce in a given sprint? → How many more features can we build this quarter? → How much revenue will the additional feature capacity bring?
- What will happen if we don’t have multi-AZ database redundancy? → What’s the likelihood of an AZ failure? → How much revenue will we lose if the site is down for an hour while we spin up a database in a new AZ?
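The redundancy question is a classic expected-value calculation: probability of failure times the cost of the outage. A sketch, again with purely illustrative numbers:

```python
# Hypothetical expected annual loss from skipping multi-AZ database redundancy.
# Every number here is an assumption for illustration.

p_az_failure_per_year = 0.05              # assumed annual chance of an AZ outage
recovery_hours = 1.0                      # time to spin up a database in a new AZ
revenue_per_hour_of_downtime = 20_000     # assumed revenue lost per hour down

expected_annual_loss = p_az_failure_per_year * recovery_hours * revenue_per_hour_of_downtime
print(f"Expected annual downtime cost: ${expected_annual_loss:,.0f}")
# Compare that figure against the yearly cost of running the redundant database.
```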
Now, as you can tell, doing this with real numbers requires a lot of data. And ensuring the right data is accessible and structured in a usable way at the moment you need to make a decision is hard. However, the majority of the value from this practice comes from the thought process, rather than the actual numbers behind it. Your COO might not understand the complexity of load testing, but will definitely understand that downtime due to scaling issues is expensive and preventable. An account executive might not understand a move from MySQL to MongoDB, but will definitely understand the benefit of being able to onboard new enterprise customers faster with a more flexible data model.
Diligently following this process not only helps with communication and alignment across the company, but naturally makes prioritization easier and helps engineers hold themselves and each other accountable.
The priority of a given initiative can now become a function of its expected value relative to the complexity of development, instead of being driven by gut-feeling, hand-waving, and industry expertise. It’s thus much easier to explain why you should refactor the authentication system before you build the Salesforce integration. People may disagree, but they’ll disagree with the value, not with your professional opinion. This is important, as it keeps things from getting personal.
It’s also a lot easier to avoid biases in decision making. I’ve been there: it’s hard to resist refactoring that spaghetti-code service you wrote four years ago that still haunts you to this day. But if you do your research and put a value on the refactor, suddenly you may feel better about pushing it off until next year. If it works and is covered by your monitoring infrastructure, then the value of a refactor probably isn’t so high until you have a valuable feature on the books that demands it.
Finally, making the value of an engineering initiative clear to everyone helps the engineers grasp just how important what they are doing is to the company goals. Even the best engineers can be motivated to perform a typically boring task (like getting the damn front end to work on IE 10) if they understand and agree with the benefit. An excited engineering team is a high-performing one.
We’ve recently fully embraced this methodology at Fincura, and have seen very positive results so far. Our product managers (for features and bugs) and architects (for infrastructure and system improvements) do their research and estimate the relative business value (using a 1–5 scale) of an initiative. Engineers poke holes in the value estimate and apply their own estimate of development complexity using story points. And then we collaborate to break the initiative down into shippable components, optimizing for delivering the most business value with the least development complexity.
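The ranking step of that process can be sketched in a few lines, assuming the 1–5 business-value scale and story-point complexity estimates described above. The initiative names and numbers here are hypothetical:

```python
# A minimal sketch of value-over-complexity prioritization.
# Names, value scores (1-5), and story points are all hypothetical examples.

initiatives = [
    {"name": "Salesforce integration", "value": 4, "points": 13},
    {"name": "Auth system refactor",   "value": 5, "points": 8},
    {"name": "IE 10 front-end fix",    "value": 2, "points": 5},
]

# Initiatives delivering the most business value per unit of complexity rank first.
ranked = sorted(initiatives, key=lambda i: i["value"] / i["points"], reverse=True)
for item in ranked:
    print(f"{item['name']}: {item['value'] / item['points']:.2f} value per point")
```

A plain ratio like this is deliberately crude; in practice the breakdown into shippable components matters more than the exact ordering formula.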
Of course, there are always dependencies, and milestone dates that can’t move, but this practice has facilitated a sense of mutual trust and accountability between business and engineering that I’ve never seen before at a software company. Everyone is aligned on priority and we can avoid costly distractions as we work toward our lofty goals.
In future posts I’ll be exploring more about how we use data to make decisions and get alignment as we strive to be that fabled well-oiled machine.