Avoiding the IT Efficiency Trap

I recently stumbled across an article by David Linthicum at GigaOm Pro talking about IT efficiency for cloud computing. In it, David discusses some of the findings from the Public Cloud Cost Benchmark Study conducted by The Big Data Group, pointing out where enterprises aren’t being as efficient as they could be with their cloud environments. Now, I’m an engineer, and nothing gets my juices flowing more than the thought of tightening down the screws on something like cloud computing and delivering better IT efficiency. But there is a trap lurking here, and we need to make sure we avoid it.

After highlighting the findings, David writes:

What this study shows, and what I find in my travels, is that cloud computing is efficient as a concept, but those who use cloud computing are not. Or, better put, those who design cloud computing systems make some bad calls, which make the cloud-based system less cost-effective than if it were deployed in a traditional data center.

The Problems with Efficiency

First off, it’s important to note that everything David said is true. But there are two problems:

  1. David’s statement is true of every system. Nothing is fully optimized, ever. We always make “bad calls” at some level, even when we’re being careful. We can always squeeze out more inefficiency.
  2. David’s statement makes assumptions about the benchmark for optimization — the cost of deployment in a traditional data center. But is that what we’re actually trying to achieve? We need to make sure we’re focused on the right metrics before we optimize; optimizing the wrong metric can actually be counter-productive.

To expand on the first point, let’s remember that there are two primary benefits to cloud computing — agility and cost — and they work against each other. We can always squeeze some more cost out of a system if only we spend more time working on the problem. But spending more time on the problem results in project delays and sometimes in higher complexity. At some point, the cost of optimization outweighs the loss of efficiency and we need to decide whether to “fish or cut bait.” What may be seen as a “bad call” later might actually have been a conscious choice to move on and focus on other things.

But the real trap is around the focus of our optimization efforts. Let me explain this with a little story.

A long time ago, in a galaxy far, far away, I used to work as a product manager for a network equipment startup. We had the good fortune of getting purchased by a much larger network equipment startup. As we got integrated into the larger company, a bunch of manufacturing engineers came to our new division and said, “We’ve taken a look at the bill of materials for your product and we’ve concluded that we could save $5 million over five years if we modified your designs to use these other components which are cheaper.”

I listened to the manufacturing team and then said, “Sorry, no way.”

“What?!” they sputtered. “Just by changing some components, we could drop $5 million in savings to the company’s bottom line! And that’s without any additional sales effort.”

“Yes,” I said. “I don’t doubt your calculations, but they assume a short design time and a long production run, neither of which may hold true. The problem is that if you touch any single component on that board, I’m going to have to devote design and testing resources to that effort. Even though this should be a ‘slam dunk’ project, we could run into problems and it could take us six months to get it into production. I need those resources engaged on our next product design. The product should have a useful lifetime of just two years before we introduce the next generation product, which should then represent most of the volume from that point forward. The new product line will make far more than that $5 million in the first quarter after its introduction, so if we slip at all, we’ll be even worse off. We’ll certainly consider your recommendations during the next design iteration so we can get smarter, but we aren’t going to go back to modify the existing design.”

The lesson here is that the manufacturing engineers were focused on manufacturing cost only. It turned out that they had a group goal to lower this overall metric and it was tied to a bonus. Was that a bad goal? No. They were just trying to be efficient. But as a product manager, I was focused on the overall competitiveness, revenue, and market share of the entire product line. Saving some money on the existing design was noble, but it didn’t do anything to make our product more attractive to customers. Sure, if costs were lower, I could theoretically lower my prices, but that would then vaporize the $5 million savings the manufacturing folks were holding out as the primary benefit for the effort, and we were already quite price-competitive. Customers were demanding features we didn’t have, not lower pricing, and those features could only be delivered by the next generation product.

In fact, the product line’s revenue more than doubled shortly thereafter and grew to more than $0.5 billion per year. The $5 million we might have saved by further optimizing the old product was swamped by lowering our execution risk and delivering the new product in time to hit a good market window.
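The trade-off in that story can be reduced to a back-of-the-envelope calculation. Here is a minimal sketch; the $5 million savings estimate and the roughly two-year remaining lifetime come from the story, while the one-quarter slip and the first-quarter profit figure used below are hypothetical illustrations:

```python
# Back-of-the-envelope: component redesign savings vs. cost of a slip.
# Only the $5M/5yr savings estimate and ~2-year remaining lifetime come
# from the story; the other figures are hypothetical.

savings_over_5_years = 5_000_000        # manufacturing team's estimate
remaining_lifetime_years = 2            # old design retired after ~2 years
# Pro-rate: most of the projected savings never materialize because the
# production run is cut short by the next-generation product.
realized_savings = savings_over_5_years * (remaining_lifetime_years / 5)

new_product_profit_per_quarter = 5_000_000  # ">$5M in the first quarter" (lower bound)
slip_quarters = 1                           # hypothetical: redesign slips launch one quarter
cost_of_slip = new_product_profit_per_quarter * slip_quarters

print(f"Realized savings from redesign: ${realized_savings:,.0f}")
print(f"Cost of a one-quarter slip:     ${cost_of_slip:,.0f}")
print("Redesign pays off" if realized_savings > cost_of_slip else "Skip the redesign")
```

Even with these generous assumptions, the pro-rated savings are dwarfed by the cost of slipping the next product by a single quarter, which is exactly the call made in the story.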

The IT Efficiency Trap

If we’re not careful, we can fall into the same traps with IT efficiency. There are a couple of ways this trap can manifest itself:

  1. Delays — Optimizing IT efficiency takes time. It might feel good to save money in our IT deployment, whether for cloud computing or something more traditional, but what did that delay cost your company in terms of lost revenue, lost market share, or lost customer engagement?
  2. Risk — Optimization also takes resources, and they tend to be your smartest people. The optimization process is typically one of diminishing returns; it’s easy to go after the obvious, low-hanging fruit, but it gets progressively harder. If your smartest people are tied up squeezing out ever-decreasing gains, they aren’t working on the next big thing, and that increases the execution risk of those projects, potentially leading to failure and cancellation.
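The diminishing-returns point can be made concrete with a simple model. This is a hypothetical sketch, not data from the benchmark study: assume each optimization pass costs a fixed amount of engineering effort but recovers only a shrinking share of the remaining waste, and stop when the marginal gain no longer covers the effort:

```python
# Hypothetical diminishing-returns model for optimization effort.
# All numbers are illustrative assumptions, not from the article.

annual_waste = 1_000_000   # inefficiency left in the deployment ($/yr)
cost_per_pass = 120_000    # engineering cost of one optimization pass
recovery_rate = 0.5        # each pass removes half the remaining waste

passes = 0
while True:
    gain = annual_waste * recovery_rate  # savings this pass would deliver
    if gain <= cost_per_pass:            # marginal gain no longer covers effort
        break
    annual_waste -= gain
    passes += 1

print(f"Worth running {passes} passes; remaining waste ${annual_waste:,.0f}/yr")
```

Under these assumptions it pays to run only the first few passes; eliminating the residual waste would cost more than it returns, which is the point at which those engineers are better deployed on the next big thing.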

Does this mean we shouldn’t care about IT efficiency? No, not at all. But we should always keep in mind that business agility is a major goal for the enterprise as well, allowing business units to hit fleeting market opportunities. Further, as important as IT is, the IT department will almost never be responsible for market leadership through cost cutting and IT efficiency. If IT’s goal is to help make the business more money, we need to hold business goals front and center. Then, we need to deliver the most efficient implementation we can during the time allowed. When we’re done, there will be inefficiency in the system; I can guarantee it. But we’ll have helped the corporation execute on its overall goals and we can always go back and re-engineer capabilities later if we determine that it’s really worth it.

Note that nothing in David’s article speaks against anything I just said, or vice versa. Make sure you read David’s article and implement those ideas as soon as possible. Some of them are obvious, low-hanging fruit that you should just go do. But make sure you don’t fall into the IT efficiency trap and get enamored with efficiency to the exclusion of all else. Efficiency is great, but your most important goal needs to be moving the business forward and meeting business market objectives. If you can do that and be efficient, all the better.